I described the Breath I/O project earlier, in which lungs will be used as vessels for video and sound.
I've been wondering whether it would be possible to annotate the video with emotional content, so that when it is mapped onto the lungs the breathing style could vary to match or enhance the emotion. Since we are not planning on using dynamic video at this point, it may be feasible and desirable to manually annotate the video with a state parameter corresponding to the type of breathing that would be appropriate. From there the lungs could alter their breathing and blend seamlessly between states.
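To make the idea concrete, here is a minimal sketch of what that state parameter and blending might look like. Everything here is invented for illustration: the state names, the (rate, depth) parameters, and the blend window are all assumptions, not anything from an existing tool.

```python
# Hypothetical sketch: a manually annotated timeline of breathing states,
# with linear blending between them. States and parameters are invented.

# Each breathing state maps to (breaths_per_minute, depth in 0..1).
STATES = {
    "calm":    (8.0, 0.9),
    "tense":   (20.0, 0.4),
    "excited": (16.0, 0.7),
}

# Annotations: (time_in_seconds, state) pairs, sorted by time.
ANNOTATIONS = [(0.0, "calm"), (30.0, "tense"), (75.0, "excited")]

def breathing_at(t, annotations=ANNOTATIONS, blend=5.0):
    """Return (rate, depth) at time t, easing toward each new state
    over `blend` seconds after its annotation point."""
    rate, depth = STATES[annotations[0][1]]
    for when, state in annotations:
        if t < when:
            break
        target_rate, target_depth = STATES[state]
        # Fraction of the blend window elapsed since this annotation.
        f = min(1.0, (t - when) / blend) if blend > 0 else 1.0
        rate = rate + (target_rate - rate) * f
        depth = depth + (target_depth - depth) * f
    return rate, depth
```

The lungs would then sample `breathing_at` every frame and drive their inhale/exhale cycle from the result, so a cut from a calm scene to a tense one ramps the breathing over a few seconds rather than snapping.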
A quick search later, I found the specs for the MPEG-7 video standard. It allows for just the kind of annotation we would need. I'm not sure what the status of MPEG-7 is, or whether any of the applications we are currently using (Virtools, Touch Designer, Field) will even play it. It's been around for a while, so perhaps it's more integrated than I think. Regardless, we could borrow some of the concepts in the specification and roll our own little annotated video player.
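"Rolling our own" could be quite small, since MPEG-7 descriptions are XML anyway. Here is a sketch of a reader for an MPEG-7-inspired annotation file; note that the element and attribute names below are our own invention for illustration, not the real MPEG-7 schema.

```python
# Minimal annotated-video reader sketch: segments of the video are
# tagged with a breathing state in a homemade, MPEG-7-inspired XML file.
import xml.etree.ElementTree as ET

SAMPLE = """
<video>
  <segment start="0.0" end="30.0" state="calm"/>
  <segment start="30.0" end="75.0" state="tense"/>
</video>
"""

def load_segments(xml_text):
    """Parse (start, end, state) triples from the annotation XML."""
    root = ET.fromstring(xml_text)
    return [(float(s.get("start")), float(s.get("end")), s.get("state"))
            for s in root.findall("segment")]

def state_at(segments, t):
    """Return the annotated state covering time t, or None."""
    for start, end, state in segments:
        if start <= t < end:
            return state
    return None
```

A player would just look up `state_at` for the current playback time and hand the state to whatever is driving the lungs.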
The more I think about it, the more interesting uses I can see for the annotation standard. Obviously many people have been thinking about this for a while on the standards side, but it's interesting to me that it hasn't hit the mainstream yet, in terms of all the possible uses and proper tools to support them. The Wikipedia entry contains some examples of what people have done with the standard so far. The IBM video annotation tool VideoAnnEx seems like it might be interesting for us. There is also a Java library for extracting annotations that we could use in Field.
There may be more appropriate standards or ways to do this; I'm not sure. It made me happy to see MPEG-7, and it seems like a good starting point for making enquiries about the field.
A song for this post.