UPDATE: Guest lecturing for a media studies class, I respond to questions relating to blogging and lifecasting (video and slides).
Last month, I gave a few of my close buddies an inside look at my sick-brain motivation for using wearable video cameras. You’ve seen me with them before, heard my “social cyborg” explanations, but it’s about time you got a demo, and the real story behind it.
If you just want me to blow your mind, watch the frame-by-frame taggable/comment-able video above. If you can’t carry on living without understanding why or how this whole thing works, then read on…
As the Rambling Librarian would recall, I didn’t look too hawt when I first donned the wearable cam at the WebSG meetup #2. Since then I’ve tried various slingbag/backpack housings, and over time, I think I’ve improved my karma with it. On the left is ver. 1 of my wearable cam setup (head-mounted), then ver. 3 at Nexus 2007 (heavy shoulder-bag mount). Ver. 2 was mounted on a medium-sized Incase slingbag, which was quite ideal for mobility, but it was ver. 4 that took the cake.
Introducing the sousveillance backpack…
This consisted of mountings on my slim Ben Sherman backpack, which allowed for modular add-ons, such as my Archos 704 wifi DVR, and a wide-angle Logitech Notebook Pro webcam connected to the Sony UX UMPC for lifecasting (live video feed). With cameras mounted on both shoulders, more power was needed for sustained recordings, so a Tekkeon particle accelerator portable battery pack was added (the manufacturer is sending me more adapters!). My most recent additions include Elecom’s squid-like USB hub, as well as PCI’s High-Gain USB Wifi Antenna for more consistent network bandwidth. Friends have suggested a kinetic-powered system since I’d be moving around a lot, but I might more realistically add a solar panel instead.
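The power budget behind the battery pack is simple division: pack capacity over the combined draw of everything hanging off it. A back-of-envelope sketch (all wattage and capacity figures here are hypothetical guesses for illustration, not measured specs of the actual Archos/Sony/Tekkeon gear):

```python
# Rough runtime estimate for a multi-device wearable rig.
# Figures are hypothetical placeholders, not real device specs.

def runtime_hours(battery_wh: float, draws_w: list) -> float:
    """Hours of sustained recording a pack can power."""
    return battery_wh / sum(draws_w)

pack_wh = 58.0             # hypothetical external pack capacity (Wh)
loads = [7.0, 12.0, 1.0]   # guesses: DVR, UMPC, USB webcam (W)
print(round(runtime_hours(pack_wh, loads), 1))  # → 2.9 hours
```

Swap in your own measured numbers and the same division tells you whether a day of lifecasting needs one pack or three.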
On Active Camouflage…
Now in its fourth iteration, the wearable camera sits flush with my body profile, making it less threatening to my subjects. It’s now mounted discreetly on my shoulder, and only five people noticed it over a month of use. I’ve used it everywhere: on trains, on buses, walking the streets, in conversations with strangers. If you study how active camouflage works, one of the keystones is not breaking your shape’s profile. Instead of the aggressive shooting posture most cameramen adopt, I’m afforded the ability to shoot passively, maintaining natural body movement while recording.
Now I didn’t want to go public with this earlier because the online service wasn’t ready for prime time. What you’ve seen so far is the hardware… there’s the software part which is even more important, since I’d end up with a ton of video footage that’s unsearchable, thus unusable.
This is a revolutionary video service I’ve been using for a while now, but it had the downside of not streaming properly in Singapore (or anywhere far from the States). Initial tests with my media socialist buddies revealed a performance index of 4 seconds of video, followed by a minute of buffering. Rinse and repeat as Lucian illustrated.
More recently, this has changed. I (and a few others) have been begging Viddler’s developers to “fix” this by offering the typical YouTube-style progressive download buffering instead of the protected streaming buffering Viddler was promoting (which prevents downloading of the video). Since they’ve recently added the progressive option, the above video becomes so much more viewable and usable. If your bandwidth isn’t up to par, you can simply pause, grab coffee and come back to watch silky-smooth video.
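The “pause, grab coffee” trick works because with progressive download the file keeps arriving while you wait, so once enough is buffered the rest lands before the playhead needs it. A toy model of the minimum wait (real players buffer adaptively, and the clip numbers below are made up):

```python
# Minimum startup delay for stall-free playback of a progressively
# downloaded clip: wait until the remaining download will finish
# no later than the end of playback. Numbers are hypothetical.

def min_startup_delay(size_mb: float, duration_s: float,
                      bandwidth_mbps: float) -> float:
    """Seconds to pause before pressing play to avoid rebuffering."""
    download_s = size_mb * 8 / bandwidth_mbps  # total transfer time
    return max(0.0, download_s - duration_s)

# a hypothetical 40 MB, 5-minute clip over a 1 Mbit/s link
print(min_startup_delay(40, 300, 1.0))  # → 20.0 seconds of coffee
```

If the link is faster than the clip’s bitrate, the delay is zero and you can hit play immediately; that was exactly what the old 4-seconds-then-buffer streaming behaviour couldn’t offer.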
There’s a killer feature about Viddler though. If you haven’t noticed, Viddler adds something no other video sharing service has: “hyperlinkable + taggable + comment-able” keyframes. See the dots on the timeline? The different shades indicate user-generated links, tags and comments. At any point in the video, you can hit that “green plus” on the playhead to add to the video; comments can even be in video format, so you could augment my video with your video!
By adding these various forms of metadata to keyframes within videos, you can then search within videos. Do you see where I’m going with this?
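The underlying idea is just a map from timestamps to user-generated text: once tags and comments are anchored to keyframes, searching the inside of a video is searching that map. A minimal sketch of the concept (the data model and names here are my own invention, not Viddler’s actual API):

```python
# Sketch of timecoded annotation: tags/comments pinned to keyframes
# make specific moments inside a video searchable.
from collections import defaultdict

class AnnotatedVideo:
    def __init__(self, title: str):
        self.title = title
        self.notes = defaultdict(list)  # seconds -> annotations

    def annotate(self, at_seconds: int, text: str) -> None:
        """Pin a tag or comment to a point on the timeline."""
        self.notes[at_seconds].append(text)

    def search(self, term: str) -> list:
        """Return timestamps whose annotations mention the term."""
        term = term.lower()
        return sorted(t for t, texts in self.notes.items()
                      if any(term in s.lower() for s in texts))

v = AnnotatedVideo("sousveillance backpack demo")
v.annotate(42, "shoulder-mounted Archos 704 close-up")
v.annotate(95, "battery pack walkthrough")
print(v.search("battery"))  # → [95]
```

A search result is no longer “this ten-minute video mentions batteries somewhere” but “jump to 1:35” — and because viewers add the annotations, the index grows without anyone transcribing anything.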
As explained by the folks at Viddler:
Since Viddler searches inside the content of videos, users have a lot of flexibility when it comes to finding new material to watch. Search for any object, person, or place and the results will be staring you in the face with exactly what you wanted.
My geek friends will know that I’ve been so hard up for something like this.
I see this as an evolution of video into a malleable, “hyperlinkable” medium, just as blog hypertext and annotated Flickr photos work. In QuickTime we’ve been able to do this using SMIL, but the process has been quite trying. If you want to understand this more deeply, I did an interview with Prof. Adrian Miles last year at AoIR, as he explains the “hyperlinkable video” concept best.
Before reaching Viddler, I’d tried many incarnations of video search engines; mind you, ones that search within videos. We’ve also seen some services which offer automatic speech-to-text transcription, allowing you to finally search video with some accuracy, for example: Podzinger, Blinkx, MetaVid (which searches closed captions), and HyperTranscribe.
As I’ve illustrated, there have been two approaches to making video more useful: man and machine. Instead of relying on machines to tell us what’s what in a video (the earlier incarnations), crowdsourcing humans works just as well (i.e. Viddler). This man-machine comparison is similar to how Last.fm vs. Pandora works. Like how Flickr treats images as public canvases, Viddler lets you work within the video as if it were a blog. At particular frames of a video, you can tag, drop a text comment or even a video response. That includes your users, the public. I’ve tried this out, and users who viewed my videos have added their input on them, thereby adding to the searchability of significant parts of a clip.
Now, why am I recording my life?
I’m heavily influenced by the movie “Strange Days” for my persistent video approach, less so about broadcasting or lifecasting (which would be happenstance).
I’m more into the idea of archiving and transferring experiences. Video happens to be the richest medium we have right now, until we have full sensory recording gear as seen in the movie. I also want it to be a way for me to remember everything in my life precisely as it happened. It’s a cybernetic way of “never forgetting”, not unlike what Microsoft’s Gordon Bell is doing.
On the richness of experiences…
Though videos can “suggest” more sensory transfer, this is not to say that books are not experientially rich per se, since the reader’s “social imagination” often depicts an experience more thoroughly (depending on the reader’s knowledge and vocabulary). For instance, when you watch a horror movie, notice that the less you “see”, the more scared you become. In essence, the sensation of fear is really constructed in the mind. Though this experience-richness vs. media-richness collision counterpoints my experience-sharing video idea, it shouldn’t deter me from trying to bridge the gap in experience transfer.
There’s plenty more to say about this… there could be an incredible number of applications for a technology concept of this sort, and I welcome suggestions (yes, voyeurism is one). In the meantime, I invite you to experience the future by tagging and commenting on key moments in the sample video above. I’m planning to launch another blog just for my life recordings, but we’ll see how this goes first.
Quote of the Day: “What is Real? How Do you Define Real? If you are talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.” (Anyone know the source?)