How to never forget: The story behind Kevin’s wearable cameras

UPDATE: Guest lecturing for a media studies class, I respond to questions relating to blogging and lifecasting (video and slides).

Last month, I gave a few of my close buddies an inside look at my sick brain motivation for using the wearable video cameras. You’ve seen me with it before, heard my “social cyborg” explanations, but it’s about time you got a demo, and the real story behind it.

If you just want me to blow your mind, watch the frame-by-frame taggable/comment-able video above. If you can’t carry on living without understanding why or how this whole thing works, then read on…

My first public use of helmetcam
Shoulder-mounted all-seeing camera

As the Rambling Librarian would recall, I didn’t look too hawt when I first donned the wearable cam at the WebSG meetup #2. Since then I’ve tried various slingbag/backpack housings, and over time, I think I’ve improved my karma with it. On the left is ver. 1 of my wearable cam setup (head mounted), then ver. 3 at Nexus 2007 (heavy shoulder bag mount). Ver. 2 was mounted on a medium-sized Incase slingbag, which was quite ideal for mobility, but it was ver. 4 that took the cake.

LifeBlog Unit ver. 4

Introducing the sousveillance backpack…
This consisted of mountings on my slim Ben Sherman backpack, which allowed for modular add-ons, such as my Archos 704 wifi DVR, and a wide-angle Logitech Notebook Pro webcam connected to the Sony UX UMPC for lifecasting (live video feed). With two cameras, one mounted on each shoulder, more power was needed for sustained recordings, so a Tekkeon “particle accelerator” portable battery pack was added (the manufacturer is sending me more adapters!). My most recent additions include Elecom’s squid-like USB hub, as well as PCI’s High-Gain USB Wifi Antenna for more consistent network bandwidth. Friends have suggested a kinetic-powered system since I’d be moving around a lot, but I might realistically add a solar panel instead.

New additions to the sousveillance backpack - 5

On Active Camouflage…
Now in its fourth iteration, the wearable camera sits flush with my body profile, making it less threatening to my subjects. It’s now mounted discreetly on my shoulder, and only five people noticed it over a month of use. I’ve used it everywhere: on trains and buses, walking the streets, and in conversations with strangers. If you study how active camouflage works, one of its keystones is not breaking your shape’s profile. Instead of the aggressive shooting posture most cameramen adopt, I’m afforded the ability to shoot passively, maintaining natural body movement while recording.

New additions to the sousveillance backpack - 2
New additions to the sousveillance backpack - 1
The left DVR camera is mounted more flush than the right lifecasting camera. All cameras to be fully concealed soon.

Now I didn’t want to go public with this earlier because the online service wasn’t ready for prime time. What you’ve seen so far is the hardware… there’s the software part which is even more important, since I’d end up with a ton of video footage that’s unsearchable, thus unusable.

This is a revolutionary video service I’ve been using for a while now, but it had the downside of not streaming properly in Singapore (or anywhere far from the States). Initial tests with my media socialist buddies revealed a performance index of 4 seconds of video, followed by a minute of buffering. Rinse and repeat as Lucian illustrated.

More recently, this has changed. I (and a few others) have been begging Viddler’s developers to “fix” this by offering the typical Youtube-style progressive video buffering, instead of the protected streaming buffering Viddler was promoting (which prevents downloading of the video). Since they’ve recently added the progressive option, the above video becomes so much more viewable and usable. If your bandwidth isn’t up to par, you can simply pause, grab coffee and come back to watch the silky smooth video.

There’s a killer feature about Viddler, though. If you haven’t noticed, Viddler adds something no other video sharing service has: “hyperlinkable + taggable + comment-able” keyframes. See the dots on the timeline? The different shades indicate user-generated links, tags and comments. At any point in the video, you can hit that “green plus” on the playhead to add to the video. Comments can even be in video format, so you could augment my video with your video!

By adding these various forms of metadata to keyframes within videos, you can then search within videos. Do you see what I’m getting at?
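Viddler’s internals aren’t public, so as a rough sketch of the idea: each tag or comment is pinned to a timestamp, and a text search over those annotations hands back the exact moments to jump to. Everything below (class names, sample annotations, authors) is made up for illustration, not Viddler’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A user-contributed tag or comment pinned to a point in the video."""
    time_s: float   # keyframe timestamp, in seconds
    kind: str       # "tag", "comment", or "video_comment"
    author: str
    text: str

@dataclass
class Video:
    title: str
    annotations: list = field(default_factory=list)

    def add(self, time_s, kind, author, text):
        self.annotations.append(Annotation(time_s, kind, author, text))

    def search(self, query):
        """Return annotations matching the query, in playback order."""
        q = query.lower()
        return [a for a in sorted(self.annotations, key=lambda a: a.time_s)
                if q in a.text.lower()]

# Hypothetical usage: jump straight to the moment someone tagged "backpack"
v = Video("wearable cam demo")
v.add(12.5, "tag", "viewer1", "backpack reveal")
v.add(95.0, "comment", "viewer2", "nice shoulder mount")
hits = v.search("backpack")
```

The point of the sketch is that once annotations carry timestamps, searching *within* a video reduces to searching plain text, and each hit doubles as a seek target on the timeline.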

As explained by the folks at Viddler:
Since Viddler searches inside the content of videos, users have a lot of flexibility when it comes to finding new material to watch. Search for any object, person, or place and the results will be staring you in the face with exactly what you wanted.

My geek friends will know that I’ve been so hard up for something like this.

I see this as an evolution of video into a malleable, “hyperlinkable” medium, just as blog hypertext and annotated Flickr photos work. In QuickTime we’ve been able to do this using SMIL, but the process has been quite trying. If you want to understand this more deeply, I did an interview with Prof. Adrian Miles last year at AoIR, where he explains the “hyperlinkable video” concept best.

Before finding Viddler, I tried many incarnations of video search engines (mind you, ones that search within videos). We’ve also seen some services which offer automatic speech-to-text transcription, finally allowing you to search video with some accuracy… for example: Podzinger, Blinkx, MetaVid (which searches closed captions), and HyperTranscribe.

As I’ve illustrated, there have been two approaches to making video more useful: man and machine. Instead of relying on machines to tell us what’s what in a video (the earlier incarnations), crowdsourcing humans works just as well (i.e. Viddler). This man-machine comparison is similar to how vs. Pandora works. Like how Flickr treats images as public canvases, Viddler lets you work within the video as if it were a blog. At particular frames of a video, you can tag, drop a text comment or even a video response. That includes you, the public. I’ve tried this out, and users who viewed my videos have added their input on them, adding to the searchability of significant parts of a clip.

Now, why am I recording my life?
I’m heavily influenced by the movie “Strange Days” in my persistent-video approach, and less so by broadcasting or lifecasting (which would be happenstance).

Strange Days movie poster

I’m more into the idea of archiving and transferring experiences. Video happens to be the richest medium we have right now, until we have full sensory recording gear as seen in the movie. I also want it to be a way for me to remember everything in my life precisely as it happened. It’s a cybernetic way of “never forgetting”, not unlike what Microsoft’s Gordon Bell is doing.

"Never Forget" article in Fast Company - 2
"Never Forget" article in Fast Company - 1

On the richness of experiences…
Though videos can “suggest” more sensory transfer, this is not to say that books are not experientially rich per se, since the reader’s “social imagination” often depicts an experience more thoroughly (depending on the reader’s knowledge and vocabulary). For instance, when you watch a horror movie, notice that the less you “see”, the more scared you become. In essence, the sensation of fear is really constructed in the mind. Though this experience-richness vs. media-richness tension counterpoints my experience-sharing video idea, it shouldn’t deter me from trying to bridge the gap in experience transfer.

There’s plenty more to say about this… there could be countless applications for a technology concept of this sort, and I welcome suggestions (yes, voyeurism is one). In the meantime, I invite you to experience the future by tagging and commenting on key moments in the sample video above. I’m planning to launch another blog just for my life recordings, but we’ll see how this goes first.

Quote of the Day: “What is Real? How Do you Define Real? If you are talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.” (Anyone know the source?)

33 thoughts on “How to never forget: The story behind Kevin’s wearable cameras”

  1. I know who said that (and who he was paraphrasing!). Have you been tracking some of the published work in this area? Lots of people are working on memory prosthetics.

    Viddler is *mostly* working for me, but video froze after several minutes (audio remained).

  2. Jer: Actually you mentioned something I didn’t disclose… look carefully on my left strap. There’s an iPod shuffle and speakers. That answers the big thing I have too… on adding a soundtrack to my life 🙂

    Alex: Any pointers? I only found Gordon Bell’s work so far…

  3. I’ve always wanted a soundtrack to my life albeit in realtime. But I guess the iPod shuffle works too. Now if only it could track what you were doing and play the appropriate music. That would be awesome.

  4. @jer: That might be possible… using existing music recommendation engines, sensors detecting light, wind velocity, ambient temperature and even emotional state could be used in a system to pick the appropriate music for the moment. It sounds like an entirely new project altogether, but it might be worth doing. Anything with music could be a potential business in itself.

  5. @Jacelyn: I was thinking of loaning it to a few bloggers, just to let you guys capture a day in your life. Will see how 🙂

  6. Off the top of my head, that quote, if i’m not mistaken, is what Morpheus tells Neo aka. MISTURRR Anderson in the first Matrix.

    Please tell me I’m right.

  7. Too bad I need an account to leave a comment. I have so many things to say in the video!!! OK, maybe it’s a good thing it requires a login.. : )

  8. Aloha, Kevin. Hawaii lifecaster here. Great stuff. I haven’t worked half as hard on my setup, but I’m still having lots of fun.

    On the ‘soundtrack’ question, how are you handling commercial music (i.e. from your iPod) and licensing requirements and restrictions (as what we’re doing is essentially broadcasting)? I got ribbed while driving around listening to the radio early on, so I’ve made it a point to keep a playlist of Creative Commons and other “safe” music when lifecasting.

    Fortunately, I’m also a podcaster, and also play the music of some of the indie artists I personally know and who’ve also given permission to play their stuff.

    It’s hard to control ambient audio, from a restaurant jukebox to the MLB game on a nearby pub’s TV, and stuff like watching TV or going to the movies is already pretty complicated. It’ll be interesting to see how this all plays out.

    Keep up the great work!

  9. @Ryan: Saw your lifecasting site… pretty nifty. I’m not exactly lifecasting myself, but rather capturing segments of my life. I can imagine environmental music being a problem… it’s not like you can avoid every instance of it. Would be neat if you could easily replace the sounds with something else. 🙂

  10. In the side-by-side pics of your camera setup, I see you have the Logitech quickcam on the right, but what’s the brand of the one on the left that you’ve camouflaged?

Comments are closed.