“As an MIT grad student, Pranav Mistry invented SixthSense, a wearable device that enables new interactions between the real world and the world of data”. – TED.com
Pranav Mistry: The thrilling potential of SixthSense technology
We grew up interacting with the physical objects around us. There are an enormous number of them that we use every day. Unlike most of our computing devices, these objects are much more fun to use.
When you talk about objects, another thing automatically comes attached to them, and that is gestures: how we manipulate these objects, how we use them in everyday life. We use gestures not only to interact with these objects, but also to interact with each other. Like a gesture of Namaste, maybe, to respect someone, or maybe — in India I don’t need to teach a kid that this means four runs in cricket. It comes as a part of our everyday learning.
So, from the beginning, I have been very interested in how our knowledge about everyday objects and gestures — how we use these objects — can be leveraged in our interactions with the digital world. Rather than using a keyboard and mouse, why can I not use my computer in the same way that I interact in the physical world?
So, I started this exploration around eight years back, and it literally started with a mouse on my desk. Rather than using it for my computer, I actually opened it up. Most of you might be aware that, in those days, the mouse used to come with a ball inside, and there were two rollers that actually tell the computer where the ball is moving and, accordingly, where the mouse is moving.
So, I was interested in these two rollers, and I actually wanted more, so I borrowed another mouse from a friend — never returned it to him — and now I had four rollers. What I did with these rollers, interestingly, is basically take them out of the mice and put them in one line, with some strings and pulleys and some springs. What I got was basically a gesture-interface device that acts as a motion-sensing device, made for $2. So, here, whatever movement I do in the physical world is actually replicated inside the digital world, just using this small device that I made around eight years back, in 2000.
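As a rough illustration of how roller rotation could be turned into on-screen motion, here is a minimal sketch, assuming each pair of rollers reports signed tick counts per update. The strings, pulleys, and exact geometry of the original device are not described in the talk, so this mapping (and the `TICKS_PER_UNIT` resolution) is an assumption:

```python
class RollerTracker:
    """Accumulates 2D position from two pairs of rollers, mouse-style:
    one pair senses horizontal motion, the other vertical."""

    TICKS_PER_UNIT = 10  # assumption: sensor resolution, ticks per screen unit

    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def update(self, h_ticks, v_ticks):
        # Signed tick deltas from the horizontal and vertical roller pairs
        self.x += h_ticks / self.TICKS_PER_UNIT
        self.y += v_ticks / self.TICKS_PER_UNIT
        return (self.x, self.y)


# Moving the hand drags the strings, which spin the rollers; the
# accumulated position replicates that motion in the digital world.
tracker = RollerTracker()
tracker.update(10, -5)   # hand moves right and up
```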
Because I was interested in integrating these two worlds, I thought of sticky notes. I thought, “Why can I not connect the normal interface of a physical sticky note to the digital world?” A message written on a sticky note to my mom can arrive as an SMS, or maybe a meeting reminder automatically syncs with my digital calendar — a to-do list that automatically syncs with you. But you can also search in the digital world, or maybe write a query, saying, “What is Dr. Smith’s address?” and this small system actually prints it out — so it acts as a paper input-output system, just made out of paper.
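The talk doesn’t say how the sticky-note system decides between sending an SMS, syncing a to-do item, and answering a query; a hypothetical dispatcher for the recognized handwriting might look like the sketch below. The routing rules and the `route_note` helper are assumptions for illustration, not the actual system’s logic:

```python
def route_note(text):
    """Decide what to do with handwriting recognized on a sticky note.
    Returns an (action, payload) pair."""
    if text.rstrip().endswith("?"):
        # A question: search the digital world and print the answer
        return ("search", text)
    if text.lower().startswith("todo:"):
        # Sync with the digital to-do list
        return ("todo_sync", text[5:].strip())
    # Default: forward the note as an SMS
    return ("sms", text)


route_note("What is Dr. Smith's address?")  # routed to search
route_note("todo: book meeting room")       # routed to to-do sync
```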
In another exploration, I thought of making a pen that can draw in three dimensions. So, I implemented this pen that can help designers and architects not only think in three dimensions, but actually draw in them, so that it’s more intuitive to use that way.
Then I thought, “Why not make a Google Map, but in the physical world?” Rather than typing a keyword to find something, I put my objects on top of it. If I put down a boarding pass, it will show me where my flight gate is. A coffee cup will show where you can find more coffee, or where you can trash the cup.
So, these were some of the earlier explorations I did, because the goal was to connect these two worlds seamlessly. Among all these experiments, there was one thing in common: I was trying to bring a part of the physical world to the digital world. I was taking some part of the objects, or some of the intuitiveness of real life, and bringing it to the digital world, because the goal was to make our computing interfaces more intuitive.
But then I realized that we humans are not actually interested in computing. What we are interested in is information. We want to know about things. We want to know about the dynamic things going on around us.
So, around the beginning of last year, I started thinking: why can I not take this approach in reverse? Maybe, how about I take my digital world and paint the physical world with that digital information? Because pixels are, right now, confined in these rectangular devices that fit in our pockets. Why can I not remove this confinement and take the pixels to my everyday objects, my everyday life, so that I don’t need to learn a new language for interacting with them?
So, in order to realize this dream, I actually thought of putting a big projector on my head. I think that’s why this is called a head-mounted projector, isn’t it? I took it very literally: I took my bike helmet and put a little cut over there so that the projector fits nicely. So now, what I can do — I can augment the world around me with this digital information.
But later, I realized that I actually wanted to interact with those digital pixels, too. So I put a small camera over there that acts as a digital eye. Later, we moved to a much better, consumer-oriented pendant version of that, which many of you now know as the SixthSense device.
But the most interesting thing about this particular technology is that you can carry your digital world with you wherever you go. You can start using any surface, any wall around you, as an interface. The camera is actually tracking all your gestures. Whatever you’re doing with your hands, it understands that gesture.
And, actually, if you see, there are some color markers that we are using in the beginning version. You can start painting on any wall. You stop by a wall and start painting on that wall. But we are not tracking only one finger here. We are giving you the freedom of using both of your hands, so you can actually use both of your hands to zoom into or zoom out of a map just by pinching. The camera is actually capturing all the images, doing the edge recognition and also the color recognition, and so many other small algorithms are going on inside. So, technically, it’s a little bit complex, but it gives you an output which is more intuitive to use, in some sense.
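The talk only names the pieces of the vision pipeline — color markers, edge recognition, color recognition. As a minimal, hedged illustration of two of those pieces — locating a colored fingertip marker in a frame, and turning a two-finger pinch into a zoom factor — here is a pure-Python sketch. The RGB thresholds and the frame representation are assumptions for illustration, not the actual SixthSense implementation:

```python
import math

def find_marker(frame, lo, hi):
    """Return the centroid (x, y) of pixels whose RGB falls in [lo, hi].
    `frame` is a list of rows, each row a list of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # marker not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def pinch_zoom_factor(prev_pair, curr_pair):
    """Multiplicative zoom implied by the change in distance between
    two tracked fingertips: fingers moving apart gives a factor > 1."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    d0 = dist(*prev_pair)
    return dist(*curr_pair) / d0 if d0 else 1.0


# Tiny synthetic 3x3 frame with one "red" marker pixel in the center;
# the (150, 0, 0)-(255, 100, 100) range is a hypothetical red threshold.
black, red = (0, 0, 0), (255, 0, 0)
frame = [[black, black, black],
         [black, red,   black],
         [black, black, black]]
center = find_marker(frame, (150, 0, 0), (255, 100, 100))
```

A real implementation would run this per video frame, once per marker color, and feed the centroids into gesture recognition; the centroid-of-thresholded-pixels trick is the classic cheap way to track a saturated color blob.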