Devices That Adapt and Build Smart Environments: Sean Follmer at TEDxCERN (Transcript)

We’ve evolved with tools and tools have evolved with us. Our ancestors created these hand axes 1.5 million years ago, shaping them to not only fit the task at hand, but also their hand. However, over the years, tools have become more and more specialized. These sculpting tools have evolved through their use, and each one has a different form which matches its function, and they leverage the dexterity of our hands in order to manipulate things with much more precision.

But as tools have become more and more complex, we need more complex controls to control them. And so designers have become very adept at creating interfaces that allow you to manipulate parameters while you’re attending to other things, such as taking a photograph and changing the focus or the aperture.

But the computer has fundamentally changed the way we think about tools, because computation is dynamic. So it can do a million different things and run a million different applications. However, computers have the same static physical form for all of these different applications, and the same static interface elements as well. And I believe that this is fundamentally a problem, because it doesn’t really allow us to interact with our hands and capture the rich dexterity that we have in our bodies. My belief, then, is that we need new types of interfaces that can capture these rich abilities that we have, and that can physically adapt to us and allow us to interact in new ways.

And so that’s what I’ve been doing at the MIT Media Lab and now at Stanford. So with my colleagues, Daniel Leithinger and Hiroshi Ishii, we created inFORM, where the interface can actually come off the screen and you can physically manipulate it. Or you can visualize 3D information physically and touch it and feel it to understand it in new ways. Or you can interact through gestures and direct deformations to sculpt digital clay. Or interface elements can arise out of the surface and change on demand.

And the idea is that for each individual application, the physical form can be matched to the application. And I believe this represents a new way that we can interact with information, by making it physical. So the question is, how can we use this? Traditionally, urban planners and architects build physical models of cities and buildings to better understand them. So with Tony Tang at the Media Lab, we created an interface built on inFORM to allow urban planners to design and view entire cities. And now you can walk around it, but it’s dynamic, it’s physical, and you can also interact directly.


Or you can look at different views, such as population or traffic information, but it’s made physical. We also believe that these dynamic shape displays can really change the ways that we remotely collaborate with people. So when we’re working together in person, I’m not only looking at your face, but I’m also gesturing and manipulating objects, and that’s really hard to do when you’re using tools like Skype. And so using inFORM, you can literally reach out from the screen and manipulate things at a distance. So we used the pins of the display to represent people’s hands, allowing them to actually touch and manipulate objects at a distance.

And you can also manipulate and collaborate on 3D data sets as well, so you can gesture around them as well as manipulate them. And that allows people to collaborate on these new types of 3D information in a richer way than might be possible with traditional tools. And so you can also bring in existing objects, and those will be captured on one side and transmitted to the other. Or you can have an object that’s linked between two places, so as I move a ball on one side, the ball moves on the other as well. And so we do this by capturing the remote user using a depth-sensing camera like a Microsoft Kinect.
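The mapping from camera to pins isn’t spelled out in the talk, but as a rough sketch of the idea, a depth image could be downsampled to one height value per pin, with closer objects (like a remote user’s hand) raising the pins higher. In the Python sketch below, the grid size, depth range, pin travel, and function name are all illustrative assumptions, not the authors’ actual pipeline.

    # A minimal sketch (not the authors' code) of mapping a depth frame from a
    # sensor like the Kinect onto a grid of pin heights.
    import numpy as np

    GRID = 30                    # 900 pins, here assumed to be a 30 x 30 array
    PIN_TRAVEL_MM = 100.0        # assumed maximum pin extension, for illustration

    def depth_to_pin_heights(depth_mm, near_mm=500.0, far_mm=1200.0):
        """Downsample a depth image (in mm) to a GRID x GRID array of pin heights (in mm)."""
        h, w = depth_mm.shape
        # Crop so the image divides evenly, then average depth within each grid cell.
        cells = depth_mm[:h - h % GRID, :w - w % GRID]
        cells = cells.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
        # Closer surfaces raise the pins higher; clamp to the valid range.
        norm = np.clip((far_mm - cells) / (far_mm - near_mm), 0.0, 1.0)
        return norm * PIN_TRAVEL_MM

    frame = np.random.uniform(500, 1200, size=(480, 640))  # stand-in for one Kinect frame
    heights = depth_to_pin_heights(frame)
    print(heights.shape)  # (30, 30), one height per pin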

Now, you might be wondering how this all works. Essentially, it’s 900 linear actuators connected to mechanical linkages that allow motion down here to be propagated to the pins above. So it’s not that complex compared to what’s going on at CERN, but it did take a long time for us to build it – we actually had to build it – and so we started with a single motor, a single linear actuator, and then we had to design a custom circuit board to control them. And then we had to make a lot of them. And so the problem with having 900 of something is that you have to do every step 900 times. And so that meant that we had a lot of work to do.
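The control electronics aren’t described in detail, but as a toy illustration of what driving 900 actuators toward a shape involves, here is a simple proportional-control loop over a 30 x 30 height map. The gain, step count, and dimensions are assumptions for the sketch, not the actual inFORM firmware.

    # A toy proportional-control loop (purely illustrative, not the inFORM firmware)
    # that drives a 30 x 30 array of linear actuators toward a target height map.
    import numpy as np

    GRID = 30
    KP = 0.5                             # assumed proportional gain

    def step(positions, targets):
        """One control step: move every pin a fraction of the way toward its target."""
        return positions + KP * (targets - positions)

    positions = np.zeros((GRID, GRID))   # current pin heights in mm
    targets = np.zeros((GRID, GRID))     # desired pin heights in mm
    targets[12:18, 12:18] = 40.0         # raise a square "button" in the middle

    for _ in range(200):                 # run the loop until the pins settle
        positions = step(positions, targets)

    print(round(float(positions[15, 15]), 1))   # ~40.0 mm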


So we sort of set up a mini-sweatshop in the Media Lab and brought undergrads in and convinced them to do “research” — and had late nights watching movies, eating pizza, and screwing in thousands of screws. You know — research. But anyway, I think that we were really excited by the things that inFORM allowed us to do. Increasingly, we’re using mobile devices and we interact on the go, but mobile devices, just like computers, are used for so many different applications. So you use them to talk on the phone, to surf the web, to play games, to take pictures, or even a million different things.

But again, they have the same static physical form for each of these applications. And so we wanted to know how can we take some of the same interactions that we developed for inFORM and bring them to mobile devices. So at Stanford, we created this haptic edge display, which is a mobile device with an array of linear actuators that can change shape, so you can feel in your hand where you are as you’re reading a book. Or you can feel in your pocket new types of tactile sensations that are richer than the vibration. Or buttons can emerge from the side that allow you to interact where you want them to be.
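As a hedged sketch of that reading-progress idea, one way to map how far you are through a book onto a row of edge pins might look like the snippet below; the pin count and function name are made up for illustration.

    # A hypothetical sketch of one haptic edge display idea: showing reading
    # progress as raised pins along the side of the phone.
    NUM_PINS = 24                # assumed number of actuators along the edge

    def progress_to_pins(fraction_read, num_pins=NUM_PINS):
        """Return per-pin extensions (0.0 to 1.0) marking how far through a book you are."""
        raised = int(round(fraction_read * num_pins))
        return [1.0] * raised + [0.0] * (num_pins - raised)

    print(progress_to_pins(0.25))   # the first quarter of the pins extend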
