
Why Don’t We Have Better Robots Yet? – Ken Goldberg (Transcript) 

Here is the full transcript of roboticist Ken Goldberg’s talk titled “Why Don’t We Have Better Robots Yet?” at a TED conference.


TRANSCRIPT:

The Promise of Home Robots

I have a feeling most people in this room would like to have a robot at home. It’d be nice to have one do the chores and take care of things. So where are these robots? What’s taking so long? I mean, we have our tricorders, and we have satellites. We have laser beams. But where are the robots? OK, wait, we do have some robots in our home, but they’re not really doing anything that exciting, OK?

Now, I’ve been doing research on robots at UC Berkeley with my students for 30 years, and in the next 10 minutes, I’m going to try to explain the gap between fiction and reality. Now, we’ve seen images like this, right? These are real robots. They’re pretty amazing. But for those of us who work in the field, well, the reality is more like this. Ninety-nine times out of 100, that’s what happens. And in the field, there’s something that explains this, which we call Moravec’s paradox.

Moravec’s Paradox and Robot Clumsiness

And that is: what’s easy for robots, like picking up a large, heavy object, is hard for humans. But what’s easy for humans, like picking up some blocks and stacking them, well, it turns out that is very hard for robots. And this is a persistent problem. So the ability to grasp arbitrary objects is a grand challenge for my field.

Now, by the way, I was a very klutzy kid. I would drop things. Any time someone would throw me a ball, I would drop it. I was the last kid to get picked for a basketball team. I’m still pretty klutzy, actually, but I have spent my entire career studying how to make robots less clumsy.

Innovations in Robot Hardware

Now let’s start with the hardware: the hands. Now, this is a robot hand, a particular type of hand. It’s a lot like our hand, and it has a lot of motors, tendons, and cables, as you can see. So it’s unfortunately not very reliable. It’s also very heavy and very expensive. So I’m in favor of very simple hands, like this. This has just two fingers, and it’s known as a parallel-jaw gripper. So it’s very simple. It’s lightweight, reliable, and very inexpensive.

And if you’re doubting that simple hands can be effective, look at this video, where you can see two very simple grippers. These are being operated, by the way, by humans who are controlling the grippers like puppets. But very simple grippers are capable of doing very complex things. Now, actually, in industry there’s an even simpler robot gripper, and that’s the suction cup. And that makes only a single point of contact. So again, simplicity is very helpful in our field.
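To give a flavor of how software reasons about a gripper this simple, here is a minimal sketch (my own illustration, not part of the talk) of the classic antipodal-grasp test for a parallel-jaw gripper: a pinch grasp holds when the line between the two contact points lies inside the friction cone at each contact. The contact points, normals, and friction coefficient below are made-up example values.

```python
import math

def is_antipodal(p1, n1, p2, n2, mu=0.4):
    """Return True if two 2-D contacts form an antipodal grasp for a
    parallel-jaw gripper: the jaw axis (the line between the contacts)
    must lie inside both friction cones, whose half-angle is atan(mu).
    n1 and n2 are unit inward surface normals at the contacts."""
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(ux, uy)
    ux, uy = ux / length, uy / length
    cone = math.atan(mu)  # friction-cone half-angle

    def angle(ax, ay, bx, by):
        # Angle between two unit vectors, clamped for numerical safety.
        return math.acos(max(-1.0, min(1.0, ax * bx + ay * by)))

    return (angle(ux, uy, *n1) <= cone and
            angle(-ux, -uy, *n2) <= cone)

# Squeezing a box across two parallel faces holds...
print(is_antipodal((-1, 0), (1, 0), (1, 0), (-1, 0)))    # True
# ...but a skewed pair of contacts falls outside the friction cones.
print(is_antipodal((-1, 0), (1, 0), (1, 0.9), (-1, 0)))  # False
```

Real grasp planners evaluate many such candidate contact pairs over a 3-D object model, but this friction-cone test is the basic building block.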

The Complex World of Robot Software

Now let’s talk about the software. This is where it gets really, really difficult, because of a fundamental issue, which is uncertainty. There’s uncertainty in the control. There’s uncertainty in the perception. And there’s uncertainty in the physics. Now, what do I mean by the control? Well, if you look at a robot’s gripper trying to do something, there’s a lot of uncertainty in the cables and the mechanisms that causes very small errors. And these can accumulate and make it very difficult to manipulate things.
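To see how tiny actuation errors add up, here is a minimal sketch (my own, with illustrative link lengths and noise levels, not numbers from the talk) of a planar two-link arm: a fraction-of-a-degree error on each joint already moves the fingertip by millimeters.

```python
import math
import random

def fingertip(theta1, theta2, l1=0.3, l2=0.25):
    """Forward kinematics of a planar two-link arm (lengths in meters)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

random.seed(0)
tx, ty = fingertip(0.6, 0.8)  # where the gripper is commanded to go

noise = 0.009  # ~0.5 degrees of error per joint, an illustrative value
errs = []
for _ in range(1000):
    x, y = fingertip(0.6 + random.gauss(0, noise),
                     0.8 + random.gauss(0, noise))
    errs.append(math.hypot(x - tx, y - ty))

print(f"mean fingertip error: {1000 * sum(errs) / len(errs):.1f} mm")
```

Chain more joints together, as a real arm does, and the errors compound further, which is why precise open-loop manipulation is so hard.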


Now in terms of the sensors, yes, robots have very high-resolution cameras just like we do, and that allows them to take images of scenes in traffic, or in a retirement center, or in a warehouse or in an operating room. But these don’t give you the three-dimensional structure of what’s going on. So recently, there was a new development called LIDAR, and this is a new class of cameras that use light beams to build up a three-dimensional model of the environment. And these are fairly effective. They really were a breakthrough in our field, but they’re not perfect.

So if the objects have anything that’s shiny or transparent, well, then the light acts in unpredictable ways, and you end up with noise and holes in the images. So these aren’t really the silver bullet. And there’s one other form of sensor out there now, called a “tactile sensor.” These are very interesting. They use cameras to actually image the surface as the robot makes contact, but these are still in their infancy.
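To make “holes in the images” concrete, here is a small sketch (my own toy example, not from the talk) where zero-valued pixels in a depth image mark missing returns from a shiny or transparent surface, and each hole is patched with the median of its valid neighbors.

```python
import numpy as np

# A toy 5x5 depth image in meters; zeros mark "holes" where the light
# bounced unpredictably and no depth reading came back.
depth = np.array([
    [0.82, 0.81, 0.80, 0.79, 0.80],
    [0.82, 0.00, 0.00, 0.79, 0.79],
    [0.83, 0.00, 0.81, 0.80, 0.79],
    [0.83, 0.82, 0.81, 0.80, 0.00],
    [0.84, 0.83, 0.82, 0.81, 0.80],
])

filled = depth.copy()
for r, c in np.argwhere(depth == 0.0):
    # Collect valid (nonzero) neighbors in a 3x3 window around the hole.
    window = depth[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    valid = window[window > 0.0]
    if valid.size:
        filled[r, c] = np.median(valid)

print(filled)
```

Hole-filling like this is only a patch, of course; it papers over exactly the regions where the sensor knows least.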

Now, the last issue is the physics. And let me illustrate by showing you: we take a bottle on a table and we just push it, and the robot pushes it in exactly the same way each time. But you can see that the bottle ends up in a very different place each time. And why is that? Well, it’s because it depends on the microscopic surface topography underneath the bottle as it slides. For example, if you put a grain of sand under there, it would react very differently than if there weren’t a grain of sand.

And we can’t see if there’s a grain of sand, because it’s under the bottle. It turns out that we can predict the motion of an asteroid a million miles away far better than we can predict the motion of an object as it’s being grasped by a robot.
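That sensitivity is easy to caricature in code. The sketch below (a toy model I’m adding for illustration, not a physics engine) applies the exact same push to a bottle five times, while the friction under it varies with surface detail the robot can’t see; the final positions scatter instead of repeating.

```python
import random

random.seed(1)
push_impulse = 0.5  # the "same push" every time (arbitrary units)

for trial in range(5):
    # Friction depends on microscopic surface detail we cannot see,
    # like a stray grain of sand under the bottle.
    mu = random.uniform(0.2, 0.5)
    # Toy model: sliding distance shrinks as friction grows, and the
    # bottle drifts sideways by a friction-dependent random amount.
    distance = push_impulse / mu
    drift = random.gauss(0, 0.1 * distance)
    print(f"trial {trial}: bottle ends at ({distance:.2f}, {drift:.2f})")
```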

Now let me give you an example. Put yourself in the position of a robot. You’re trying to clear the table, and your sensors are noisy and imprecise. Your actuators, your cables and motors, are uncertain, so you can’t fully control your own gripper. And there’s uncertainty in the physics, so you really don’t know what’s going to happen. So it’s not surprising that robots are still very clumsy.