Andrew Ng: Artificial Intelligence is the New Electricity at Stanford GSB (Transcript)

So, some of the product managers I was working with were struggling to understand what can AI do and what can’t AI do. So I’m curious: How many of you know what a product manager is or what a product manager does? Okay good, like half of you. Is that right? Okay, cool. I asked the same question at an academic AI conference and I think only about one fifth of the hands went up, which is interesting.

Just to summarize the workflow: at a lot of tech companies, it's the product manager's responsibility to work with users, look at data, and figure out what product users desire, to design the features, and sometimes also the marketing and the pricing as well. But let me just say they design the features and figure out what the product is supposed to do. For example, should you have a like button or not? Should you try to have a speech recognition feature or not? So it's really to design the product. Then you give the product spec to engineering, which is responsible for building it. That's a common division of labor in technology companies between product managers and engineers.

So when I was working with the product managers, they were trying to understand what AI can do. There's a rule of thumb I gave many product managers, which is that anything a typical human can do with, at most, one second of thought, we can probably now, or soon, automate with AI. This is an imperfect rule; there are false positives and false negatives with this heuristic, but we've found it to be quite helpful.

So today at Baidu, we actually have some product managers running around looking for tasks that a human could do in less than a second of thought and thinking about how to automate them.

I have to say, before we came up with this rule, they were given a different rule by someone else. Before I gave them this heuristic, someone else had told the product managers to assume AI can do anything. And that actually turned out to be useful; some progress was made with that heuristic, but I think this one is a bit better.


A lot of these things on the left you could do with less than a second of thought. So one of the patterns we see is that there are a lot of things AI can do, but AI progress tends to be fastest when you're trying to do something that a human can do. For example, build a self-driving car: humans can drive pretty well, so AI is actually making pretty decent progress on that. Or diagnose medical images: if a human radiologist can read an image, the odds of AI being able to do that in the next several years are actually pretty good.

There are some examples of tasks that humans cannot do. For example, very few humans, possibly no human, can predict how the stock market will change, and so it's much harder to get an AI to do that as well. There are a few reasons for this. The first is that if a human can do it, you're at least guaranteed it's feasible. If a human can't do it, like predicting the stock market, maybe it's just impossible, I don't know.

A second reason is that if a human can do it, you can usually get data out of humans. We have doctors who are pretty good at reading radiological images, so if A is an image and B is a diagnosis, you can get these doctors to give you a lot of data, a lot of examples of both A and B. For things humans can do, you can usually pay or hire people to provide a lot of data most of the time. (A small sketch of this A-to-B setup appears after these reasons.)

And then finally, if a human can do it, you can use human insight to drive a lot of progress. So if an AI makes a mistake diagnosing a certain radiology image, say an x-ray, and a human can diagnose that type of disease, you can usually talk to that human, get some insight into why they think this patient has lung cancer or whatever, and try to code that insight into the AI. So one of the patterns you see across the AI industry is that progress tends to be faster when we try to automate tasks that humans can do.
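To make the A-to-B idea above concrete, here is a minimal sketch of the supervised-learning workflow it describes, where A is an input such as an image and B is a human-provided label such as a diagnosis. The synthetic data, the 64x64 image size, and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not anything specified in the talk.

```python
# A minimal sketch of learning an A -> B mapping from human-labeled examples.
# Assumptions: synthetic stand-in data, 64x64 "images", binary diagnoses,
# and scikit-learn's LogisticRegression as the learner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 64 * 64))   # A: each row is a flattened 64x64 scan
B = rng.integers(0, 2, size=1000)      # B: diagnosis labels a doctor might provide (0 or 1)

# Hold out some examples to estimate how well the learned mapping generalizes.
A_train, A_test, B_train, B_test = train_test_split(A, B, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(A_train, B_train)            # learn the mapping from A to B
print("held-out accuracy:", model.score(A_test, B_test))
```

With random labels the accuracy hovers around chance; the point is only the shape of the workflow: collect (A, B) pairs from people who can already do the task, then fit a model to imitate them.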


And there are definitely many exceptions, but I see so many dozens of AI projects that I'm trying to summarize the trends I see. They're not 100% true, maybe 80% or 90% true. So for a lot of projects, if the horizontal axis is time and this line is human-level performance, in terms of how accurately you can diagnose x-ray scans or how accurately you can classify spam email or whatever, you find that over time the AI tends to make rapid progress until it gets up to human-level performance. And once you surpass it, very often progress slows down, for the reasons I just gave.

And so this is great, because this gives AI a lot of space to automate a lot of things. The downside is the jobs implication: if AI is especially good at doing whatever humans can do, then I think AI software will be in direct competition with a lot of people for a lot of jobs, probably already a little bit now, but even more so in the future. And I'll say a little about that later as well.
