The Danger of AI is Weirder Than You Think: Janelle Shane (Transcript)

[Sindis Poop, Turdly, Suffer, Gray Pubic]

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn’t tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn’t know about anything else.
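
The talk doesn't show any code, and Shane's actual experiments used character-level neural networks, but even a much simpler character-level model makes the same point: all it ever learns is which letters tend to follow which, so it can produce plausible-looking names without any idea what the words mean. Below is a minimal sketch of that idea, using a small made-up list standing in for the real paint-color data.

```python
import random
from collections import defaultdict

# A toy character-level model: it learns only which letters tend to follow
# which, from a list of example names. It knows nothing about what words
# mean, so it can happily produce unfortunate letter combinations.
# (Illustrative sketch only; the real experiments used character-level
# neural networks, but the "letters in, letters out" idea is the same.)

def train(names, order=3):
    counts = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            counts[context].append(padded[i + order])
    return counts

def generate(counts, order=3):
    context, out = "^" * order, []
    while True:
        nxt = random.choice(counts[context])
        if nxt == "$":
            return "".join(out).title()
        out.append(nxt)
        context = context[1:] + nxt

# Hypothetical training data standing in for the real paint-color list.
paint_colors = ["Stormy Gray", "Desert Sand", "Misty Rose", "Sea Breeze",
                "Dusty Blue", "Sandy Brown", "Rosy Dawn", "Blue Mist"]
model = train(paint_colors)
print([generate(model) for _ in range(5)])
```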

So it is through the data that we often accidentally tell AI to do the wrong thing. This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures.

But then when they asked it what part of the picture it was actually using to identify the fish, here’s what it highlighted. Yes, those are human fingers.

Why would it be looking for human fingers if it’s trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

And it didn’t know that the fingers aren’t part of the fish.

So you see why it is so hard to design an AI that actually can understand what it’s looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused. I want to talk about an example from 2016.

There was a fatal accident when somebody was using Tesla’s autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets.

And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks in highway driving, where you would expect to see trucks from behind. Seeing a truck from the side is not something that's supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign, and therefore safe to drive underneath.

Here’s an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women.

What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women’s colleges or who had the word “women” somewhere in their résumé, as in, “women’s soccer team” or “Society of Women Engineers.”

The AI didn’t know that it wasn’t supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing.
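
Amazon never published how its system worked, so the following is only a deliberately tiny, invented sketch of the mechanism being described: score a résumé by how much its words resemble the words in past hires' résumés. If the past decisions happened to favor men, a word like “women's” ends up penalized, even though nobody ever told the system to look at gender.

```python
from collections import Counter

# Toy illustration of how a résumé screener can pick up a proxy for gender.
# The "training data" below is made up: past decisions that happened to favor
# résumés without the word "women's". The model just scores words by how much
# more often they appear in hired vs. rejected résumés; it has no idea that
# the word refers to people it shouldn't be penalizing.

past_resumes = [
    ("chess club captain, software intern", 1),            # 1 = was hired
    ("robotics team, software intern", 1),
    ("women's soccer team captain, software intern", 0),   # 0 = was rejected
    ("society of women engineers, software intern", 0),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in past_resumes:
    words = text.replace(",", "").split()
    (hired_words if hired else rejected_words).update(words)

def score(resume_text):
    # Higher score means "looks more like past hires". Words seen mostly in
    # rejected résumés (like "women's") drag the score down.
    return sum(hired_words[w] - rejected_words[w]
               for w in resume_text.replace(",", "").split())

print(score("software intern, robotics team"))        # scored up
print(score("software intern, women's soccer team"))  # scored down
```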

And this happens all the time with AI. AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they’re optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don’t have any concept of what this content actually is, and they don’t have any concept of what the consequences might be of recommending this content.
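
Real recommendation systems are vastly more complicated, but the objective described here can be caricatured in a few lines: rank items purely by past click-through rate. The item names below are invented, and to the code they are just labels; "more clicks" carries no notion of what the content is or what recommending it does.

```python
# Toy sketch of a click-optimizing recommender (not any real platform's
# system): it ranks items purely by past click-through rate. The strings
# are just labels to it; it has no concept of what the content says or
# what recommending it might cause.

watch_history = {
    # item: (times shown, times clicked), made-up numbers
    "cooking tutorial":        (1000, 40),
    "local news report":       (1000, 35),
    "conspiracy theory video": (1000, 90),  # outrage gets clicks
}

def recommend(history):
    # Pick whatever got clicked most often per impression.
    return max(history, key=lambda item: history[item][1] / history[item][0])

print(recommend(watch_history))  # -> "conspiracy theory video"
```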

So, when we’re working with AI, it’s up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it’s not, and to understand that, with its tiny little worm brain, AI doesn’t really understand what we’re trying to ask it to do.

So in other words, we have to be prepared to work with AI that’s not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that’s the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

 
