Peter Haas: The Real Reason to be Afraid of Artificial Intelligence (Full Transcript)

Peter Haas works in robotics at Brown University, where he is Associate Director of the Humanity Centered Robotics Initiative. His work concerns robots, human-robot interaction, and the real-world consequences of deploying AI systems.

Here is the full text of Peter’s talk titled “The Real Reason to be Afraid of Artificial Intelligence” at the TEDxDirigo conference.

 

Peter Haas – TEDx Talk TRANSCRIPT

The rise of the machines! Who here is scared of killer robots? I am!

I used to work on UAVs – unmanned aerial vehicles – and all I could think, seeing these things, was that someday somebody is going to strap a machine gun to them, and they’re going to hunt me down in swarms.

I work in robotics at Brown University and I’m scared of robots. Actually, I’m kind of terrified, but can you blame me? Ever since I was a kid, all I’ve seen are movies that portrayed the ascendance of artificial intelligence and our inevitable conflict with it – 2001: A Space Odyssey, The Terminator, The Matrix – and the stories they tell are pretty scary: rogue bands of humans running away from superintelligent machines. That scares me.

From the show of hands, it seems like it scares you as well. I know it scares Elon Musk. But, you know, we have a little bit of time before the robots rise up.

Robots like the PR2 that I have at my initiative can’t even open a door yet. So in my mind, this discussion of superintelligent robots is a little bit of a distraction from something far more insidious that is going on with AI systems across the country.

You see, right now, there are people – doctors, judges, accountants – who are getting information from an AI system and treating it as if it were information from a trusted colleague. It’s this trust that bothers me, not because of how often the AI gets it wrong. AI researchers pride themselves on the accuracy of their results.

It’s how badly it gets it wrong when it makes a mistake that has me worried. These systems do not fail gracefully.

So, let’s take a look at what this looks like. This is a dog that has been misidentified as a wolf by an AI algorithm. The researchers wanted to know: why did this particular husky get misidentified as a wolf? So they rewrote the algorithm so it would explain which parts of the picture it was paying attention to when it made its decision.

In this picture, what do you think it paid attention to? What would you pay attention to? Maybe the eyes, maybe the ears, the snout. This is what it paid attention to: mostly the snow and the background of the picture.

You see, there was bias in the data set that was fed to this algorithm. Most of the pictures of wolves were in snow, so the AI algorithm mistook the presence or absence of snow for the presence or absence of a wolf.

The scary thing about this is the researchers had no idea this was happening until they rewrote the algorithm to explain itself. And that’s the thing with AI algorithms, deep learning, machine learning. Even the developers who work on this stuff have no idea what it’s doing.
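To make the husky/wolf example a little more concrete: one open-source library that produces this kind of region-level explanation for an image classifier is LIME. The sketch below is purely illustrative – the gradient “image” and the dummy brightness-based classifier are placeholders standing in for the real model and data, which are not part of this transcript.

```python
# Hypothetical sketch of a LIME-style explanation for an image classifier.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Placeholder image: a smooth colour gradient instead of an actual photo.
xx, yy = np.meshgrid(np.linspace(0, 1, 224), np.linspace(0, 1, 224))
img = (np.stack([xx, yy, (xx + yy) / 2], axis=-1) * 255).astype(np.uint8)

def predict_fn(batch):
    # Dummy classifier: returns fake [p(dog), p(wolf)] from overall brightness,
    # standing in for a real trained model.
    brightness = batch.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - brightness, brightness], axis=1)

# LIME perturbs the image many times, watches how the predictions change,
# and fits a simple local model over image regions (superpixels).
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img, predict_fn, top_labels=2, hide_color=0, num_samples=1000
)

# Highlight the regions that pushed the classifier toward its top label.
# In the husky/wolf case, those regions turned out to be the snowy background.
canvas, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(canvas / 255.0, mask)
```

Running something like this on the husky photo is what revealed that the classifier was keying off the background rather than the animal.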

So, that might be a great example for research, but what does this mean in the real world? The COMPAS criminal sentencing algorithm is used in 13 states to assess criminal recidivism – the risk of committing a crime again after you’re released.

ProPublica found that if you’re African-American, COMPAS was 77% more likely to flag you as a potentially violent offender than if you’re Caucasian. This is a real system being used in the real world by real judges to make decisions about real people’s lives.

Why would the judges trust it if it seems to exhibit bias?

Well, the reason they use COMPAS is that it is a model of efficiency. COMPAS lets them get through caseloads much faster in a backlogged criminal justice system. Why would they question their own software? It’s been requisitioned by the state and approved by their IT department.

Why would they question it? Well, the people sentenced by COMPAS have questioned it, and their lawsuits should chill us all.

The Wisconsin Supreme Court ruled that COMPAS did not deny a defendant due process, provided it was used “properly.” In the same set of rulings, the court also held that the defendant could not inspect the source code of COMPAS. It has to be used properly, but you can’t inspect the source code?

Taken together, this is a disturbing set of rulings for anyone facing criminal sentencing. You may not care about this because you’re not facing criminal sentencing, but what if I told you that black-box AI algorithms like this are being used to decide whether or not you can get a loan for your house, whether you get a job interview, whether you get Medicaid – and are even driving cars and trucks down the highway?

Would you want the public to be able to inspect the algorithm in a self-driving truck that’s trying to decide between a shopping cart and a baby carriage, in the same way the dog/wolf algorithm was trying to decide between a dog and a wolf?

Are you potentially a metaphorical dog who’s been misidentified as a wolf by somebody’s AI algorithm? Considering the complexity of people, it’s possible. Is there anything you can do about it now? Probably not, and that’s what we need to focus on.
