Discussion: The A.I. Dilemma – March 9, 2023 (Transcript)

In this discussion, Tristan Harris and Aza Raskin present how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation was given before the launch of GPT-4.

TRANSCRIPT:

Steve Wozniak – Apple

Hello, Steve Wozniak from Apple. I’m here to introduce Tristan Harris and Aza Raskin, the co-founders of the Center for Humane Technology. They were behind the Emmy-winning Netflix documentary, The Social Dilemma, which reached 100 million people in 190 countries in 30 languages. They’ve also advised heads of state, global policymakers, members of Congress, and national security leaders, in addition to mobilizing millions of us around these issues and some of the dangers that we face with technology these days.

So here they are.

AZA RASKIN: The reason why we started with that video is, one, it’s the first time I’ve seen AI that made me feel something; there was a threshold that we crossed. And the second is that we had a very curious experience trying to explain to reporters what was going on.

So this was January of last year. At that point there were maybe 100 people playing with this new technology; now there are 10 million people who have generated over a billion images. We were trying to explain to reporters what was about to happen, and we’d walk them through how the technology worked: you would type in some text and it would make an image that had never been seen before. They would nod along, and at the end they’d be like, cool, but what was the image database you got your images from?

THE RUBBER BAND EFFECT

And it was just clear that we’d stretched their minds like a rubber band, and then, because this was a brand new capability, a brand new paradigm, their minds would snap back. It’s not that these were dumb reporters; it’s a thing we all experience. Even in making this presentation so many times, we realized we have to expand our minds, and then we look somewhere else and it snaps back. We just wanted to name that experience, because if you’re anything like us, that’ll happen to your minds throughout this presentation, especially at the end; when you go home you’ll be like, wait, what did we just see?

TRISTAN HARRIS: And I think because artificial intelligence is such an abstract thing, and it affects so many things and doesn’t have grounding metaphors, like kinesthetic experiences in our lives, it’s so hard to wrap your head around how transformational this is. So when we call the presentation a paradigmatic response to a paradigmatic technology, what we really want to do is arm all of you with maybe a more visceral way of experiencing the exponential curves that we’re about to be heading into.

PREFACE: WHAT DOES RESPONSIBLE ROLLOUT LOOK LIKE?

AZA RASKIN: Just to name a little bit of where we’re coming from: we’re going to say a lot of things about AI that are not going to be super positive, and yet, you know, since 2017 I’ve been working on a thing called the Earth Species Project, using AI to translate animal communication, decoding non-human language. So there’s a huge part of this stuff that I really love and believe in.

A couple weeks ago I made a Spanish tutor for myself with ChatGPT in like 15 minutes. It’s not perfect, but it’s better than Duolingo for like 45 minutes. So what we’re not saying is that there aren’t incredible positives coming out of this; that’s not what we’re saying.

OPPENHEIMER MANHATTAN PROJECT ANALOGY

TRISTAN HARRIS: Yeah, what we are saying is: the ways that we’re now releasing these new large language model AIs into the public, are we doing that responsibly? And what we’re hearing from people is that we’re not doing it responsibly.

The feeling that I’ve had personally, just to share, is that it’s like it’s 1944 and you get a call from Robert Oppenheimer inside this thing called the Manhattan Project, and you have no idea what that is. And he says, “The world is about to change in a fundamental way, except the way it’s about to change, it’s not being deployed in a safe and responsible way, it’s being deployed in a very dangerous way. And will you help from the outside?”

And when I say Oppenheimer, I mean it more as a metaphor for the large number of people who are concerned about this, and some of them might be in this room, people who are in the industry. We wanted to figure out: what does responsibility look like? Now, why would we say that?

SURVEY RESULTS ON THE PROBABILITY OF HUMAN EXTINCTION

Because this is a stat that took me by surprise. 50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI. Say that one more time.

Half of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI. That would be like if you’re about to get on a plane and 50% of the engineers who built the plane say, well, if you get on this plane, there’s a 10% chance that everybody goes down. Would you get on that plane?

3 RULES OF TECHNOLOGY

Well, we are rapidly onboarding people onto this plane because of some of the dynamics that we’re going to talk about. There are sort of three rules of technology that we want to quickly go through with you, which relate to what we’re going to talk about.

NEW TECH, A NEW CLASS OF RESPONSIBILITIES

AZA RASKIN: This just names the structure of the problem. So first, when you invent a new technology, you uncover a new class of responsibility. And it’s not always obvious what those responsibilities are. To give two examples: we didn’t need the right to be forgotten to be written into law until computers could remember us forever. It’s not at all obvious that cheap storage would mean we’d have to invent new law.

Or we didn’t need the right to privacy to be written into law until mass-produced cameras came onto the market. Brandeis had to, essentially from scratch, invent the right to privacy. That’s not in the original Constitution.

And of course to fast forward just a little bit, the attention economy, we are still in the process of figuring out how to write into law that which the attention economy and the engagement economy takes from us. So when you invent a new technology, you uncover a new class of responsibility.

ALSO READ:   How to be "Team Human" in the Digital Future: Douglas Rushkoff (Transcript)

IF A TECH CONFERS POWER, IT STARTS A RACE

And then two, if that technology confers power, it will start a race. And if you do not coordinate, the race will end in tragedy. There’s no one single player that can stop the race that ends in tragedy. And that’s really what The Social Dilemma was about.

TRISTAN HARRIS: And I would say that the social dilemma, social media, was actually humanity’s first contact moment with AI. I’m curious if that makes sense to you, because when you open up TikTok and you scroll your finger, you just activated the supercomputer, the AI pointed at your brain, to calculate and predict with increasing accuracy the perfect thing that will keep you scrolling.
