
When AI Can Fake Reality, Who Can You Trust? – Sam Gregory (Transcript)

Here is the full transcript of Sam Gregory’s talk titled “When AI Can Fake Reality, Who Can You Trust?”, delivered at the TED conference.

In this TED talk, technologist and human rights advocate Sam Gregory discusses the increasing difficulty in distinguishing between real and AI-generated content, highlighting the advancements in generative AI and deepfakes. He underscores the growing threat these technologies pose, particularly in spreading falsified sexual images and undermining trust in information.

Gregory, who leads the human-rights group WITNESS, details their efforts to combat deepfakes through the “Prepare, Don’t Panic” initiative and a rapid-response task force of media-forensics experts. He shares insights from cases they’ve encountered, emphasizing the challenges experts face in definitively identifying deepfakes. Gregory stresses the need for comprehensive solutions, including better deepfake detection tools, media literacy, and content provenance through techniques like watermarking and cryptographically signed metadata.
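To make the content-provenance idea concrete: cryptographically signed metadata binds a statement about a piece of media to its exact bytes, so any later alteration becomes detectable. The following is a minimal sketch, assuming Python’s cryptography package and Ed25519 keys; real provenance standards such as C2PA define much richer formats than this toy bundle.

```python
# Minimal sketch of cryptographically signed media metadata:
# hash the media bytes, sign the hash plus metadata, verify later.
# A toy illustration only; real standards (e.g. C2PA) are far richer.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice, held by the capture device or app
verify_key = signing_key.public_key()

def sign_media(media_bytes: bytes, metadata: dict) -> tuple[bytes, bytes]:
    """Bind the metadata to the exact media bytes, then sign the bundle."""
    record = dict(metadata, sha256=hashlib.sha256(media_bytes).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    return payload, signing_key.sign(payload)

def verify_media(media_bytes: bytes, payload: bytes, signature: bytes) -> bool:
    """Check the signature, then check the hash still matches the bytes."""
    verify_key.verify(signature, payload)  # raises InvalidSignature if payload was altered
    record = json.loads(payload)
    return record["sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"...raw image or video bytes..."  # placeholder content
payload, sig = sign_media(media, {"captured": "2023-11-01T12:00:00Z"})
print(verify_media(media, payload, sig))  # True; any edit to the media breaks it
```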

He advocates for a responsible AI pipeline involving government oversight, transparency, and accountability. Gregory concludes by emphasizing the importance of taking proactive steps to prevent a future where distinguishing between real and fake becomes increasingly challenging.


TRANSCRIPT:

The Challenge of Distinguishing Real from Fake in the AI Era

It’s getting harder, isn’t it, to spot real from fake, AI-generated from human-generated. With generative AI, along with other advances in deep fakery, it doesn’t take many seconds of your voice, many images of your face, to fake you, and the realism keeps increasing.

I first started working on deepfakes in 2017, when the threat to our trust in information was overhyped, and the big harm, in reality, was falsified sexual images. Now that problem keeps growing, harming women and girls worldwide.

But with advances in generative AI, we’re now approaching a world where it’s broadly easier to make fake reality, and also easier to dismiss reality as possibly faked. Now, deceptive and malicious audiovisual AI is not the root of our societal problems, but it’s likely to contribute to them. Audio clones are proliferating in a range of electoral contexts. “Is it, isn’t it” claims cloud human-rights evidence from war zones, sexual deepfakes target women in public and in private, and synthetic avatars impersonate news anchors.

The Role of WITNESS in Combatting Deepfakes

I lead WITNESS. We’re a human-rights group that helps people use video and technology to protect and defend their rights. And for the last five years, we’ve coordinated a global effort, “Prepare, Don’t Panic,” around these new ways to manipulate and synthesize reality, and how to fortify the truth of critical frontline journalists and human-rights defenders.

Now, one element in that is a deepfakes rapid-response task force, made up of media-forensics experts and companies who donate their time and skills to debunk deepfakes and claims of deepfakes. The task force recently received three audio clips, from Sudan, West Africa, and India. People were claiming that the clips were deepfaked, not real.

In the Sudan case, experts used a machine-learning algorithm trained on over a million examples of synthetic speech to prove, almost without a shadow of a doubt, that it was authentic. In the West Africa case, they couldn’t reach a definitive conclusion because of the challenges of analyzing audio from Twitter, and with background noise.
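For a sense of how such a detector works mechanically, it is at heart a binary classifier trained on labeled authentic and synthetic speech. Here is a deliberately tiny sketch in Python, assuming librosa, scikit-learn, and placeholder audio files; the task force’s actual tooling is not public, and production detectors rely on far larger models and corpora.

```python
# Tiny sketch of a synthetic-speech detector: a binary classifier over
# spectral features. Real forensic detectors train much larger models
# on corpora of (here) over a million synthetic-speech examples.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize a clip as the mean and spread of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus: 1 = authentic speech, 0 = synthetic speech.
real_paths = ["real_000.wav", "real_001.wav"]  # known-authentic recordings
fake_paths = ["tts_000.wav", "tts_001.wav"]    # known-synthetic recordings

X = np.array([clip_features(p) for p in real_paths + fake_paths])
y = np.array([1] * len(real_paths) + [0] * len(fake_paths))

detector = LogisticRegression(max_iter=1000).fit(X, y)

# Score a questioned clip: estimated probability that it is authentic.
questioned = clip_features("questioned_clip.wav").reshape(1, -1)
print("P(authentic):", detector.predict_proba(questioned)[0, 1])
```

Background noise and lossy re-encoding, as in the West Africa clip pulled from Twitter, degrade exactly these features, which is one reason a definitive call is not always possible.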

Challenges in Deepfake Detection

The third clip was leaked audio of a politician from India. Nilesh Christopher of “Rest of World” brought the case to the task force. The experts used almost an hour of samples to develop a personalized model of the politician’s authentic voice. Despite his loud and fast claims that it was all falsified with AI, experts concluded that it was at least partially real, not AI.
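The “personalized model” approach can be pictured as speaker verification: enroll a reference voiceprint from known-authentic samples, then score questioned audio against it. Below is a toy sketch under the same assumptions (librosa, placeholder files); forensic systems use learned speaker-embedding networks rather than the crude averaged MFCCs shown here.

```python
# Toy illustration of a personalized voice model: enroll a speaker from
# authentic samples, then score a questioned clip by similarity.
# Forensic pipelines use trained speaker embeddings; averaged MFCCs
# here only illustrate the enroll-and-compare pattern.
import numpy as np
import librosa

def voiceprint(path: str, sr: int = 16000) -> np.ndarray:
    """Crude fixed-length voice summary: time-averaged MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder enrollment set: clips cut from roughly an hour of
# known-authentic speech, as in the India case described above.
enrollment = ["authentic_00.wav", "authentic_01.wav", "authentic_02.wav"]
reference = np.mean([voiceprint(p) for p in enrollment], axis=0)

score = cosine(voiceprint("questioned_clip.wav"), reference)
print(f"Similarity to enrolled voice: {score:.3f}")  # higher = more alike
```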


As you can see, even experts cannot rapidly and conclusively separate true from false, and the ease of calling “that’s deepfaked” on something real is increasing. The future is full of profound challenges, both in protecting the real and detecting the fake.

We’re already seeing the warning signs of this challenge of discerning fact from fiction. Audio and video deepfakes have targeted politicians, major political leaders in the EU, Turkey, and Mexico, and US mayoral candidates. Political ads are incorporating footage of events that never happened, and people are sharing AI-generated imagery from crisis zones, claiming it to be real.

Now, again, this problem is not entirely new. The human-rights defenders and journalists I work with are used to having their stories dismissed, and they’re used to widespread, deceptive shallow fakes: videos and images taken from one context, time, or place and claimed as if they’re from another, used to sow confusion and spread disinformation.

The Impact on Society and the Need for Solutions

And of course, we live in a world that is full of partisanship and plentiful confirmation bias. Given all that, the last thing we need is a diminishing baseline of the shared, trustworthy information upon which democracies thrive, where the specter of AI is used to plausibly believe things you want to believe, and plausibly deny things you want to ignore.

But I think there’s a way we can prevent that future, if we act now; that if we “Prepare, Don’t Panic,” we’ll kind of make our way through this somehow. Panic won’t serve us well. It plays into the hands of governments and corporations who will abuse our fears, and into the hands of people who want a fog of confusion and will use AI as an excuse.

How many people were taken in, just for a minute, by the Pope in his dripped-out puffer jacket? You can admit it. More seriously, how many of you know someone who’s been scammed by an audio that sounds like their kid? And for those of you who are thinking “I wasn’t taken in, I know how to spot a deepfake,” any tip you know now is already outdated. Deepfakes didn’t blink; they do now. Six-fingered hands were more common in deepfake land than in real life; not so much anymore. Technical advances erase those visible and audible clues that we so desperately want to hang on to as proof that we can discern real from fake.

The Necessity for Structural Solutions in Deepfake Detection

But it also really shouldn’t be on us to make that guess without any help.