
When AI Can Fake Reality, Who Can You Trust? – Sam Gregory (Transcript)

Here is the full transcript of Sam Gregory’s talk titled “When AI Can Fake Reality, Who Can You Trust?” delivered at a TED conference.

In this TED talk, technologist and human rights advocate Sam Gregory discusses the increasing difficulty in distinguishing between real and AI-generated content, highlighting the advancements in generative AI and deepfakes. He underscores the growing threat these technologies pose, particularly in spreading falsified sexual images and undermining trust in information.

Gregory, who leads the human-rights group WITNESS, details their efforts to combat deepfakes through the “Prepare, Don’t Panic” initiative and a rapid-response task force of media-forensics experts. He shares insights from cases they’ve encountered, emphasizing the challenges experts face in definitively identifying deepfakes. Gregory stresses the need for comprehensive solutions, including better deepfake detection tools, media literacy, and content provenance through techniques like watermarking and cryptographically signed metadata.

He advocates for a responsible AI pipeline involving government oversight, transparency, and accountability. Gregory concludes by emphasizing the importance of taking proactive steps to prevent a future where distinguishing between real and fake becomes increasingly challenging.

The Challenge of Distinguishing Real from Fake in the AI Era

It’s getting harder, isn’t it, to spot real from fake, AI-generated from human-generated. With generative AI, along with other advances in deep fakery, it doesn’t take many seconds of your voice, many images of your face, to fake you, and the realism keeps increasing.

I first started working on deepfakes in 2017, when the threat to our trust in information was overhyped, and the big harm, in reality, was falsified sexual images. Now that problem keeps growing, harming women and girls worldwide.

But with advances in generative AI, we’re now approaching a world where it’s broadly easier to make fake reality, and also to dismiss reality as possibly faked. Now, deceptive and malicious audiovisual AI is not the root of our societal problems, but it’s likely to contribute to them. Audio clones are proliferating in a range of electoral contexts. “Is it, isn’t it” claims cloud human-rights evidence from war zones, sexual deepfakes target women in public and in private, and synthetic avatars impersonate news anchors.
