
How To Spot Fake AI Photos: Hany Farid (Transcript)

Read the full transcript of digital forensics expert Hany Farid’s talk titled “How To Spot Fake AI Photos”, recorded at TED2025 on April 10, 2025.

HANY FARID: You are a senior military officer, and you’ve just received a chilling message on social media. Four of your soldiers have been taken, and if demands are not met in the next ten minutes, they will be executed. All you have to go on is this grainy photo, and you don’t have the time to figure out if four of your soldiers are in fact missing. What’s your first move?

If I may be so bold, your first move is to contact somebody like me and my team. I am by training an applied mathematician and computer scientist, and I know that seems like a very strange first call at a moment like this, but I’ve spent the last 30 years developing technologies to analyze and authenticate digital images and digital videos.

Along the way, we’ve worked with journalists, we’ve worked with courts, we’ve worked with governments on a range of cases, from a damning photo of a cheating spouse, gut-wrenching images of child abuse, photographic evidence in a capital murder case, and of course things that we just can’t talk about.

The Escalating Crisis

It used to be a case would come across my desk once a month, and then it was once a week. Now, it’s almost every day. And the reason for this escalation is a combination of things. One, generative AI. We now have the ability to create images that are almost indistinguishable from reality. Two, social media dominates the world and is largely unregulated and actively promotes and amplifies lies and conspiracies over the truth.

And collectively this means that it’s becoming harder and harder to believe anything that we read, see, or hear online. I contend that we are in a global war for truth, with profound consequences for individuals, for institutions, for societies, and for democracies. And I’d like to spend a little time talking today about what my team and I are doing to try to return some of that trust to our online world and in turn our offline world.

The Evolution of Photo Manipulation

For 200 years, it seemed reasonable to trust photographs. But even in the mid-1800s, it turns out, the Victorians had a sense of humor: they manipulated images. Photos could also be used to alter history. If you fell out of favor with Stalin, for example, you might be airbrushed out of the history books.

But then, at the turn of the millennium, with the rise of digital cameras and photo-editing software, it became easier and easier to manipulate reality. And now, with generative AI, anybody can create any image of anything, anywhere, at the touch of a button. One, four soldiers tied up in a basement. Two, a giraffe trying on a turtleneck sweater.


It’s not fun and games, of course, because generative AI is being used to supercharge past threats and create entirely new ones. The creation of nudes of real women and children used to humiliate or extort them. Fake videos of doctors promoting bogus cures for serious illnesses. A Fortune 500 company losing tens of millions of dollars because an AI impersonator of their CEO infiltrated a video call. Those threats are real, they are here, and we are all vulnerable.

How Generative AI Works

Before we talk about how we would analyze this image to determine if it’s real or not, it’s useful to understand how generative AI works. Starting with billions of images, each paired with a descriptive caption like this, every image is degraded until nothing but visual noise is left: a random array of pixels. And then the AI model learns how to reverse that process, essentially turning that noise back into the original image.

And when this process is done not once, not twice, but billions of times on a diverse set of images, the machine has learned how to convert noise into an image that is semantically consistent with anything you type. And it’s incredible. But it is decidedly not how a natural photograph is taken, which is the result of converting light that strikes an electronic sensor into a digital representation.
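The degrade-then-reverse loop described above can be caricatured in a few lines. Below is a toy sketch of only the forward (noising) half, on a 1-D signal standing in for an image; real diffusion models use learned neural networks and carefully designed noise schedules, none of which appears here, and the linear blend is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D ramp of pixel values in [0, 1].
image = np.linspace(0.0, 1.0, 64)

def add_noise(x, t, T=1000):
    """One forward-diffusion step, crudely modeled: blend the signal
    toward pure Gaussian noise. At t = T the signal is fully destroyed,
    which is the state a generative model learns to reverse."""
    alpha = 1.0 - t / T                      # fraction of signal kept
    noise = rng.standard_normal(x.shape)
    return alpha * x + (1.0 - alpha) * noise

slightly_noisy = add_noise(image, t=10)      # mostly image, a little noise
pure_noise = add_noise(image, t=1000)        # alpha == 0: nothing but noise
```

Running the reverse direction billions of times over captioned images is what teaches the model to turn noise into a picture matching any prompt.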

Detection Technique 1: Noise Analysis

And so one of the first things we like to look at is whether the residual noise in an image looks more like a natural image or an AI-generated image. Here for example is our real dog and our AI dog. And here is the residual noise that I’ve extracted. And if you look at this, it’s not at all obvious that there’s any difference between those two patterns.

But here, in this visualization of the noise, you can see a decidedly different pattern between the natural and the artificial. Those star-like patterns are a telltale sign of generative AI. For the mathematicians and the physicists in the audience, that is the magnitude of the Fourier transform of the noise residual. For everybody else, that detail doesn’t matter, but you definitely should have taken more math in college. Professors can’t help themselves.

So let’s apply this analysis to this image. Here’s the noise residual that I’ve extracted, and there’s that star-like pattern that you see in the bottom right. Our first suggestion that something may be wrong here. But no forensic technique is perfect, and so you don’t stop after one thing. You keep going.
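As a rough illustration of this idea (not the speaker's actual forensic tool), one can subtract a smoothed copy of an image to isolate high-frequency residual noise, then look at the magnitude of its 2-D Fourier transform; periodic peaks in that spectrum correspond to the star-like pattern described above. The box blur here is a stand-in assumption for a proper denoising filter:

```python
import numpy as np

def noise_residual(img, k=3):
    """Residual = image minus a smoothed copy, which keeps mostly
    high-frequency noise. A k x k box blur stands in for the stronger
    denoisers a real forensic pipeline would use."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = sum(
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(k) for j in range(k)
    ) / (k * k)
    return img - smooth

def fourier_magnitude(residual):
    """Magnitude of the 2-D Fourier transform, low frequencies centered.
    Star-like peaks here are the telltale sign of a generative model."""
    return np.abs(np.fft.fftshift(np.fft.fft2(residual)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))                  # stand-in for a grayscale photo
spectrum = fourier_magnitude(noise_residual(img))
```

A natural photo's spectrum tends to look unstructured, while the periodic artifacts of an AI generator show up as regularly spaced bright spots.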

Detection Technique 2: Vanishing Points

So let’s go on to our next one: vanishing points. If you photograph parallel lines in the physical world, in the image they will converge to a single point, what’s called the vanishing point. A good intuition for this is railroad tracks. When I took this photo, the railroad tracks were obviously parallel, but you can see that they narrow and converge as they recede away from me, intersecting at a single vanishing point. This is a phenomenon that artists have known about for centuries.
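In homogeneous coordinates, this geometry is two cross products: a line through two image points is their cross product, and the intersection of two lines is again a cross product. The rail coordinates below are invented purely for illustration; this is a sketch of the geometry, not the team's forensic software:

```python
import numpy as np

def line_through(p, q):
    """Line through two 2-D points, as a homogeneous 3-vector (cross product)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines; divide out the scale factor."""
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

# Two rails that are parallel in the world but converge in the photo
# (pixel coordinates made up for this example).
left_rail = line_through((100, 400), (180, 100))
right_rail = line_through((300, 400), (220, 100))
vanishing_point = intersection(left_rail, right_rail)   # x = 200, y = 25
```

A forensic check can extend this: lines that should share a vanishing point (rails, building edges) but intersect in different places suggest the image's geometry is inconsistent.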


But here’s the great thing, AI doesn’t know this.