
The Problem With AI-Generated Art: Steven Zapata (Transcript)

Here is the full transcript and summary of Steven Zapata’s talk titled “The Problem With AI-Generated Art” at TEDxBerkeley conference.

In this TEDx talk, art instructor Steven Zapata discusses the ethical concerns surrounding the use of AI-generated art. One major concern is the way that AI models are trained on example images, which are often creative content scraped from the web without consent or compensation given to the creators.


TRANSCRIPT:

Imagine we’re a hundred years in the future. Actually no. The way things are going, let’s talk about next year. You go through a difficult life event, maybe the end of a long relationship or a pet dies or a parent dies.

Soon after this event occurs, a series of obscured processes begin. You start to notice changes. You see ads to download Tinder, where before there were ads for engagement rings. The local animal shelter is suddenly in your scroll.

Odd, for sure, but to be expected so far, really. But now some new things start to happen. Your phone offers to show you some impossible images. Very realistic renderings of your dog happy in doggy heaven. But your phone knows your religion. So if you’re Catholic, your dog is happy in Catholic doggy heaven. A company tastefully, tastefully reaches out to ask if you would like to have a phone conversation with your recently deceased mother. In her voice, it will sound just like her.

And they can promise you that she will apologize for that last fight that you had. Many of you will hear these things and they will strike you as gross boundary violations. But surely there are some amongst you who think that some of this sounds genuinely soothing.

Now whatever your appetite for high tech therapy, the point that I want to make is that the technology needed to make this a reality is technology that’s already here. They’re generative AI systems. And the data needed to make this a reality is data that we already give to companies. You all know that.

You know that if you’ve ever uploaded art or photos on the internet or if you’ve ever left comments or written reviews, if you’ve liked anything, really, then you have in a sense done free work for tech companies. And if you think that they are using that data just to serve you better ads, well, maybe once that was the case. But not anymore.

Let me read you the stated mission of OpenAI. They’re the makers of the image generator DALL-E and the chatbot ChatGPT. OpenAI’s mission is to ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI. So their plan is to use your data to replace you for the benefit of you. Cool, yeah.

I wish I could say that this was all still in the realm of the hypothetical, but it has already begun. Just last year, of all things, automated art hit the scene with products like Midjourney, OpenAI’s DALL-E, and Stable Diffusion by Stability AI. These are text-to-image models.

So they take input from the user in the form of a natural language prompt, like you’re seeing right there, and then there’s a complicated process, and they return an image that matches that prompt. It automates art. It automates art. It’s exactly what you think it is. You type in some words and a picture comes out. Ooh, great. Nice, good. No, that’s good.

You know, I have to admit, I did not see that one coming. I have yet to see any of those long-promised self-driving cars on the road, but the Internet is filled with machine-made art. Do I sound mad? I mean, I am mad. I know it’s not cool to admit that you’re mad, but I’m just going to say I’m mad.

Of course I am, because I’m a visual artist. Lucky me. I’ve been a professional designer and illustrator for over a decade. I love drawing. I don’t know how to make that as clear as possible. I really love it. It has given me so many gifts. Discipline, perseverance, self-awareness, compassion.

It’s helped me get through grief and loss. And art-making can do that, because art-making is a vehicle for self-transformation. And it allows you, amazingly, to share that transformation with others. But it’s not the end product that transforms the artist. It’s the act of doing it. It’s the act of making the art.


In the age of automated art, we might still have art, but we stand to lose everything that there is to gain in the making. Now, these AI companies are going to say that their offerings are all of this: ethical, compliant, fair. That’s all branding. They can just say those things. And the experience of artists with these systems so far is anything but that.

Consider the way that these models are currently trained on our content. A text-to-image model needs to be trained on millions to billions of example images. Now, these images are every kind of thing. Some of it is art. Some of it is photos. Some of it is innocuous. Some of it is more troubling. Much of it is people’s copyrighted creative content.

And all of it has been scraped from the web without consent, credit, or compensation being given to the creators, which is, that’s really something. Because it’s the quality of the training data that determines the quality of the model. The better the creative inputs, the better the creative outputs.

Now, usually, this is where I would go into some of the details about this training process, but I’m not going to foolishly attempt that in front of a room that probably has a lot of machine learning specialists in it.

Fortunately, the task of determining what the precise nature of this training is and whether it is legal, ethical, fair use, what have you, has recently fallen to the professionals.