Here is the full transcript and summary of Steven Zapata’s talk titled “The Problem With AI-Generated Art” at the TEDxBerkeley conference.
In this TEDx talk, art instructor Steven Zapata discusses the ethical concerns surrounding the use of AI-generated art. One major concern is the way these AI models are trained on example images, which are often creative content scraped from the web without consent or compensation given to the creators.
TRANSCRIPT:
Imagine we’re a hundred years in the future. Actually no. The way things are going, let’s talk about next year. You go through a difficult life event, maybe the end of a long relationship or a pet dies or a parent dies.
Soon after this event occurs, a series of obscured processes begin. You start to notice changes. You see ads to download Tinder, where before there were ads for engagement rings. The local animal shelter is suddenly in your scroll.
Odd, for sure, but to be expected so far, really. But now some new things start to happen. Your phone offers to show you some impossible images. Very realistic renderings of your dog happy in doggy heaven. But your phone knows your religion. So if you’re Catholic, your dog is happy in Catholic doggy heaven. A company tastefully, tastefully reaches out to ask if you would like to have a phone conversation with your recently deceased mother. In her voice, it will sound just like her.
And they can promise you that she will apologize for that last fight that you had. Many of you will hear these things and they will strike you as gross boundary violations. But surely there are some amongst you who think that some of this sounds genuinely soothing.
Now whatever your appetite for high tech therapy, the point that I want to make is that the technology needed to make this a reality is technology that’s already here.
You know that if you’ve ever uploaded art or photos on the internet or if you’ve ever left comments or written reviews, if you’ve liked anything, really, then you have in a sense done free work for tech companies. And if you think that they are using that data just to serve you better ads, well, maybe once that was the case. But not anymore.
Let me read you the stated mission of OpenAI. They’re the makers of the image generator DALL-E and the chatbot ChatGPT. OpenAI’s mission is to ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI. So their plan is to use your data to replace you for the benefit of you. Cool, yeah.
I wish I could say that this was all still in the realm of the hypothetical, but it has already begun. Just last year, of all things, automated art hit the scene with products like Midjourney, OpenAI’s DALL-E, and Stable Diffusion by Stability AI. These are text-to-image models.
So they take input from the user in the form of a natural language prompt, like you’re seeing right there, and then there’s a complicated process, and they return an image that matches that prompt. It automates art. It automates art. It’s exactly what you think it is. You type in some words and a picture comes out. Ooh, great. Nice, good. No, that’s good.
You know, I have to admit, I did not see that one coming. I have yet to see any of those long-promised self-driving cars on the road, but the Internet is filled with machine-made art. Do I sound mad? I mean, I am mad. I know it’s not cool to admit that you’re mad, but I’m just going to say I’m mad.
Of course I am, because I’m a visual artist. Lucky me. I’ve been a professional designer and illustrator for over a decade. I love drawing. I don’t know how to make that as clear as possible. I really love it. It has given me so many gifts. Discipline, perseverance, self-awareness, compassion.
It’s helped me get through grief and loss. And art-making can do that, because art-making is a vehicle for self-transformation. And it allows you, amazingly, to share that transformation with others. But it’s not the end product that transforms the artist. It’s the act of doing it. It’s the act of making the art.
In the age of automated art, we might still have art, but we stand to lose everything that there is to gain in the making. Now, these AI companies are going to say that their offerings are all of this. Ethical, compliant, fair. That’s all branding. They can just say those things. And the experience of artists with these systems so far is anything but that.
Consider the way that these models are currently trained off of our content. A text-to-image model needs to be trained off of millions to billions of example images. Now, these images are every kind of thing. Some of it is art. Some of it is photos. Some of it is innocuous. Some of it is more troubling. Much of it is people’s copyrighted creative content.
And all of it has been scraped from the web without consent, credit, or compensation being given to the creators, which is, that’s really something. Because it’s the quality of the training data that determines the quality of the model. The better the creative inputs, the better the creative outputs.
Now, usually, this is where I would go into some of the details about this training process, but I’m not going to foolishly attempt that in front of a room that probably has a lot of machine learning specialists in it.
Fortunately, the task of determining what the precise nature of this training is and whether it is legal, ethical, fair use, what have you, has recently fallen to the professionals. In the past two weeks, lawsuits have been filed against some of these companies. One is a suit brought in the U.K. by Getty Images against Stability AI, and the other is a class action lawsuit filed in the U.S. by artists.
Now, that’s very important to see, to see artists coming together and collectively advocating for their content, their data, their passion, and their livelihood. We’re going to need a lot more of that. These cases are going to give these complicated questions of copyright and fair use their day in court, which is desperately needed, because our copyright laws and our text and data mining exemptions were not made with these systems in mind.
Of course not. This is some sci-fi stuff that we’re dealing with. But whatever happens in court, we as a society and machine learning as an industry need to establish the legal and ethical groundwork that is going to govern all emerging systems of this sort. If we don’t, if we allow the appropriation of everybody’s creative work for the benefit of technology that is just going to turn around and compete directly against them in their very markets, we are going to do untold damage to the vigor and energy that people have for their work.
People like me and my students and my friends and you, because you don’t know what they’re going to automate next. I mean, they came for art sooner than anybody would have thought was possible. My peers have had the dismaying experience of finding their work in the data sets that are being used to train these systems that are going to undermine them in the marketplace forever.
Some of them have found that their names are being used on the prompting side to elicit work specifically in their style. Like Greg Rutkowski, whose name has been used in prompts hundreds of thousands of times against his will. And then there are those like Sam Yang, who has had fine-tuned models trained on his work and released that specialize in replicating his look.
Now he has these models floating around, putting out images in his style, and the models have his name attached to them. I don’t think any of us could imagine what that feels like unless we’ve been put in that position ourselves. To have all of your hard work turned against you like that, to think that every time you share your creativity, you are tacitly complying with your own replacement, this is a suffocating atmosphere to ask artists to live in.
And it’s clear that they won’t. I hear it from the students. I get messages from them every day sharing their disbelief and discouragement with these developments. They worry that they’re going to teach an AI system their style before they’ve had a chance to do anything good with it in the first place. I wish I could tell them that that wasn’t going to happen. So they’re retracting. They’re hiding away.
They’re not posting their work online. They’re going into closed communities. They’re moving behind paywalls. This is pretty antithetical to the spirit of art as we know it. And in this day and age, we need to be connecting with people more, not less. And the void that artists will leave behind can be filled a trillion times over by AI. It can produce work at an unprecedented rate. And whatever it may currently lack in finesse or specificity, it can certainly make up for in sheer volume.
I actually expect that the people currently prompting these systems will soon find themselves to be the all-too-human in a process that could run much faster without them. In October of last year, I posted a video on my YouTube where I talk about all of this. Don’t read the comments on that one, not good. And in it, I speculate that these image generator systems could be combined with text-generating systems to remove the need for human prompting.
In November of last year, that very next month, OpenAI released ChatGPT, their shockingly naturalistic text generator. I mean, you know the one, the one that I use to write this entire speech that I’m giving right now. I’m just kidding. Man, that would have been messed up, right? I could have done that. I didn’t do that. No. I didn’t do that. No. That was just a joke.
Anyway, yeah. It wasn’t long before people found out, yeah, you could use ChatGPT to write your prompts for you. Yeah. Oh, no, you need a picture of a person in a boat but don’t know what to write to get a sophisticated generation? Just ask ChatGPT to write you a gussied-up prompt and paste it right in. You don’t need a person to write a prompt.
It might just seem like you need a person to know what to want in the first place. I mean, if there’s something we’re all good at, it’s wanting things. You don’t even need to know what you want. We live in a society where companies are happy to tell you what to want.
Facebook, Google, YouTube, they’re investing billions in developing this ever-clearer picture of us through the one-way mirror of their services. Every day we tell them more about what we want to see, hear, play, buy, everything. It won’t be long before those data streams are plugged into the generative AI models. And then those combined systems will just automatically emit art, images, music, film, VR experiences, about every true and false thing, about every global and personal event. And they’ll just pipe them directly at you.
What are we going to do when within these tailored content streams we can’t tell what was made by a person and what was made by a machine? After a while, would you even keep trying to figure it out? Wouldn’t you just start to assume that everything was made by a machine? That’s a bad day for everyone, not just for artists. Is that the world that you want to live in?
Do you think that you should be forced to contribute to that world against your will just because you shared your work online? Clearly not. Models that automate creative work by training off of creative work need to be built on ethical grounds that honor the rights of content creators and that allow them to opt into systems that they’re interested in rather than scramble to opt out.
I also want to say that every noble, exciting, and fantastical thing that we can hope to do with the current crazy models, we can hope to do with an ethical version of them. I know people want these systems. I do think some version of these systems is part of our future. But we need to answer the tremendous question of how we’re going to integrate them into our work and our lives without supplanting ourselves and each other. We need to figure out who exactly we think the future is for. I think it’s for people, not for machines.
So what can we do? Well, first off, we need to acknowledge that we have power. These systems would not exist without our data and our content. And if the people making these things don’t want the valuable data that they use to train their products to up and vanish, they’re going to have to help keep the markets healthy.
So resist, speak up, tell these companies and lawmakers and the websites that you use that you want your data and your content protected. This is an opportunity for artists, creatives, people of all sorts to come together and defend each other. We have never had such a desperate need for collective action. The very heart of creation could be on the line.
And if we really want the future to be aligned for our benefit, we need to remember why we got into art in the first place. Because we get an experiential joy from doing it, from making it. Is that something that we are really ready to give up to machines today? I’m not.
Thank you.
SUMMARY OF THIS TALK:
Steven Zapata, in his talk “The Problem With AI-Generated Art,” raises critical concerns about the impact of generative AI systems on art and society. Here are the key points from his talk in a summarized form:
- Technological Intrusion in Personal Life: Zapata begins by illustrating a future where AI intervenes in personal life events, such as showing images of deceased pets in heaven or simulating conversations with a deceased relative. This raises ethical concerns about privacy and emotional manipulation.
- Generative AI Systems and Personal Data: He emphasizes that the technology needed for such interventions already exists. Generative AI systems use personal data, which we often provide unknowingly through online interactions, to create highly tailored and potentially invasive content.
- Mission of AI Companies and Data Use: Zapata criticizes AI companies like OpenAI (makers of DALL-E and ChatGPT) for using personal data to potentially replace human roles. He questions the ethics behind such practices.
- Impact on Art and Artists: As a visual artist, Zapata expresses concern about automated art generation tools like Midjourney, DALL-E, and Stable Diffusion. He argues that they undermine the transformative experience of creating art, reducing it to a mere product of automated processes.
- Legal and Ethical Issues in Training AI Models: Zapata highlights the problematic nature of training AI models using data scraped from the web without consent, credit, or compensation to creators. This practice raises questions about copyright infringement and fair use.
- Lawsuits and Artist Advocacy: Recent lawsuits against companies like Stability AI signify artists’ collective action to protect their rights. Zapata sees this as a crucial step in addressing legal and ethical challenges in AI art generation.
- Risk of Diminishing Human Artistry: He warns of a future where human creativity is overshadowed by the volume and speed of AI-generated art. Artists may retreat from public platforms, limiting the shared cultural experience of art.
- Convergence of AI Systems: Zapata speculates about the future integration of image and text-generating systems, which could further reduce human involvement in the creative process.
- Societal Implications: The talk raises concerns about a future where it’s hard to distinguish between art made by humans and machines, potentially devaluing human creativity and expression.
- Call for Ethical AI Development: Zapata argues for the development of AI systems on ethical grounds that respect creators’ rights. He urges collective action to influence companies and lawmakers in protecting data and content.
- Power of Collective Action: He encourages artists and the public to demand protection of their data and content, emphasizing the power of collective resistance and advocacy.
- Reaffirming the Value of Human Creativity: Finally, Zapata stresses the need to remember the intrinsic joy and transformative experience of creating art, which should not be readily relinquished to machines.
Throughout his talk, Zapata articulates a passionate plea for mindful and ethical development of AI technologies, especially in the realm of art, to ensure they complement rather than replace human creativity and expression.