Here is the transcript of Sasha Luccioni’s talk titled “AI Is Dangerous, But Not For The Reasons You Think” at TED conference.
In her TED talk, “AI Is Dangerous, But Not For The Reasons You Think,” Sasha Luccioni, an AI researcher, discusses the immediate, tangible impacts of AI, rather than hypothetical future risks. She highlights the environmental cost of AI models, emphasizing their substantial energy consumption and carbon emissions.
Luccioni also addresses issues of copyright infringement and the unauthorized use of artists’ and authors’ works in AI training. She delves into the biases inherent in AI systems, such as racial and gender biases, and their real-world consequences. Lastly, Luccioni advocates for the development of tools to understand, measure, and mitigate these impacts, urging a more responsible and sustainable approach to AI development.
TRANSCRIPT:
So, I’ve been an AI researcher for over a decade. A couple of months ago, I got the weirdest email of my career. A random stranger wrote to me saying that my work in AI is going to end humanity. Now, I get it, AI is so hot right now.
The Impact of AI in Society
It’s in the headlines pretty much every day, sometimes because of really cool things like discovering new molecules for medicine or that dope Pope in the white puffer coat. But other times, the headlines have been really dark, like that chatbot telling a guy he should divorce his wife or that AI meal planner app proposing a recipe featuring chlorine gas. And in the background, we’ve heard a lot about doomsday scenarios, existential risks, and the singularity, with letters being written and events being organized to make sure that doesn’t happen.
Now, I’m a researcher who studies AI’s impacts on society, and I don’t know what’s going to happen in 10 or 20 years, and nobody really does. But what I do know is that there are some pretty nasty things going on right now, because AI doesn’t exist in a vacuum. It is part of society, and it has impacts on people and the planet.
AI’s Environmental Impact
AI models can contribute to climate change. Their training data uses art and books created by artists and authors without their consent. And their deployment can discriminate against entire communities. So we need to start tracking these impacts. We need to start being transparent and disclosing them, and creating tools so that people understand AI better, so that hopefully future generations of AI models will be more trustworthy and sustainable, and maybe less likely to kill us, if that’s what you’re into.
But let’s start with sustainability, because that cloud that AI models live on is actually made out of metal and plastic, and powered by vast amounts of energy. And each time you query an AI model, it comes with a cost to the planet. Last year, I was part of the BigScience initiative, which brought together a thousand researchers from all over the world to create Bloom, the first open large language model, like ChatGPT, but with an emphasis on ethics, transparency, and consent.
The Energy Cost of AI Models
And the study I led that looked at Bloom’s environmental impacts found that just training it used as much energy as 30 homes use in a whole year and emitted 25 tons of carbon dioxide, which is like driving your car five times around the planet just so somebody can use this model to tell a knock-knock joke. And this might not seem like a lot, but other similar large language models, like GPT-3, emit 20 times more carbon. But the thing is, tech companies aren’t measuring this stuff. They’re not disclosing it. And so this is probably only the tip of the iceberg, even if it is a melting one.
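As a rough sanity check on the “five times around the planet” comparison, a back-of-the-envelope calculation lines up with the 25-ton figure. Note that the per-kilometer car emission factor below is an assumed average for a passenger car; it is not a number from the talk.

```python
# Back-of-the-envelope check of the talk's driving comparison.
EARTH_CIRCUMFERENCE_KM = 40_075      # equatorial circumference of Earth
CAR_EMISSIONS_KG_PER_KM = 0.125      # assumed average for a passenger car

distance_km = 5 * EARTH_CIRCUMFERENCE_KM          # five trips around the planet
co2_tonnes = distance_km * CAR_EMISSIONS_KG_PER_KM / 1000

print(round(co2_tonnes, 1))  # → 25.0, matching the ~25 tons from training Bloom
```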
The Trend of Increasing AI Model Sizes
In recent years, we’ve seen AI models balloon in size, because the current trend in AI is “bigger is better.” Large language models in particular have grown 2,000 times in size over the last five years. And of course, their environmental costs are rising as well. The most recent work I led found that switching from a smaller, more efficient model to a larger language model emits 14 times more carbon for the same task, like telling that knock-knock joke.
Current Tangible Impacts of AI
And as we’re putting these models into cell phones and search engines and smart fridges and speakers, the environmental costs are really piling up quickly. So instead of focusing on some future existential risks, let’s talk about current tangible impacts and tools we can create to measure and mitigate them. I helped create CodeCarbon, a tool that runs alongside AI training code and estimates the amount of energy it consumes and the amount of carbon it emits.
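The idea behind such a tool is simple: sample the hardware’s power draw over the run, convert that to energy, and multiply by the carbon intensity of the local electricity grid. The sketch below illustrates that estimate with hypothetical numbers; it is a minimal illustration of the principle, not CodeCarbon’s actual implementation.

```python
def estimate_emissions_kg(avg_power_watts: float,
                          runtime_hours: float,
                          grid_intensity_kg_per_kwh: float) -> float:
    """Rough carbon estimate: energy used (kWh) times grid carbon intensity.

    Mirrors the principle behind tools like CodeCarbon; real trackers
    sample actual CPU/GPU power draw and look up regional grid data.
    """
    energy_kwh = avg_power_watts / 1000.0 * runtime_hours
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical example: a 300 W GPU training for 24 hours on a grid
# emitting 0.4 kg CO2e per kWh → 7.2 kWh → 2.88 kg CO2e.
print(round(estimate_emissions_kg(300, 24, 0.4), 2))
```

Comparing such estimates across models, or across grid regions (a data center on hydro power versus one on coal), is what makes the informed choices mentioned next possible.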
Using a tool like this can help us make informed choices, like choosing one model over another because it’s more sustainable, or deploying AI models on renewable energy, which can drastically reduce their emissions. But let’s talk about other things, because there are other impacts of AI apart from sustainability. For example, it’s been really hard for artists and authors to prove that their life’s work has been used for training AI models without their consent.
Addressing Copyright Issues in AI
And if you want to sue someone, you tend to need proof, right? So Spawning.ai, an organization that was founded by artists, created this really cool tool called “Have I Been Trained?” And it lets you search these massive data sets to see what they have on you.
Now, I admit it, I was curious. I searched LAION-5B, which is this huge data set of images and text, to see if any images of me were in there. Now, the first two images, that’s me from events I’ve spoken at. But the rest of the images, none of those are me. They’re probably of other women named Sasha who put photographs of themselves up on the internet.