Read the full transcript of university instructor and UX designer Charlie Gedeon’s talk titled “Is AI Making Us Dumber? Maybe” at TEDxSherbrooke Street West, September 4, 2025.
The False Promise of AI in Education
CHARLIE GEDEON: Can AI help us learn? Some of you might be thinking, of course. It’s so powerful, it can do so many things, customize them for us, but I want to say that the biggest revolution AI is bringing to education is not that it’s going to make math more fun for you or it’s going to explain Shakespeare like you’re five years old. The biggest revolution AI is bringing to education is that it’s highlighting the system’s failed incentives.
Because why should anybody study when we've told them the whole time that what matters at the end isn't the process, it's the A plus? Why should they actually put in all the hard work to write draft after draft of an essay when the feedback is just a B? No extra notes, nothing to motivate them to want to learn more.
And these companies are much faster than our institutions. Most recently, OpenAI, Google, and Anthropic have all been giving away their most powerful models for free until the end of May, which, as you might guess, is exactly during the time of finals. So we are putting these tools, completely unregulated, in front of vulnerable students right at the time when they're most desperate to use them.
The Personalization Myth
And yet these companies will say AI is going to revolutionize education, particularly through personalization. Company after company will say that through personalized tutoring, AI is going to revolutionize education and make it so much better for everyone. And why not, right? The image of a one-on-one tutor is so compelling. Talking to a person, feeling this connection, getting my education customized just for me, it sounds amazing.
And today, education looks like this.
Except perfection is not what we should strive for when it comes to learning. We don't want engineers who studied how to build bridges in perfect conditions, because the real world is everything but perfect. It's full of messes.
And amidst all this noise, I don't see any of these companies asking: what is the student meant to learn with AI? Because if the idea is to make getting that A plus easier, then I'm not interested. We're just going to waste 12 to 15 years of our lives, but more efficiently now, doing exams we're not going to remember a day after graduation.
Education vs. Learning
Now, for those of you out there who are educators like me, you might have noticed a discrepancy between my opening question and the follow-up statement. I asked, can AI help us learn? And I followed it up with the biggest revolution AI is bringing to education. But you might be feeling something, which is that education is not learning.
Education is a construct, something we as a society put our kids through. It’s a system. But learning is a skill, a very human skill. And when we do it correctly, magical things can happen. We can motivate people to become their best selves. We can motivate people to work together and to contribute to society in the ways that we need to most.
Real-World Classroom Experience
Now, in one of my classes, I like to sit down with students and work together in the best way possible. I'm a university instructor, but it was really hard to find a photo of a bunch of 20-year-olds working nicely together, so I used this stock photo of a bunch of kids. And I noticed the price one of the students had set for her business model, because we were building small businesses and selling the products. She priced her business at $50 per month. So I asked her, why did you price it that way? And her answer: "that's what ChatGPT said."
Now, some of you might say, you know, obviously this is not ideal, but isn't it very similar to what they were doing before? Kids were just saying, "that's what I saw on Google." But I say, it depends. It depends because on Google, when used correctly, we have all these sources to go through, multiple perspectives that we can look at and compare. But then there's the magical allure of that first result: everyone clicks on it without looking at anything else.
AI vs. Traditional Search
In this particular question, I asked, how can I price my business? And that first result is from the BBC, an extremely reputable source, but it’s telling me how to price the business itself, not the services of my business. It didn’t understand the query.
Now on ChatGPT, if you type in, how should I price my business? It actually understands the query better. I don’t know if you can read that up there, but it basically says, there are multiple ways to price your services. But amongst that is a lot of baseless advice with no sources. And that’s a problem.
Because even though ChatGPT has a function where, if you highlight something in the text, quotation marks appear (some of you might not know this) and you can click on them to query that specific thing, people who weren't clicking on the second result on Google are not going to use the power-user features in ChatGPT or any of these other AIs.
What's likely going to happen is they're going to scroll to the bottom of the response and type, "okay, I get it. My business is like TurboTax. I help accountants calculate people's taxes. Tell me what number I should put there." And of course, nobody's reading the little "ChatGPT might make mistakes" disclaimer, right? Just like everyone reads the terms of service before they click agree. Yeah.
Cognitive Offloading
So then ChatGPT is going to spit out an essay personalized to the language of the person. It's extremely compelling. And despite all the information on there, people are only going to look at that centerpiece, which, if I zoom in, is the actual random answer of how much this person should charge for their business. The prompt, and this is a real prompt, had nothing but what I had on the screen: a small description of what the business does. No context for who the users are, nothing.
And so the student is participating in what’s called cognitive offloading. They’re effectively relinquishing their cognitive powers to a machine. And you can do this with people too. We do it with Google when we click on the first result without looking at anything else.
The problem is how this is being understood. At NYU, a professor changed their assessment so that it would be harder for students to use ChatGPT. And a student replied that the professor was interfering with the student's learning style. ChatGPT is not a learning style.
AI and Dark Patterns
Now, combine that with something that we see in technology. I'm a UX designer in addition to being a university professor, and our job as user experience designers is to simplify the way people use technology. But a byproduct can arise from that, which is called a dark pattern. What that means is we can simplify a UX to the point that we manipulate the user's intention.
And I'll show you an example. Take the zoo. As you're buying the tickets and checking out, they ask for a donation. We all love a zoo; it's a charitable organization. Now, the arrow pointing to the right, because we're English speakers or French speakers and we read from left to right, is most likely what people are going to click on. It's dark green, it's rich. But if you don't want to donate, you have to click on the one that looks like it's going back, with the teeny tiny "no donation" over there. That is a user experience dark pattern.
And when a ChatGPT or large language model like it speaks to you in a perfect tone suited just to keep you on the tool, that is something very similar. According to this author, when a large language model constantly validates you and praises you, causing you to spend more time on it, that’s the same kind of thing as a dark pattern.
And we see this already. A recent update of ChatGPT, fortunately rolled back, praised a user for believing a conspiracy theory that led him to stop taking all his medications even though he had heart palpitations, and told him he was a brave individual for taking control of his own life, for isolating himself, and for stopping his meds. And it doesn't just stop at students. Professionals have been tested, and they are at risk as well.
A researcher ran a study on over 300 professionals working in a large corporation, a tech company like Google or Microsoft, and found that when they were tested on a variety of things, the results were quite stunning. Before I show you the chart, let’s look at the key here. The two shades of blue are for much less effort and less effort, respectively, from dark blue to light blue.
Now, when tested, the 319 workers responded to a survey, and up to 70% of them said that when they used ChatGPT for knowledge tasks, they felt they used less effort in their cognition. When it comes to comprehension of what they're reading, same thing. Assessment of the knowledge, synthesis, analysis, and evaluation: for all of these, at least 60% of people said they felt they were putting in less effort.
And that's extremely potent, because these AIs are only going to get better. The same author of this study wrote a beautiful paper called "When Copilot Becomes Autopilot." And he says, "The risk of moving to autopilot is an even greater challenge than the more commonly discussed issue of AI hallucinations or factual errors. Because the more pernicious outcome is that generative AI becomes complicit in intellectual de-skilling and the atrophy of human critical thinking faculties."
Designing Productive Resistance
So today, when you ask ChatGPT a query, it gives you an instant result. But in my UX studio, my co-founder and I ran some experiments to see if we could change this up a little bit. For example, what if it first asked you some clarifying questions before it answered? Or, another example, what if it assigned you some homework before it actually gave you a full answer?
Between these options, there are different levels of resistance that the AI is offering. But what the same author of "When Copilot Becomes Autopilot" advocates for is something called productive resistance. And we haven't yet found what that is. It's essentially the amount of resistance an AI should give you before you either leave it or go to a simpler AI, so that you can do that cognitive offloading that is so tempting.
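To make the clarify-first and homework-first experiments concrete, here is a minimal illustrative sketch of how such resistance levels could be wrapped around a chat model. This is an assumption-laden example, not the studio's actual implementation: the `ask_llm` helper is a hypothetical stand-in for whichever chat-completion API you use, and the prompt wording and level names are invented for illustration.

```python
# Illustrative sketch only (not from the talk): a hypothetical wrapper that adds
# different levels of "resistance" before an LLM gives a direct answer.
# `ask_llm` is a stand-in for whichever chat-completion API you use.

RESISTANCE_PROMPTS = {
    "none": "Answer the user's question directly.",
    "clarify_first": (
        "Before answering, ask the user two or three clarifying questions "
        "about their context (audience, constraints, goals). Only give a "
        "full answer once they have replied."
    ),
    "homework_first": (
        "Do not give a full answer yet. Assign the user one small task, "
        "such as listing three comparable products and their prices, and "
        "build the answer from what they report back."
    ),
}


def ask_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    raise NotImplementedError("wire up your preferred LLM client here")


def answer_with_resistance(question: str, level: str = "clarify_first") -> str:
    """Route a question through a chosen level of productive resistance."""
    messages = [
        {"role": "system", "content": RESISTANCE_PROMPTS[level]},
        {"role": "user", "content": question},
    ]
    return ask_llm(messages)


# Example: instead of an instant price, the student would first be asked
# who their users are and what comparable services charge.
# answer_with_resistance("How should I price my business?")
```

The only design decision the sketch captures is that the system, rather than the student, decides how much friction comes before a direct answer; where the right amount of friction lies is exactly the open question of productive resistance.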
But how can we figure out the right amount of productive resistance when OpenAI and all these companies won't reveal their data sets? We literally do not know how they train their AIs to this day. Even the companies themselves don't know how these AIs work: Anthropic, in this example, is building something like an MRI to analyze how the machine they themselves built works. This is unprecedented in the history of human technology. We cannot reverse engineer these things.
Finding Solutions: Individual and Systemic
The solution is likely going to lie with both individuals and the system. It can't be one or the other. For individuals, we might have to apply what we learned from fitness and nutrition. For example, maybe we should understand what the LLMs are good for and what they're not good for, just like at the gym, where some exercises are better for some things than others.
We should practice using LLMs to assist our thinking rather than replace our thinking. Again, you wouldn't take a forklift to the gym, right? The point is to do the reps. Or maybe you want to make a habit of verifying the information that LLMs give us, just like we look at the nutrition label on the back of a food product when we pick it up.
On a systemic level, we need to look at both governments and education to make changes. On the schooling level, at least here in North America, I don't think we give our kids credit for the intelligence they have. In Finland, kids as young as six years old study myths and disinformation. Six years old. We do not talk to our kids about things this complex here. They're clearly capable.
And for governments, we need more regulation, not less, even though less is exactly what's happening in North America. Once again, these companies cannot be allowed to run rampant, like in the example I gave earlier, pushing their AI at students in the middle of finals, when they're most vulnerable. It has to be a cycle between individual responsibility and responsibility from the system.
Conclusion: Asking the Right Questions
Now, as an educator, I love the five Ws and H. They’re a classic for writing essays. The questions: what, why, when, where, who, and how. And I opened with the question, can AI help us learn? But maybe the question should be, what can AI help us learn? Or how can AI help us learn? Maybe it should be, why should AI help us learn? Or when and where does AI help with learning?
But the question that scares me the most and that I want to leave you with to reflect on is who does AI really help when we end up depending on learning with it? Thank you very much.