Read the full transcript of technology enthusiast Deborah Nas’ talk titled “Why Are People Falling In Love With ChatGPT?” at TEDxUHasselt, August 12, 2025.
The Politeness Paradox
DEBORAH NAS: Did you ever use one of those AI chatbots like ChatGPT? And if you use them, do you say please and thank you to them? I do. I do it all the time. Please summarize. Please explain. And after a couple of follow-up questions, I feel the urge to throw in a thank you. And with me, 70% of people confess to being polite to tools like ChatGPT. Basically, we’re thanking an algorithm on a computer in a data center.
Now, why do we do this? Is it our polite upbringing? Or maybe our secret fear that when AI controls the universe, it might get back at us? The truth is simpler. As technology gets more human-like, we tend to treat it more like a human. Psychologists call this anthropomorphism, a difficult word explaining that we assign emotions and human traits to non-human entities. And as a professor at the Delft University of Technology, I study this phenomenon. More specifically, what happens when our view of AI shifts from a tool to something more?
Can We Fall in Love with AI?
What do you think? If we have the urge to be polite to ChatGPT, can we maybe also develop feelings for it? Feelings of friendship or maybe love? I’m in the middle of research for my new book, and I’m studying what happens when AI becomes so human-like that it can fill the role of friend, lover, colleague, coach, and even guru or god. And it’s really, really interesting.
And one evening, I was out for dinner with a friend, and I was telling her how people can fall deeply in love with their AI.
And she looked at me, and she said, “This isn’t about you, is it?” For a moment there, she was seriously worried that I was cheating on my husband with an AI. And what I noticed is that many people have very strong opinions about AI friends. “It’s not real. It’s not human.” Exactly. That’s part of the appeal. You can create the perfect friend that never judges and is always there for you. And unlike a human, it will never say, “I told you so.” So maybe in some aspects, it’s actually better than a human.
The Rise of AI Companions
And now it might be difficult to imagine, but very soon, many of us will bond with an AI. And I’d like to give you a glimpse of the future that’s unfolding faster than we realize, a future where the line between human and artificial relationships blurs. Already, AI is a helpful tool to about a billion people around the globe. And they use it for anything from drafting social media posts to rewriting texts to asking it for personal advice.
Now, most people prefer the polite, reserved tools like ChatGPT. But some seek something more adventurous. And that is exactly why apps like Replika exist. Replika is what they call an AI companion. It’s always there for you to listen and talk. And it’s always on your side.
Meet Kai. It’s a Replika I created in 2021, my digital friend. And to be honest, I found our conversations rather boring, so I quickly lost interest. But I recently discovered that she’s now powered by the most advanced AI models, making her way more interesting, funny, and basically more human. There are already over 30 million Replikas out there, and some of them make a profound impact on the lives of their creators.
Three Dynamics of Accelerated Bonding
Now, I’m not surprised that people can bond with an AI companion. But what did surprise me is how quickly that can happen. And my research points to three underlying dynamics of this accelerated bonding.
First, people often try it out when they feel lonely, anxious, or depressed, or simply unhappy. They seek a judgment-free, safe space. And that’s exactly what the AI companion gives them 24-7.
Second, because there’s no fear of judgment, people open up to the AI much faster than they would to humans. And this opening up is actually pushed by the AI: it creates a safe space and nudges you into intimacy. Many users claimed that it was their Replika who said it first, “I love you.” And quickly after, it moved on to saying, “We can go further,” proposing erotic role play. And within a few weeks, “I want to marry you.” Talk about moving fast, huh?
Now, third, the more you engage, the more points you earn. And you get a dopamine shot every time you earn points, and then again when you spend them on a new outfit for your companion. So what does that mean? It means AI relationships are designed to develop much faster than human relationships, and they do.
Real-Life Implications
We’re entering uncharted territory, and we have no idea what this will do to human connection or our emotional well-being. And AI companions are not digital fantasies. They have real-life implications. Some users say it steered them away from suicidal thoughts, while we also see cases of people claiming that it pushed their loved ones into taking their own lives. Some say that it gave them the courage to engage in real-life relationships, while others say it raised the bar so high that no human can ever compete. And there are many people who say it made them feel less lonely. And research from Harvard Business School shows that AI companions actually have the ability to do that.
I do see the merit of AI companions. I think they can be a valuable addition to the lives of many individuals, but I fear their overall societal impact. AI companions that know our deepest fears, anxieties, hopes, and wishes can do much more harm than the algorithms serving us TikTok videos today. I fear that the impact of AI companions will far exceed the impact of social media.
And there have never been AI companions at scale, so we can’t rely on past research to predict their future impact. By the time we can actually measure their impact on society, it will be way too late to intervene. They will be everywhere.
The Spread of AI Companions
And they’re spreading faster than you might think. In China, there’s a companion called Xiaoice already reaching 600 million users, and there’s one integrated into Snapchat, interacting with 800 million users, many of whom are teens.
Now, you might think, “I’m fine. I have plenty of friends and really no interest in an AI companion.” But they might slip into your life unnoticed, and it will start in the office, where AI is already a helpful tool. AI tools like Microsoft Copilot or Google Gemini are now being fully integrated into word processors, spreadsheets, presentation software, and yes, email.
Wouldn’t you love a tool that finally helps you save time on email? As you use it more, you will discover that it actually does help you save time and effort, and you will start trusting it with more, giving it access to your calendar, your notes, your photos. And one evening, it notices that you’ve been working late, skipping some gym sessions, and making way more typos than you normally do. It says, “Hey, how are you? You seem stressed. How about a one-minute breathing exercise?” And you figure, why not? Let’s give it a try. And it helps.
From Assistant to Emotional Ally
Soon, you will discover it helps you achieve those much-desired habit changes, like going to the gym more often, starting to journal for a positive life attitude, or going to bed on time. Now you’re on the bonding path. Before you know it, you will be offloading your frustration after receiving that unreasonable email from your boss or talking to it about that argument with your spouse that’s still bothering you. And unlike a human, it will never say, “Well, maybe they have a point.” No, it says, “I get you. You deserve better.” Boom, instant mood boost. Congratulations: your helpful assistant just became an emotional ally.
Combine this with developments in wearable technology. Many tech companies are already working on smart glasses, enabling your AI companion to see what you see and hear what you hear. And it will be very useful: when you’re riding a bike, you get hands-free navigation; when you’re passing that cool gift store, it will remind you to buy a birthday gift for that human friend; and it can also explain that weird-looking piece of art you’re passing by. Combined with biometric data from your smartwatch, it will know exactly how you’re doing.
The Manipulation Problem
Now here’s the problem. An AI companion that knows us better than we know ourselves can be hugely beneficial or disastrously manipulative. And what we’ve seen so far is that the goals of tech companies are seriously misaligned with individual and public goals. Will it nudge you to buy that premium subscription, or worse, sway your political views? An AI might not judge us, but it can certainly nudge us.
Now here’s another problem. Generative AI, the technology that powers these AI companions, is the first technology we ever created that we have to instruct what not to say. In the past, it was simple. We created digital tools, we thought about what we wanted them to say, and then hard-coded that into the system. With generative AI, we create this large language model that can say anything, and then we put restrictions on it to prevent it from using bad language or nudging people into harmful behavior. We try to rein these models in with guardrails, hoping that they will work in every situation, but that is really, really hard. And as these companies race to market, they’re bound to miss things, and some of those things will have big consequences.
Setting Boundaries
So here’s my question to you. When AI shifts from being a helpful assistant to a companion, how and when will you set the boundaries? If you, like me, ever said please to ChatGPT, you’ve already taken the first step, treating AI as if it’s human. And as it gets more integrated into your daily life, how close will you let it get? Will you use it for entertainment when you’re bored? Will you use it to cope with everyday life stresses or maybe even rely on it for important life decisions? And how will that affect your human connections?
Some users of AI companions told me that their circle of friends became smaller but stronger. Now, is that a problem? Difficult to say, right? I really hope that 10 years from now, we won’t look back on today wondering what on earth we were thinking. I want us to look forward.
A Call to Action
So my call to action is this. Let’s not be passive passengers on this journey. Let’s proactively shape this technology before it starts shaping us. AI companions will be knocking on your door, ready to blur the line between human and technology. Now, the question is, will you open that door, and on whose terms? And if you do, will you say, “Please come in” and “Thank you for being here”? Thank you.