Editor’s Notes: In this insightful talk from SXSW, futurist Sinead Bovell explores the profound impact of artificial intelligence on the future of education and the workforce. She delves into how schools must evolve beyond traditional job preparation, emphasizing that the most critical skills for an AI-driven world are deeply human ones like critical thinking, creativity, and ethics. Bovell also addresses the challenges of “cognitive outsourcing” and provides a roadmap for how educators and parents can prepare children to thrive in an increasingly complex and automated world. (Feb 19, 2026)
TRANSCRIPT:
What’s Changing, What Kids Must Learn — Sinead Bovell @ SXSW
SINEAD BOVELL: What should kids be studying and learning today for a future that’s going to be transformed by artificial intelligence? And how does the institution of education need to evolve to meet the moment?
In the spirit of South by Southwest, I’m going to share a talk that I delivered on this topic where I go into detail on some of these ideas — from where we can expect this technology to take us, to the skills kids in school today need to be cultivating, and the system-wide change the institution of education is going to need to undergo to prepare kids properly for this world.
I’m Sinead Bovell, and this is I’ve Got Questions.
NATALIE MONBIOT: I’m super delighted to be moderating this conversation with the fabulous Sinead Bovell. Sinead, tell us what it is to be a strategic foresight advisor and your lens on AI and the future of education.
Education as the Bedrock of Society
SINEAD BOVELL: Education is the bedrock for a healthy democracy and for a functioning society. And it’s not just essential for things like economic mobility and economic security, but fairness as well, and for well-being.
I believe there’s no such thing as a state that over-invests in children’s future. An investment in children is an investment in national interest. You want to foster an informed and adaptive citizenry that cannot just safeguard the future, but thrive in it — especially a future that’s going to be as complex as the one children in school today are entering into, which will be shaped by quantum computing, genetic engineering, artificial intelligence, and commuting back and forth to space.
This is an incredibly complex world they’ll be entering into. So the more that they can understand it, the more we can support them in that journey, the better equipped they are. And that’s an investment in a country’s economic security, in our collective health and well-being, and our overall national security interests.
NATALIE MONBIOT: Couldn’t be a more pressing topic. And before we dive into some of the finer points, where would you say — taking a step back — where are we at this moment with AI?
Where We Are With AI Right Now
SINEAD BOVELL: If I were to say where we are, it’s very, very early. Maybe it’s 1992 — the Internet has dropped, companies are experimenting. We know it’s maybe going to be a big deal, but there’s still a lot of doubt. Some people are playing around on it, but we have yet to fully comprehend the way it is going to fundamentally transform our world. The Googles, the Apples, the Amazons of the future have yet to be invented, but they’re coming.
Artificial intelligence is also a general purpose technology — similar to something like electricity. Think of how pervasive electricity is. We don’t even think about it at all. It’s so foundational that it’s moved into the background. We will soon be streaming artificial intelligence the way we stream electricity. That is going to be a fundamentally different society to live in.
These general purpose technologies take time to get so entrenched in society. But you know when it’s reached that point, because when people can’t access general purpose technologies — whether that’s at a country level or in certain neighborhoods — we deem that wildly unethical. Who doesn’t get fair access to the Internet? Who doesn’t get access to electricity? That is the path artificial intelligence is on.
NATALIE MONBIOT: So if artificial intelligence is going to be this general purpose technology and fade into the background, what does AI and education look like now? How should educators be considering AI in education, given that it will eventually be in the background, but today we’re at this very early phase?
Three Pillars of AI in Education
SINEAD BOVELL: There are three pillars that are related but distinct in terms of how we should be thinking about AI in education.
The first pillar is safe adoption for kids and for learners. This means equipping kids with the tools to navigate artificial intelligence, because they’re going to be on these tools at home regardless. They have supercomputers in their pockets, supercomputers on their iPads. So giving kids the skills to utilize these tools safely — conversations like, “AI isn’t your friend.” Your chatbot isn’t something that you tell secrets to. This is what we do or don’t share with artificial intelligence. And this is also how you ask it good questions and validate its answers.
The second pillar is how do we more urgently adjust what we are teaching in school — or just the formula for what happens in the classroom versus what happens at home — knowing that kids are going to be leaning into these technologies at home to do homework and to complete assignments.
The third pillar — and this is where I think we are rushing, even though it's actually the long-term game — is how we fundamentally redesign the entire system of education for the age of artificial intelligence.
What seems to be happening in this moment is we are merging all of those pillars in a sense of urgency, and this leads us to deploy AI in schools for the sake of feeling like we need to meet the moment by bringing AI into the classroom. And there are a lot of technologies that aren’t ready.
So I think we focus on pillar one — giving kids the tools to use these tools safely if they’re going to be using them on their phones.
NATALIE MONBIOT: I think we’ve heard this week a number of different ways that educators are experimenting with AI — different pilots, different ways of going about it. Sounds like we should be teaching it and educating about it. Is now the time to be experimenting with it in deeper ways?
SINEAD BOVELL: AI is a hard skill, so teaching it should absolutely be happening. Experimenting with it — yes. We need to be running these pilots, we need to be gathering the data as to what’s working and what’s not. But it has to be very, very intentional, and not just assuming that we can throw in an AI tutor somewhere arbitrarily and that’s going to be sufficient.
We also need to make sure that we’re not running social experiments that jeopardize learning outcomes for the sake of just feeling like we need to quickly meet the moment. These experiments are vital, they should be happening, but they need to be very, very intentional and very, very controlled.
NATALIE MONBIOT: I know that you’re working with Fortune 50 companies in this space and advising them on how to navigate AI and education. What are some of the data points and the advice that you’ve been giving them?
What the Data Is Telling Us
SINEAD BOVELL: It’s been really interesting to look at some of the data that’s coming through. We’re still very early in the age of using artificial intelligence in education, but there’s one clear trend that stands out.
I’m going to walk us through a study that I find particularly helpful — this one by the Wharton School at the University of Pennsylvania and Budapest British International School. It implemented artificial intelligence in math classes. They broke the class up into three groups:
- The **control group**, which used the traditional method of doing homework problems with a textbook.
- The **GPT base group**, which got uninhibited access to artificial intelligence.
- The **GPT tutor group**, which got access to an AI designed to guide them through problems — giving hints, but not the answers.
All students received the base lesson for math together, then broke out into their respective groups. The study showed that when it came to the practice problems, the children with uninhibited access to AI did 48% better than the control group. The students with access to the GPT tutor did 127% better than the control group.
But when it came time to test students without access to AI — the final post-unit test — the kids with uninhibited access performed 17% worse. AI had harmed the learning outcome. And the children who had access to the AI tutor performed at the same level as the children who had no access to artificial intelligence at all.
The conclusion of the study was that generative AI can harm learning outcomes.
Then there was a second study that happened at Harvard. We have to control for the fact that self-directed learning is a little different at a university level, and clearly if you’re getting into Harvard, there’s some higher-order thinking already at play. But that aside — it was a physics class, broken into two groups.
The control group attended the traditional lecture with the professor, then broke out into peer groups, working with one another and with instructor-led guidance on solving problems.
The second group had no in-class lessons at all. The entire process was done with AI — but they specifically designed the AI to be self-paced, responsive to the student’s needs, providing immediate feedback on whether the student was on the right track or off it, offering motivation, and implementing learning best practices throughout. The system continued to adapt based on how students were evolving through the problems.
When they did the general test after those two experiments, the kids who went the AI pathway performed twice as well as their peers who didn’t get access to AI — and they were more motivated and more engaged.
Redesigning the System, Not Just the Tools
SINEAD BOVELL: What we can learn from just those two isolated studies is that you have to adapt the entire ecosystem. It’s akin to electrifying a factory but only swapping out the steam engine for an electric motor in the same spot, rather than redesigning the entire assembly line and rethinking how the system works.
That’s step one. Step two: immediate feedback is absolutely vital in AI learning outcomes. Gone are the days — if we’re going to incorporate artificial intelligence — where we wait for the unit test, midterms, or the end-of-year exam to see where students are. AI needs to extract data in real time. This is how someone is adapting, or this is how they’re falling behind. AI needs to provide that feedback, or else we lose visibility into how well things are happening.
Self-paced learning is also vital. If you go back to the high school study, everybody had an hour and a half to learn the math problem — whether they were working with a textbook or with AI. That hurt people doing the AI method and helped people doing the traditional method. Kids need to learn at their own pace, and the system needs to be able to adapt in real time.
Those are just a few of the key takeaways when we think about AI in education. But that’s why this is a longer-term redesign — and that redesign should not fall on teachers. That’s one thing we need to make very clear: this moment shouldn’t fall on teachers. They already have way too much on their plate. This is something for government heads and those designing curricula more broadly. And I think we’re getting that wrong.
NATALIE MONBIOT: Yes. I had the opportunity to sit on a few panels yesterday, and there were very understandably frustrated educators — “We had COVID, we’re in an underprivileged area, we’ve got so many pressures, and now we have AI to learn. We have to do this two-day course, then the pilot ends, and then we’ve got to go and do another two-day course. AI is the last thing we need.”
So it sounds like there is a right way to do it, and there are definitely ways that can be harmful. And some of those right ways involve a lot of structural redesign of how students actually engage with AI.
SINEAD BOVELL: That’s why we need to approach the institution of education differently. Asking teachers to lead this redesign is akin to asking the accountant to redesign the concrete and the bricks. That’s not what we should be doing.
And I know there was a school that you had been tracking, so I think it could be helpful to share some of the insights there.
AI in Education: Rethinking the Classroom
NATALIE MONBIOT: Yeah, absolutely. People familiar with Alpha School in the room? A few people. What it is — it’s a two-hour learning process where all the hard skills, all the knowledge that you need to learn at school happens in a two-hour period with an AI tutor. The experience is entirely personalized and adapted to where that student is at.
So if you walk around the classroom, you see different students working on completely different math problems, and as the system understands what that student is interested in, the math problems become contextualized within topics that they love. And critically — going back to one of the best practices, or mandatories, in AI being successful in schools — there’s real-time feedback on the performance of that child. They can see their own performance and actually start to own that journey for themselves, and they get everyone into the 99th percentile, no matter where they’ve started.
So I think that’s a fascinating way to look at it. In those two hours, some students are able to accomplish double what they would in a traditional school day, and some on the higher-performing end, five times more. But the critical part is freeing those students to focus on life skills, on EQ skills, on developing their own human ingenuity. From all the research that you’ve discussed, that’s just an amazing example of what’s happening today.
SINEAD BOVELL: Yeah, totally. And that’s kind of the moment we’re in, right? We’re in pilots and experimentation and innovation — really a redesigned, zoom-out wide lens. In some ways taking risks, but it should never hurt the learning outcome and it should never be a burden to teachers. Both of those things need to be true.
AI and the Cheating Crisis
NATALIE MONBIOT: Absolutely. So I did want to dig in a little more into some of the current challenges with AI that educators, students, and parents are experiencing today — which is around AI and cheating, using ChatGPT to get to the answer right away, and what impact that might be having on the learner experience and the point of being at school.
SINEAD BOVELL: Yes, absolutely. I think we’re in a bit of a crisis in this moment when it comes to artificial intelligence and cheating. We can talk about what happens when you short-circuit that thinking. But I think the safest assumption we have to make right now is that kids are going to be using artificial intelligence at home — so whatever happens past 3pm, expect that to be powered by a supercomputer in some way. We have to start there.
That means we have to change what we are doing in the classroom. In some instances, that means we flip things: what used to happen at home now happens in the classroom, and vice versa. For example, if you teach history, you give children the research portion and they can go home and do all the research with ChatGPT that they want. But the higher-order, critical thinking, deep learning, and discussion — all of that happens in the classroom.
So the classroom really has to be a place where the deep learning is happening, where the testing is happening, and where we’re raising the bar on knowledge. But we do have to assume everything past 3pm is likely going to be co-created by or outsourced to an AI system.
And then the broader, longer-term goal is that we’ve entirely redesigned the curriculum to account for the fact that kids can lean into supercomputers — because that is the actual goal. Kids in school today are going to step out into a world with advanced robots, with supercomputers that are polymaths. We want them to know how to engage with these tools and systems, how to utilize them, how to invent with them. And we’ll have to redesign education to account for that. We’ll have to make school harder because you do get access to these supercomputers.
In a somewhat superficial way, maybe that means people are learning about quantum computing at seven years old, because that learning is facilitated by a teacher and by a supercomputer. But that part is going to take time. So the more urgent redesign is flipping what happens in school versus what happens at home. What happens if we don’t do that? I think it’s quite obvious — we end up just short-circuiting the thinking. There shouldn’t be anything to cheat on, because what happens at home isn’t what we are evaluating. That is the baseline we need to move towards, and that’s what we should be doing more urgently.
Human Flourishing in the Age of AI
NATALIE MONBIOT: So many places to go from everything you just said. But maybe a positive example — outside of hard learning and an AI tutor helping you learn things at your own pace in the most individualized and data-rich manner — how can you use AI in a way that helps children become more human? What does human flourishing look like, and does AI have a role in that?
I think we need to learn how these systems work and how to use them. But we also need to ask: what should we as humans be focused on, given that we want to collaborate with AI? What are the subjects and skills that are deeply human?
Something I’ve been thinking about quite a bit recently is different types of knowing. There’s a cognitive scientist called John Vervaeke, and he plots out different types of knowing. The most academic, research-based, and fact-based learning is propositional knowing — and that’s the kind of knowing that AI is really good at, and is getting increasingly good at. But what AI doesn’t have is lived experience and the deep insights that change you as you experience them in the world, change your perception of the world, and then change how you connect with others.
I’ve been really interested to hear some of the talks this week about experiential learning environments. I learned about Thinkery, which is here in Austin — it’s this interactive learning museum environment. It’s really interesting to think about what are those deeply human skills that we can be focused on teaching students, while they’re also learning what AI is and the best ways to collaborate with it.
The Skills That Matter Most
SINEAD BOVELL: Yeah, I think that’s vital. I just wanted to quickly go back to the cheating. Another thing we’ll probably need to do in the short term is introduce more pop quizzes and surprise tests. They don’t need to count towards grades, but they help us see where students are as we navigate this new territory. A lot of times we don’t have visibility into how much students are using AI, how much they’re cheating with it. We need to insert more chances for assessment and be tracking that data.
My philosophy is we should never assume AI will never be able to do something. The reality is we cannot predict the future — what jobs will be there, how advanced AI is going to get, and how quickly. That means we have to prepare kids for absolutely anything. Whichever way the future evolves, however quickly we get to the moon or start genetic engineering, kids need to be able to pivot, adapt, and think critically about the world around them. And most of those skills don’t actually have anything to do with technology.
Critical thinking is absolutely vital in the age of advanced technologies. Kids need to read more — read for the sake of reading — and read in a way that they can come back to school or to their parents and discuss the ideas, and have those ideas challenged. Kids need to play more. In the age of advanced technologies, the future Steve Jobs of the world are not going to come from a corporate cubicle. They’re going to come from people who have imagination, who can play freely, experiment, and work collaboratively.
Long-term thinking is also essential — getting kids to think beyond the immediate horizon, beyond just the unit test in chemistry or math. How could this impact things in five or ten years to come? And even cross-disciplinary thinking. Kids in school today are likely to hold 17 jobs across five different industries. They won’t be doing just one thing. So we have to get them to think: how does math connect to what I just learned in history, which might connect to what I do in philosophy or in English?
These aren’t even new skills — it’s just about centering them. Most of the most important skills for the future are ones we can foster for free. And that’s what we can sometimes miss in these moments where we feel like we have to lean into technology for the sake of it, when it’s actually the other skills we need to be doubling down on.
One lived example I do with my nieces and nephews constantly, since they were about six or seven — I theoretically introduce them to technology. I explain concepts in age-appropriate ways, like genetic engineering, and I ask them to interpret what that would mean for their world and their sense of ethics. So if we could theoretically make sure nobody gets sick in the world with these technologies, should we do that? But what if — to my nephew — it meant that all that basketball practice he does, somebody else didn’t have to do, because that same technology allows them to suddenly be really good at basketball? How should we think about that?
They engage in higher-order thinking. They’re exposed to longer-term concepts of technology without passively playing around on an iPad. These are the types of deep conversations and higher-order thinking that can happen in the classroom — and that teachers are uniquely positioned to deliver and facilitate.
When you think about a teacher, they don’t get enough credit for all the things they do. The curriculum is one small part of it. They are social workers, they are therapists, they know their children inside and out. Being able to go deep into these types of conversations — that’s what we also need to be focusing on. And I know it sometimes feels counterintuitive, because I’m a futurist and I spend most of my days in patents and technologies talking about robots, brain-uploading interfaces. Yet the most important skills for the future have nothing to do with technology.
A Confidence Crisis in the Classroom
NATALIE MONBIOT: And I want to go back to something you said. Technology for the sake of technology is absolutely not the right way to go about things — but learning for the sake of learning is. There were some really interesting insights this week about how schools and test-based learning do not set students up to enjoy or take pride in active learning. Students are encouraged and optimized to find the answer, get the answer right.
And even in a critical thinking class — cognitive scientist Cristine Legare talked about this yesterday — even in a class where there is no right answer, what the students wanted was a rubric to get there. What she said to them was: “When you have a job in the real world, do you think you’re going to be given a rubric to discover the right path forward?” It seemed very telling. We’re at this acute moment where the way students are taught, and what they’re optimized for, is very much at odds with where we’re at right now with AI — a technology that is designed to give you the answer.
I actually talked to a teacher who works with 16 to 18 year olds. She said what she does to try and circumvent the use of AI in writing is have her students write in class, giving them a bit of a structure — a good framework for an essay. Then when it comes time to actually submit the essay, they go home and type it up. In a few cases, a student had essentially scrapped their in-class essay and completely generated a new one in ChatGPT. And what struck me was — there was no time saved, no cognitive load saved in doing that. What that says to me is that we’re in a confidence crisis.
The Risk of Over-Reliance on AI
SINEAD BOVELL: And this is potentially really detrimental to society more broadly. Not just kids, but all of us — that we become so reliant on these technologies, we stop believing in our own ability to make decisions. And no matter how good technology gets at something, there will be times when we have to deviate from the technology’s advice, and we have to make sure we are ready for all of those moments.
You might even hear people talk about optimizing every aspect of your life with artificial intelligence. And I somewhat take issue with that, because if writing that email is the one time in the day where you think deeply, you move through your ideas, you have to structure what you want to say — and you pass that to an AI — unless you are replacing that time and that thinking with something else, that’s a dicey bridge to be walking down.
There was a recent study — I believe it was Microsoft and Carnegie Mellon that joined forces — and it did show that over-reliance on artificial intelligence can reduce our ability to think critically. So we need to make sure we are strengthening these skills as we start to move and work alongside artificial intelligence.
There was another study that was really helpful, that showed this in real life in the workforce. Entrepreneurs were given access to AI systems to help with their small businesses. The high-performing entrepreneurs that had deep critical thinking skills — AI supercharged their performance, because they knew the right questions to ask of the AI, and they knew how to apply the AI’s answers to their business. When the lower-performing entrepreneurs asked AI questions, they ended up doing worse, and it hurt the company — because they asked the wrong questions, they just gave up on the hard questions, and they didn’t know how to apply the material to their actual startup.
So we don’t want to build societies where we are 100% reliant on these systems. That’s something we have to think carefully about, both at an adult age and at a child age. And I think we’re already seeing it in terms of our attention spans and spelling. I’m sure there are a lot of people in this room — myself included — who feel like, “I spelled that word last week and now I have no idea how to spell it this week.”
We want to make sure we’re not short-circuiting the thinking in this age. So again, really centering deep problem solving, critical thinking, and deep learning.
Cognitive Strength and Radical Self-Dependence
NATALIE MONBIOT: There have been a number of studies — like the Carnegie Mellon and Microsoft one — that show when you outsource your cognitive work to an AI, you actually become cognitively weaker. And that seems extremely critical at a period in time where students are supposed to be honing their cognitive abilities.
But then it’s like — well, how do you engage with AI in a way that actually benefits you? Knowing that if you outsource the cognitive load and you’re not doing the cognitive work yourself, not only are you missing that moment, but you’re missing the insights that would otherwise live within you, settle within you, become part of who you are, and increase your body of knowledge, your resilience, your strength, and your expertise.
It seems like in this day and age — where it’s so uncertain what jobs will look like, what the future will look like — radical self-dependence is something we should be teaching. It would be great to hear a little bit about where we think that responsibility lies.
SINEAD BOVELL: I always hesitate when I think about responsibility to bring in parents, because everybody is coming from a different place and we can’t really control what happens in the home. That’s an entire other week of South By — making sure that all homes are equal and have access to the same things.
But in school, I think we really need to think about building confidence as a skill for kids, so they can continue to trust the questions they’re asking and their own ability to generate answers. It doesn’t mean, of course, that in a world where AI is a master of quantum computing, we want kids to compete with that — but we help them think more deeply about the questions they’re asking, and they develop a broad understanding of the answers that AI can give them. That is a fundamentally different society — one where we go from “What is the answer?” to “What is the question?” And that’s why it is part of that bigger, system-wide redesign.
I think centering confidence, encouraging kids to speak in front of classmates, engage in conversation — because that is also the interface of the future. Conversing with these AI systems is absolutely critical.
Preparing Kids for Anything, Not Just Jobs
SINEAD BOVELL: In terms of what the jobs of the future look like — nobody can really predict them. We can predict the jobs that are going to be automated; that’s much easier to see. But the same way nobody 20 years ago could have predicted that a social media manager was going to be vital to a company’s existence, most of the jobs we can’t really see.
We know there’s going to be some convergence of synthetic biology and artificial intelligence in space. But again, it’s about preparing kids for anything — not just trying to prepare them for specific jobs, because jobs are going to change, and that much we can guarantee.
That also means moving away from coupling identity to jobs. We have to move away from that entire philosophy — this idea that we learn, we work, we retire. That’s all changing. So instead, we encourage kids to lean into the problems they want to solve, the skills they want to adopt, and the amazing ways they want to change the world.
Tell kids about the robots and the AI systems they’ll be living with, and ask them what they want to do with it — versus coupling identity to jobs, because that is just going to end up in a crisis as we move into an entirely different type of world.
Teaching Metacognition and Human-Centered Design
NATALIE MONBIOT: Some of the skills we can teach children to prepare for this new future — people use the term “metacognition,” meaning how to think. It was interesting in a talk yesterday, where one educator was saying, “You can’t necessarily stop students from using ChatGPT. But something I do is say — okay, you used it. Show me your prompts. Show me the questions you asked. Show me how you pushed ChatGPT.” Because if you can ask good questions, if you can become a good communicator, and you know where you want the answer to go and can prompt in that direction — then that’s a skill. That’s a skill for today and for the future.
Another skill that came up is more of an experimental one. The New York Times recently covered a story using the term “vibe coding” — this idea that almost anyone with the will and the passion to do it can basically build an app now. A lot of people are creating apps for themselves or for just a few people. And so one of the emerging skills discussed was around human-centered design. If anybody can design products for others, how do we get people thinking about — well, what would actually be good for others? That felt like another territory that was really rich.
SINEAD BOVELL: Yeah. I think centering the human experience in an age of advanced technologies is an investment we should definitely be doubling down on. And again, that does mean introducing kids to these ideas and these technologies — but then bringing it back to the human, to the core fundamentals. I think history, ethics, and philosophy are subjects that become more important the more advanced and technical our societies get.
Building Ethical Builders
NATALIE MONBIOT: And as you mentioned earlier, the computer scientists learning today are going to be the tech tycoons of tomorrow. So what can we be teaching them to create more ethical AI and exponential technologies that are good for people — designed in a way that is good for society? I think that’s a really hopeful message — that we are in this moment now, where that next generation of builders gives us the opportunity to coach them, help them ask the right questions, and design for the good of society.
SINEAD BOVELL: Yeah, I don’t think I could have said it better myself.
The Evolving Role of the Educator
NATALIE MONBIOT: So, a bit of a segue into the question of ethics in this space more broadly. But actually, before we dive into that — what do we think about the role of the educator in all of this, and how does that shift? Let’s say in a great situation where you’ve got an AI tutor — an entire reimagined approach, as you mentioned — where you have an AI tutor giving adaptive, personalized learning. What is the role of the educator in all of that?
SINEAD BOVELL: I think that’s going to evolve the more these pilots and studies come through, and the different positioning that the educator takes. Whether that’s deep expertise in some areas — which will be vital — or whether that’s facilitating the right questions to ask, the right way to think about material, and the right way to think about learning — I think the role of the educator stays deeply coupled with kids understanding how to learn. And that is what education was supposed to be for: learning.
So I think it goes back to that. We’ve redesigned education to prepare people for work, and I think we need to move towards preparing people for life. But educators still stay central to that process. I don’t think many people would want to send their kids to a school with 95 robots and no people. I don’t think that’s the future any of us are aiming for.
NATALIE MONBIOT: Right. And in some of these very innovative models — like Alpha School, where it’s two hours of intensive, personalized learning with an AI tutor — the rest of the day is all about human connection, with teachers, instructors, and guides who help uncover the passion of each student, nurture it, and give them the confidence to deliver on it.
But with that, I wanted to touch on the ethics of this space a bit more.
AI Ethics, the Digital Divide, and the Future of Education
SINEAD BOVELL: Yeah. And this is something we have to really think carefully about. Artificial intelligence, data, and children — that’s already a deeply questionable intersection. And ethics appears in a few ways.
The first is, what data are these AI systems going to be collecting when it comes to children? Are parents aware and did they give consent? Or are we just kind of rushing AI tools into class? And what can be interpreted from the data that gets collected on children? We want to know where their stamina is on math. We don’t want to interpret other emotional cues unless we have figured out how to do that safely with parent consent. So that’s one area that I think we need to really understand.
The second is the strange way bias shows up in AI systems. We often think about facial recognition and the cases where we know it more intimately. But there are unique ways that AI can make predictions about you when you interact with it, and then change the level of advice that it gives you — or how well it performs for you — based on what it knows about you.
There was a study done on several of the most widely used AI systems. When the systems were asked directly about African Americans, they gave positive responses. But when the systems were shown a sample of text written in African American English and asked questions about that user, they would say, "Oh, this person's never going to go anywhere. I can't even imagine a job for them. They'll be in low wage jobs."
Picture this: in education, the AI system detects somebody has a certain ethnic background or is a certain gender, and then gives the teacher worse feedback on that student in terms of assessments, or gives the student worse advice in problem solving, because it has already made a prediction that that student is not going to go anywhere in life. So these are the more subtle ways we have to apply foresight to ethics — or ethics and foresight — in academia.
And I’d say the final thing that we’re going to have to watch out for — and we saw this with social media after the fact — is the relationships kids are going to build with these systems. We are now giving kids access to an infinite, never-ending opportunity to engage with an imaginary friend. Something that is always on, can answer all of their questions. That is a recipe for a new type of addiction.
We kind of missed the boat on smartphones and now we’re all trying to get them back out of the classrooms. We can see this line of sight directly with AI systems and chatbots. And this isn’t, of course, all on educators. This has to come to tech companies — how we design these systems, age-gating them. But something to look out for is this kind of new addiction that might form between kids and chatbots, and that is not going to end up well.
We should do our best to bring parents on board with that. So even if that's at parent-teacher interviews, just casually saying, "Look out for the amount of time your kid spends chatting with a chatbot. I noticed they were a little bit more disengaged in class; that could be where it's coming from." This is another area where we have to apply foresight. But we can see that line of sight quite clearly if we don't intervene.
NATALIE MONBIOT: In a similar way that we’ve been talking about parents and learners having that visibility into their own data — their performance and how engaged they are with their work — should there be a case where everybody has that visibility into the relationships with these chatbots? Where do you think that line can be drawn? I feel like if there is that visibility, then people can be a little bit more relaxed. But then, is that—
SINEAD BOVELL: I would say that question needs to be answered by a psychiatrist and a psychologist. That is why these are multidisciplinary conversations. We need to bring everybody to the table. An addiction-forming relationship with a chatbot shouldn't be something that kids can just download from the App Store. Psychologists, psychiatrists, doctors: I welcome you to this conversation, because we need your voice in it. It can't just be happening out of Silicon Valley. It can't just be left to parents to deal with on their own. Everybody needs to come to the table. We saw what happened with social media. We don't have to run that social experiment again.
Audience Q&A: The Digital Divide and AI in Education
NATALIE MONBIOT: So well said. Okay, we’re going to take some questions here. This one’s from Rob. “How do you see AI increasing the digital divide, especially in underserved communities and developing nations? And how do we as leaders stop this cycle?”
SINEAD BOVELL: We can see that general purpose technologies build on each other. The communities that didn’t get equal access to electricity are the communities that are struggling with the digital divide. And then there’ll be an AI divide.
That is why that first pillar I discussed — AI as a hard skill, teaching kids how to use artificial intelligence, how to prompt it, how to use it safely — is vital. Because that may be the only opportunity kids get to access these AI systems. So that’s why it’s not about pushing AI out of schools. It’s about being very careful in adjusting how kids learn with AI, while making sure we build AI as a hard skill. That is absolutely vital in schools and in education.
When it comes to the broader world, this is a question that nation states are facing urgently — making sure there are things like sovereign AI, that every country gets access to computing power, and the opportunity to build the STEM skills within their population to adopt these technologies. That is a global conversation that’s also happening against a very geopolitically uncertain time. It’s a really important question, and unfortunately I wouldn’t be able to answer it in 30 seconds.
NATALIE MONBIOT: And just to add something small to that: in a way, couldn't AI be introduced to everyone, since nearly everyone has a smartphone regardless of their socioeconomic situation? But if students aren't taught how to use it well and simply over-rely on it, that could still put some at a disadvantage.
Let’s take another question. “We are aware that AI cannot replace in-person instructors, but will it — and should it — replace the online asynchronous instructors in higher ed?”
SINEAD BOVELL: I’m not exactly sure what is meant by this question.
NATALIE MONBIOT: I guess how I interpret it is this: we know the value of in-person instruction and the need for that human connection. There are other modalities of learning — some is on-demand learning, and then you’ve got some which is live, synchronous but digital.
My thought on that is, when content is pre-recorded, maybe that’s not the best use of a teacher’s time — to have sat in front of a camera and read through all of that content themselves. Maybe that is a scenario where you can outsource that to an avatar or an AI in a different format that is proven to be more personalized and adaptive. And I would imagine that any human-to-human interaction focused on human connection is good, whether that’s in person or has to be online.
SINEAD BOVELL: And I think there’s also something interesting here — and we actually don’t know the answer to this question — but if you’re taking, say, a physics class online, what happens when the physics teacher is also now powered by these supercomputers? How does their perspective on physics change, and how do they see the world? And then getting access to that person in addition to the AI? I think the jury is still out on how that would unfold specifically as it relates to online learning.
NATALIE MONBIOT: Yeah, absolutely. I work with a company that creates AI twins for experts. What’s going to happen next is that experts will own their expertise, but they’re going to be able to enrich that expertise with real-time data that they choose to bring in. So, would you speak to that real expert, or would you speak to that expert’s AI twin? Well, in some cases it might be advantageous to speak to the AI twin. Even though the real, in-person experience allows for much more creative conversations, there might be scenarios where the AI twin is actually more valuable for certain contexts.
Prompting, Social Skills, and the Future of Human Thinking
SINEAD BOVELL: I think tackling this last one is interesting. “What are the pros and cons of developing skills for prompting when using AI? It is becoming critical for a career. How will it impact social skills?”
The pros are: the more you understand how to direct an artificial intelligence system, the better the responses you'll get, and the better insight you'll have into how the AI is processing your request. That, I think, is very helpful. Another pro is that it teaches people how to take what's in their mind and formulate it into a question that can lead to a useful response.
The con I see is that we end up refining all of our ideas and knowledge and optimizing them for algorithms — that we will become optimization engines for algorithms. And I don’t think that’s the world that we want to get into. There are unique advantages that artificial intelligence provides in how it interprets data, and there are unique advantages to how humans approach data. We don’t want to make our approach to thinking optimized for artificial intelligence. We want AI to be optimized for us.
I think this is going to be only a temporary challenge, as the nature and science of prompting is continuously evolving. Eventually it will become much more conversational, the way you talk to a colleague, a teacher, or a friend. You'll be able to engage with AI in that way. But that still means communication is absolutely vital. Understanding how to share your ideas, being able to vocalize them, and refining your knowledge in a way that's easy to understand and interpret, not just for AI but for the general public, will be vital in the future.
NATALIE MONBIOT: And I think as AIs become better at prompting themselves, where does the human go? The human needs to go deeper. They need to get more creative. What are these prompts even about? What is it that I’m trying to achieve? What could I achieve? I think that trajectory is a positive one for humans — how do you dig deeper into your human ingenuity? Because all of these things can be handled for you. I think that’s a net positive for using AI in the right way.
Education as a National Security Issue
SINEAD BOVELL: I know there’s a question that’s received the most likes, and I wonder why. “What occurs when the U.S. Department of Education is demolished, and how do we move forward to make sure all states receive equal AI education?”
I think this goes back to the first question. Investing in children’s future is an investment in national interest — they are fundamentally coupled. So if you want to talk economic strength, economic security, and national security, you are inherently talking about the success of the next generation.
I am not involved in how this is being dismantled. But I really hope we are prioritizing and centering children and their ability to self-actualize — to reach the maximum capabilities that they can — in the decisions that are made. Because that is going to be deeply coupled with the longevity and continuity of the state. They can’t be decoupled. And that’s why I say education is a national security issue. They need to be in the same room.
Closing Thoughts: The Non-Technical Skills That Will Define the Future
NATALIE MONBIOT: These are fantastic questions. I did want to leave just a couple of minutes for Sinead to share some final rounding thoughts on this last day of SXSW EDU on AI and the future of education.
SINEAD BOVELL: Well, first of all, just a major shout-out to teachers, because this is an incredibly complex time and they are dealing with the most prized asset on the planet, which is children. I think they don’t get enough credit for the moment that they’re navigating.
And I think something to remember: we’re going to continue to hear about advanced artificial intelligence systems, quantum computing, space, and all of these deeply technical advancements. But some of the most important skills have nothing to do with the technology.
Even for parents — it’s not being able to navigate an iPad passively at age 5 that will dictate whether your child will do well in the future. If you said, “My child doesn’t really like working on the iPad, but she’s reading four books a day, she loves her sports teams, she wants to spend too much time at the park” — I would say that child is going to thrive in the future.
So even though there’s a lot of pressure to adapt to this moment, remember: it is the non-technical skills that we need to be centering. Because we are preparing kids for a future we cannot see, which means we have to prepare them for anything, regardless of the way technology evolves.
NATALIE MONBIOT: And on that note, I think we will close. Thank you for being an absolutely fantastic audience.
The Identity Crisis of the Workforce
SINEAD BOVELL: What impact do you think AI will have on the workforce? And do you think we’re headed for an identity crisis?
And this is the question that’s fascinating about AI — “What else can I become?” Very few people have the courage to ask that question. Why? Because they look in the mirror in the morning and they see an engineer or a doctor. They don’t see a person.
If people are not looking at artificial intelligence and asking, "What are we going to become with this technology?" — would you say it's the beginning of the end for them?