Read the full transcript of Prof. Bharat N. Anand’s talk titled “Future.Ai: The New Classroom” at India Today Conclave 2025, Premiered Mar 8, 2025.
TRANSCRIPT:
Introduction
[MODERATOR: We come now to what is, to my mind, one of the most important conversations of the moment. How do we educate our children at a time when machines can do what they can? When every single query you may have is answerable at a chat prompt, what skills do we equip our children with? What do we teach them? My wife is here. It’s something that we talk about all the time.
So what we did is we decided to call in one of the best experts in the world, someone who’s researching and thinking very deeply, in fact, even probably writing a book on the future of education. He’s one of the top professors at the Harvard Business School.
He’s made a lot of effort to fly down, especially to the India Today Conclave to be over here. Ladies and gentlemen, can we have a very warm round of applause as I welcome Professor Bharat Anand. He’s the Vice Provost for Advances in Learning, the Henry Byers Professor of Business Administration. He’s an expert on digital strategy, media, entertainment strategy, corporate strategy, and organizational change. And at this moment, he’s focusing very deeply on the future of education.
If you’ve young kids, or if you’re middle-aged yourself and wondering how to upskill, this is a session you need to pay a lot of attention to. Here’s how we’re going to do this: Professor Anand will first make a presentation, and then I have lots of questions, and I’m sure so do you, about what we teach our children, how we train them, and how we should go about this.
We’ll hand the stage to Professor Anand. This is a master class. You see, I don’t need to sit anywhere, and you can be cold called. Okay? Cold called is if you’re not paying attention to him and he sees because it’s around lunchtime and you seem to be looking at the door or looking around for food, he can cold call you and ask you some questions.
So be on standby for that. With that, Professor Anand, the stage is yours.]
The Impact of Generative AI on Education
[PROF. BHARAT ANAND: Thank you, Rahul. Good morning. I need some more energy.
Good morning. It’s a pleasure to be here with all of you today to talk about Gen AI and education. For those who don’t know what Gen AI is, imagine a person who’s often wrong but never in doubt. Now be honest with me. How many of you thought about your spouse?
I did not. Okay? But that’s Gen AI. And what I want to talk about is what happens when we have large language models like ChatGPT and generative AI intersect with institutions like Harvard, where I sit, and I’ve been there for the last twenty-seven years, currently overseeing teaching and learning for the university.
Let me just ask you a question. How many of you think in the next five to ten years, generative AI will have a very large impact on education? Just raise your hands. How many would say a moderate impact? So we have a few. How many would say little to no impact? Pretty much none.
Okay. Let me come back to this. Here’s a chart showing the rise of technologies and the time it took for different technologies to reach fifty percent penetration in the US economy. So if you look at computers, it actually took twenty years to reach about thirty percent penetration. Radio, it took about twenty years to reach half the population. TV, about twelve years. Smartphones, about seven years. Smart speakers, four years, and chatbots about two and a half years. This is part of the reason we’re talking about this today.
Challenging Common Assumptions About AI in Education
Here’s what we know so far about Gen AI and education.
First, the transformative potential stems from its intelligence. That’s the I in AI. Okay? Secondly, as prudent educators, we should wait until the output is smart enough and gets better and is less prone to hallucinations or wrong answers. Third, given the state of where bot tutors are, it’s unlikely, I think many believe, that it’s going to be ultimately as good as the best active learning teachers who have refined their craft over many, many years and decades. Fourth, and Sal Khan talks about this, this is likely to ultimately level the playing field in education. And finally, the best thing we can do is to make sure that we secure access to everyone and let them experiment.
Before you take a screenshot of this, don’t, because I’m going to argue all of this is wrong. Now that I hopefully have your attention, I’m going to spend the next ten minutes arguing why.
Access, Not Intelligence, Is the Key
Let’s actually start with the first one, which is the transformative potential stems from how intelligent the output is.
I would argue, and in fact we just heard this from the previous speaker, we’ve been actually experiencing AI for seventy years. Machine learning for upwards of fifty years, deep learning for thirty years, transformers for seven to eight years. This has been an improvement gradually over time. There were some discrete changes recently, but the fundamental reason why this has taken off, I would argue, has less to do with the discrete improvements in intelligence two years ago as opposed to the improvement in access or the interface that we have with the intelligence.
What do I mean by that? I’m going to give you the one minute history of human communication. So we started out sitting around campfires, talking to each other. From there, we started writing pictures on the walls. That was graphics. From there, we started writing scrolls and books. That was formal text. And finally, the pinnacle of human to human communication, was ones and zeros, and that’s mathematics. That’s the evolution of human to human communication.
The evolution of human to computer communication has gone exactly in the opposite direction, which is sixty, seventy years ago starting with punch cards, ones and zeros, for those of you old enough might remember that. Then we moved to things like DOS prompts, commands that we had to input.
By the way, and this is the fundamental thing: Windows 1.0 and Windows 3.0 were, functionally, almost identical. The big difference was the interface, meaning we moved to a graphical user interface, and suddenly seven-year-old kids could be using computers. That, I think, is more similar to the revolution we’re seeing now: AI was for a long time the province of computer programmers, software engineers, and tech experts. With ChatGPT, it basically became available to every one of us on the planet through a simple search bar. That’s basically the reason for the revolution.
Where is this going? Probably towards just audio. And I don’t know if anyone can guess the next evolution of this in terms of communication. Neural interfaces, reading emotions; you might argue it’s basically us grunting and shaking our arms, which, formally, would be called the Apple Vision Pro. You could argue we are regressing as a species.
On the other hand, you could argue that in fact what’s happening is that the distance between humans and computers is fundamentally shrinking. So that’s the first thing I just want to say, is fundamentally this is about access.
Implications for Expertise and Organizations
What does this mean? Well, does anyone know what this is? This is Photoshop. There are a lot of people who spend one year, two years, four years trying to master graphic design with it. Arguably, we don’t need this kind of expertise anymore. We can simply get it by communicating with computers directly in natural language.
This, for those of you who don’t know, is Epic. It’s medical records software. My wife, who’s a cardiologist, does not like it. She spends two hours every single day filling in notes in these records. You could argue that sometime in the near future, that communication will become much simpler.
By the way, one of the things to keep in mind is for every one of you sitting in organizations, and by the way, this is a happy organization, to think about what this is likely to do to the org structure. If you think about the bottom of this organization, there’s people who have expertise in different kinds of software. Okay? Some expertise in Photoshop, some in Concur, some in different kinds of software. You could argue there’s going to be consolidation within those functions.
The middle managers who used to oversee all these software experts: it’s likely we’re going to see shrinkage there. In fact, you could argue, all the way up, that the person at the top could do sales, graphic design, marketing, everything, by just interacting directly with the computer.
It’s not a stretch to say, and some people predict this, that the first one-person billion-dollar company is likely to be born pretty soon. And people are already working on this. I would urge you to think about this question: what does this mean for your expertise in organizations, or the organizations you run? Because that’s going to have big implications for how you run those organizations.
So that’s the first point, which is fundamentally, this is not about intelligence, but about how it’s accessed. The implication of this is more people will be able to use more computers for specialized purposes, but it doesn’t necessarily mean it’s likely to be the same people. That’s the first thing.
Don’t Wait for Perfection
Second, I think we all look at these hallucinations and we say, let’s wait. Let’s wait till it gets better. By the way, that overlooks the fact that hallucinations are a fundamental, intrinsic property of generative AI, because these are probabilistic models. But I would go further and say that even when AI capabilities fall far short and impair the human value proposition, there’s still a reason to adopt it.
Why do I say that? I’m a strategist. As strategists, we think of two sides of the equation. One is the benefits side, what are customers willing to pay? The other is the cost or the time side. Even if there’s no improvement in intelligence, simply because of cost and time savings, there might be massive benefits to trying to adopt this.
So the metaphor I want you to think about is the following company. Has anyone flown Ryanair? What is the experience like, Ishan? Efficient. Efficient. By the way, when I ask my students this, they often say, “I hate it every single time I fly.” And of course, that raises the question: why are you repeatedly flying it?
This is an airline, like most low-cost airlines. It doesn’t offer any food on board, no seat selection, you’ve got to walk to the tarmac, you’ve got to pay extra for bags, no frequent flyers, no lounges and this is the most profitable airline in Europe for the last thirty years running. Why? It’s not providing a better product, it’s saving cost. That’s the metaphor I would love for you to keep in mind when you think about generative AI and its potential.
A Strategic Framework for AI Adoption
So let me just walk through this and sorry as a strategist, I have to put up a two-by-two matrix at some point. There’s two dimensions here I’d love for you to think about. The first is what is the data that we’re inputting into these large language models. And the data could be explicit in the form of files, like text files, numbers, etcetera. That’s explicit data. Or it could be tacit knowledge, meaning creative judgment, etcetera.
But the second dimension is as important, which is what’s the cost of making an error from the output? Not the prediction error, what’s the cost of something going wrong? In some cases, it could be low. In some cases, it could be high.
So let’s actually talk through some examples. First is explicit data, low cost of errors. That’s high volume customer support. For the last thirty years, this thing has been automated. By the way, that trajectory is likely to continue. Why do I say that? It is virtually impossible for any company to have people manning the phones to talk to one hundred thousand customers. This is the direction where it’s going. Even if we have two percent or three percent or four percent errors, it’s okay. It’s simply much more efficient to respond to customers in this way. So that’s one dimension.
Second is drafting legal agreements. For all the lawyers in the room, just watch out. It’s going to be much, much easier; it already is, to draft legal agreements. But we can’t rely on generative AI to simply give us this without checking it. Some of you may have heard of that lawyer who did this a couple of years ago: he didn’t review the agreements, there were some errors, and he got fired. So we might have a human in the loop. You don’t want to take the output at face value. Okay? Because the cost of making an error is simply too high.
Third, on the top left, is creative skills. Design, marketing, copywriting: these are things where it’s hard to evaluate what’s truly better or worse. And so in some sense, the design outputs we get, the social media content we get as suggestions from generative AI: pretty good. The cost of making an error there: not that high.
And finally, we get to the top right, where we want to be very, very careful, because this is like large enterprise software integration. You don’t want to go there anytime soon. Okay? Or designing an aircraft.
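The two-by-two can be sketched as a tiny lookup table, purely as an illustration; the function name, labels, and phrasing below are mine, not from the talk.

```python
# Illustrative sketch of the talk's two-by-two adoption framework.
# Dimension 1: is the input data explicit (files, numbers) or tacit
# (creative judgment)? Dimension 2: is the cost of an output error
# low or high? Each combination maps to a rough adoption posture.

def adoption_advice(data: str, error_cost: str) -> str:
    """Return the framework's rough guidance for a (data, error_cost) pair.

    data: "explicit" or "tacit"
    error_cost: "low" or "high"
    """
    quadrants = {
        ("explicit", "low"): "Automate now (e.g. high-volume customer support).",
        ("explicit", "high"): "Adopt with a human in the loop (e.g. legal drafting).",
        ("tacit", "low"): "Use freely for suggestions (e.g. design, copywriting).",
        ("tacit", "high"): "Hold off for now (e.g. enterprise software integration).",
    }
    return quadrants[(data, error_cost)]

print(adoption_advice("explicit", "low"))
```

The point of the sketch is the second dimension: the decision key is the cost of an error, not the model’s prediction accuracy.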
Applications in Education
Now what does it mean for education? Let’s actually play this out. I’m going to use our example as an illustration. If I’m sitting at Harvard, basically we get, when we open up the website, about ten thousand applications in the first couple months for admission. Maybe thirty thousand people who look at the website. By the way, if they have questions, it’s impossible to speak personally and individually to everyone who has a question.
This is beautiful for chatbots to be able to simply respond. Again, if there’s an error in the response, it’s okay. I mean, these are people who are simply thinking about applying and they might find information in other ways. Secondly, legal contracts with food contractors. We want to be careful about human in the loop.
Implementing AI in Education and Beyond
Thirdly, designing social media content when we go to the top left. This is something we can do far more efficiently today with generative AI. And finally, I can assure you we’re not going to be using this anytime soon for hiring faculty or disciplinary actions against students.
By the way, think about this not just for your organization, think about it for you individually. So if I was to do that, responding to emails, I get a lot of emails every day. Most of these emails are things that are very standard. Professor, when are your office hours? Where is the syllabus posted? By the way, even in other cases where students ask questions like, “Professor, I have two offers, one from McKinsey, one from Boston Consulting Group.” The cost of an error is not that high in my response. You’ll be okay. Or “I’m trying to decide whether to go to Microsoft or Amazon.” You’ll be okay. Okay? I’m just kidding, by the way.
I can assure you I respond to all those emails individually. But you get the point. Writing a case study, it takes us nine months to write these famous Harvard Business School case studies. The head of the MBA program last year said, “I want to teach a case on Silicon Valley Bank tomorrow.” What he did was go to ChatGPT, said, “Write a case like Harvard Business School with these five sections, financial information, competitive information, regulatory information,” it spits it out.
He then said, “Please tweak the information. Give me this data on the financials. Talk about these competitors.” He iterated, and it kept spitting out information. From beginning to end, he had the case study complete in seventy-one minutes. If you’re not scared about the potential here, by the way, we are.
Brainstorming a slide for teaching. There are a couple of slides in this talk where I took some pictures and started trying to resize them. PowerPoint Designer simply threw up some suggestions, saying, “Here’s how you might want to do it,” in one second. It didn’t take me ten, fifteen minutes to try and redesign those slides. A beautiful application for using this. And finally, thinking about exactly how I teach in the classroom, or my research direction: I’m not going there anytime soon.
Key Insights About AI Implementation
I’d love for you to think about a couple of things from this simple framework:
Number one, we are obsessed with talking about prediction errors from large language models. I think the more relevant question is the cost of making these errors. Meaning, in some cases, the prediction error might be thirty percent, but if the cost of error is zero, it’s okay to adopt it. In other cases, prediction errors might be only one percent, but the cost of failure is very high, you want to stay away. So stop thinking about prediction errors. Let’s start thinking about the cost of errors for organizations.
Secondly, if you notice what I’ve done, I’ve broken the analysis down from industries, asking what’s the impact of AI on banking or education or retail, into jobs, and in fact gone a step further and broken it down into tasks. So don’t ask the question, what is AI going to do to me? Ask the question, which are the tasks I can actually automate, and which are the tasks I don’t want to touch?
And the third is, I don’t know about you, but in my LinkedIn feed, every single day, I get new information about the latest AI models and where the intelligence trajectory is going, getting better and better. That’s basically about the top right cell. I would say that’s a red herring for most organizations, because there are three other cells where you can adopt it right now, today, with a human in the loop. Okay? So that’s just something I’d love for you to think about.
By the way, we did this with Harvard faculty where we interviewed thirty-five Harvard faculty who were using Gen AI deeply in their classrooms. Those videos are up on the web. If you just type in Google, “generative AI faculty voices Harvard,” you see all these videos.
AI Applications in the Classroom
Here are some examples of what they were doing:
A faculty copilot chatbot. It’s almost like a teaching assistant that simulates the faculty that answers simple questions and is available to you twenty-four seven.
Secondly, one of the things that we as faculty spend a lot of time thinking about is designing the tests and the quizzes and the assessments every year. And we’ve got to make it fresh because we know our students probably have access to last year’s quizzes. Large language models are basically spitting this out in a couple of minutes. And of course, as individuals, we would refine it. We’re not going to just take it at face value. We refine it. We look at it, but it’s saving a lot of time.
Third, when we’re giving lectures, students often have questions which they’re too scared to ask live in front of three hundred students. Oh, it’s beautiful if you can simply type in the questions, have Gen AI summarize the questions and put it up on a board, the faculty know exactly what the sentiment is in the classroom and where students are getting confused.
By the way, notice one thing about all these examples? Every single one of them is about automating the mundane. It’s not about saying, let’s rely on the intelligence that’s getting better and better. It’s the left column of that framework I was talking about. So these are ways that it’s being used nowadays in our classrooms.
AI vs. Human Tutors
The third thing, this premise that bot tutors are unlikely to be as good as the best instructors. We had a few colleagues at Harvard who tested this for a course called Physical Sciences Two. This is one of the most popular courses. And by the way, the instructors are very good in that course. They’ve been refining active learning teaching methods for many years.
What they did as an experiment was say, for half the students every week, we’ll give them access to the human tutors. For the other half, give them access to an AI bot. And by the way, the nice thing about the experiment is they flipped that every single week. So some people always had access to the humans, some people had access to the AI for that week, but then they flipped the next week. Every single week, they tested your mastery of the content during that week.
And what was interesting was the scores of the students using the AI bots were higher than with the human tutors. And these are tutors who’ve been refining their craft year in and year out. What was even more surprising is engagement was higher. By the way, this is a first experiment. The only point is we better take this seriously.
Will AI Level the Playing Field in Education?
Next, will it level the playing field in education? Part of the premise is that because everyone has access, any individual in a village or a low-income area is basically going to have access to the same technology as those in elite universities, and this is going to level everything.
There’s a possibility it might go exactly the other way, which is the benefits might accrue disproportionately to those who already have domain expertise. Why do I say this? Think about a simple example. When you have knowledge of a subject and you start using generative AI or ChatGPT, the way you interact with it, asking it prompts, follow on prompts, you’re basically using your judgment to filter out what’s useful and what’s not useful. If I didn’t know anything about the subject, I basically don’t know what I don’t know.
So in some sense, the prompts are garbage in, garbage out. By the way, this is being shown in different studies. There was a meta analysis summarized by The Economist a couple of weeks ago, where they basically talk about different kinds of studies that are showing for certain domains and expertise, the gap between high performance, high knowledge workers and no knowledge workers is actually increasing. We better take this seriously.
Lessons from Online Education
Why? And this is not the first time this has happened. Twelve years ago, there was a big revolution in online education. Harvard and MIT got together, created a platform called edX, where we offered free online courses to anyone in the world. By the way, they still exist. If you want to take a course from Harvard for free, pay one hundred dollars for a certificate, you can get it on virtually every subject.
What happened as a result? EdX reached thirty-five million learners, as did Coursera and Udacity and other platforms. What was beautiful is that there were roughly three thousand courses. The challenge was completion rates of less than five percent. Why?
By the way, if you’re used to a boring lecture in the classroom, the boring lecture online is ten times worse. So there’s virtually no engagement. People take a long time to complete or may not complete. But here’s what’s interesting. The vast majority, seventy-five percent of those who actually completed these courses already had college degrees, meaning the educated rich were getting richer.
Now think about that. It’s very sobering. Why is that? Because those are people used to curiosity, intrinsic motivation. By the way, they are used to boring lectures. They’ve gone to college. But this has big implications for how we think about the digital divide. So I just want to keep that in your mind.
The Future Role of Teachers
And the last thing I just want to say is rather than going out and trying to create tutor bots for as many courses as possible, I think what we really need to do is have a strategic conversation about what’s the role and purpose of teachers given the way the technology is proceeding.
The one thing I will say here is that when we think about what we learned in school, okay, think back, think back many, many years. We learned many things. Tell me honestly how many of you have used geometry proofs since you graduated from high school? Three people. Why did we learn state capitals and world capitals of every single country?
Foreign languages. And by the way, this is Italian. Devi is not a goddess. Devi in Italian says, “you must.” Okay? They have similarities. Why did we learn foreign languages?
When we think about business concepts in our curriculum, I often get my students who come back ten years later and say, “Those two years were the most transformative years of my life.” I often ask them, “What are the three most important concepts you learned?” They said, “We have no idea.” I’m like, “No. No. Okay. Give me one.” “No. No. We have no idea.” I’m like, “So why do you say this was transformative?”
The point simply being they’re saying, this was transformative not because of the particular content, but because of the way we were learning. We were forced to make decisions in real time. We were listening to others. We were communicating.
What are they saying? They’re saying that the real purpose of case method was listening and communication. The real purpose of proofs was understanding logic. The real purpose of memorizing state capitals was refining your memory.
By the way, that example there is the poem “If” by Rudyard Kipling. Some of you might remember this from school. It goes something like this: “If you can keep your head when all about you are losing theirs and blaming it on you.” I have PTSD because my nephew, when he was reciting this to me preparing for his tenth grade exams, I was like, “What the heck are you doing?” But it was basically refining memory skills. And for foreign languages, it was just learning cultures and syntax.
When we go deep down and think about what we were actually teaching, I think that probably gives us a little more hope. Because it means it doesn’t matter if some of these things are probably accessible through GenAI. When calculators came along, we thought it’s going to destroy math skills. We’re still teaching math, thankfully, fifty years later, and it’s pretty good. So this is something that I think is going to be an important strategic conversation.
This is the slide I’d love for you to keep in mind, which is basically everything I’ve just said. If you want to take a screenshot, this is the slide to take a screenshot. Thank you all so much and I hope to be in touch.
Q&A Session
[MODERATOR:] At HBS, I took Professor Anand’s class on economics for managers. Listening to him feels like being back in class. Fortunately, he didn’t cold call anyone, which is terrific. So thank you for that.
Now I have a few questions. We’ve got young children and you’ve got so much of knowledge available now on chat prompts. What’s your advice to everyone who’s got young children now wondering about what should they be teaching their children so that when they grow up and when we don’t know what the actual capabilities of these machines are, then what they’ve learned is still useful.
[PROF. BHARAT N. ANAND:] How old are your kids, Rahul?
[MODERATOR:] So my son is nine and my daughter is five.
[PROF. BHARAT N. ANAND:] What are you telling them right now? Now I want to learn from you.
[MODERATOR:] And I know we are telling them a lot of stuff, good, bad, ugly, I don’t know. I’m trying to refine that and give them a framework of what we should be telling them.
[PROF. BHARAT N. ANAND:] So there’s two things. I think, first of all, this is probably one of the most common questions I get. By the way, it’s really interesting that the tech experts, and there was an article in the Wall Street Journal about this ten days ago, are basically telling their kids: don’t learn computer science. That skill, at least basic computer programming, is gone. Advanced computer science, advanced data analysis, if you want to do that, that’s going to be fine.
What are they telling their kids to learn? They’re telling their kids to learn how to teach dance. They’re telling their kids to learn how to do plumbing. They’re telling their kids to learn about the humanities. Why are they saying that? Implicitly, they’re saying, what are those skill sets that are robust to machine intelligence? Now I will say it is virtually impossible to predict that given the pace at which this improvement is occurring.
I probably have a slightly different kind of answer. By the way, my daughter is majoring in psychology without me telling her anything. So the kids, I think, know basically where this is going. But the one thing I’ll say, Rahul, is I don’t know when you started out college, what were you majoring in?
[RAHUL:] Journalism.
[BHARAT ANAND:] Journalism. You started out with journalism. Okay. That’s enlightened. I started out doing chemistry.
And then the reason I switched to economics was probably like many of you. There was one teacher who inspired me, and that’s what made me switch. And I would say to kids, follow the teachers who inspire you. And the reason is if you can get inspired and passionate about a subject, that’s going to build something that’s going to be a skill that would last all your life, which is curiosity, which is intrinsic motivation. We talked about in the last session, this is no longer about learning episodically, it’s about learning lifelong.
And that’s, I think, going to be the most important step.
[MODERATOR:] In the way that Indian families operate, and as do so many Asian families too, parents want to equip their children with the skills that are likely to be most useful when they grow up. So it used to be, say, engineering and doctors back in the day, then IT a few years ago. So if you were looking ahead, what do you think the children should be learning so that they acquire skills which are useful in the job market years down?
[BHARAT ANAND:] I think that’s honestly being too instrumental. As I said, ten years ago, a lot of my students were talking to me and saying, what should I major in? I never told them computer science. If I told them that, I would have regretted it. But I genuinely mean this.
That’s looking at the things too narrowly. What I would say is think about things like creativity, judgment, human emotion, empathy, psychology. Those are things that are going to be fundamentally important regardless of where computers are going. By the way, you can get those skills through various subjects. It doesn’t matter. It’s not a one-to-one mapping between those skills and a particular topic or discipline in the area. This is partly why I’m saying, really think about where their passion is.
[MODERATOR:] How do we teach our children how to think? Because everything is available on Google, Copilot, ChatGPT. You can just ask ChatGPT. So joining the dots, giving them a framework to be able to interpret, analyze, and think: how do you tell them that?
[BHARAT ANAND:] The easiest thing is Google. Yes. So it’s a good question. Just two things on that.
The first is, there was an interesting study done by colleagues at MIT recently, where they had groups of students, and they were asked to undertake a particular task or learn about a topic. Some students were given AI chatbots. Some students were only given Google search with no AI. What they found is the students with access to AI intelligence learned the material much faster. But when it came time to apply it on a separate test, which was different from the first one, they found it much harder.
The students who learned the material through Google search with no other access took longer, but they did much better on those tests. Why is that? Part of the issue is learning is not simple. It takes effort. And so part of the issue is you can’t compress that effort. The harder it is to learn something, the more likely you’ll remember it for longer periods of time.
And so I think for me, big implication is when I tell my students, look, all these technologies are available. It depends on how you use it. My basic approach to them is just saying, study. Because if you get domain expertise, you will be able to use these tools in a much more powerful way later on. So in some sense, this goes back to the notion of agency. It’s like we can be lazy with tools and technologies or we can be smart. It’s all entirely up to you. But this is my advice.
[MODERATOR:] You know, some of my friends in Silicon Valley have the toughest controls on their children when it comes to devices. Compared to how we limit how much time our children can spend on their iPads or TV, they’re far stricter. And they’re the ones who are actually in the middle of these devices; they’re developing them and they know the dangerous side effects. Now those devices are also repositories of knowledge, from which you can learn so much.
So every parent has their own take on how much time children should spend. But as an educator, how do you look at this device addiction: spending time picking up some knowledge, but also wasting a lot of time?
[BHARAT ANAND:] I think, I mean, there’s a nuance here, which is basically what they’re doing is not saying don’t use devices. They’re saying don’t use social media.
And this goes back again to one of the things we were talking about earlier. We have gone through a decade of things like misinformation and disinformation, and there is no good solution as far as we know today. There are also various other kinds of habits and so on that are not going to improve things. That’s partly what they’re saying to stay away from. They’re not saying stay away from computers. We can’t do that, and in fact, you don’t want to do that. But there’s a nuance in terms of how we interact with tools and computers that we just want to keep in mind when we think about guardrails. Right?
[MODERATOR:] Are you seeing your students getting more and more obsessed with their devices? How does that impact them? And what are you trying to do to get them to socialize more, to spend more time with each other and not be stuck on their phones?
[BHARAT ANAND:] A very interesting question. So last year, we had a conference at Harvard; four hundred people from our community attended. Some of our colleagues were saying we should have a policy of laptops down: no laptops in class, put away our devices. I was coming in for a session right afterwards, and part of the reason I wanted them to take out their mobile phones was that I had two or three polls during my lecture where I wanted their input. So I said, mobile phones out.
Okay? And this was sort of crazy. But the story illustrates something interesting, which is that for certain things these devices can be really powerful. They can turn a passive learning modality into an active one, where every single person is participating. We don’t want to take that away.
What we want to deal with is people playing games while you’re lecturing. Now, by the way, me personally, I just put it on myself: if I’m not exciting or energizing enough for my students to be engaged, use your mobile phones. That’s on me.
[MODERATOR:] No, no, they’re quite engaged. Show of hands, how many felt engaged during the session and how many were like, okay. So which is why agentic AI and chatbots can never do what professors can, right? So I’ll take some questions. Kali has a question. Kali, go ahead.
[AUDIENCE QUESTION:] Hi, professor. You mentioned that one of the things that we should work on to teach our children is empathy. How do you actually teach empathy in our formal education system? Or does this just go back to then parents and families?
[BHARAT ANAND:] It’s a hard question. In fact, this is, by the way, one of the most important issues we’re facing today on campuses, even in higher education, not just with younger kids. When we talk about difficult conversations on campus, part of the reason we’re facing those issues is that people are intransigent. It’s like, I don’t care what you say. I’m not going to change my mind.
One of the things we introduced a couple of years ago on the Harvard undergraduate application is a question that asks, have you ever changed your mind when discussing something with someone else? Or something to that effect. But that’s basically asking how open-minded we are. That’s one version of empathy.
There are many other dimensions. I think part of the challenge is that we don’t teach that formally in schools, which is partly why there’s this whole wave now of schools, not just in other countries but in India too, starting to talk about how we teach the second curriculum, the hidden curriculum. How do we teach those social and emotional skills, the book of life, so to speak?
And I think I mean, it’s not rocket science to say this. It starts at home. Right? Like, that’s basically what we do with our kids every single day. But that’s something that’s, I think, going to become fundamentally more important, partly because of the reasons of what I talked about.
[MODERATOR:] Doctor Sanjeev Bhagai has a question. Okay, see lots of hands going up. Yes, Doctor Bhagai.
[AUDIENCE QUESTION:] Wonderful listening to you. Just with regard to AI and technology, I’ve always said that AI and digital technology is not an expenditure; it’s actually an investment. So very quickly, if you allow me sixty seconds: in healthcare, it gives you better clinical outcomes. Hospital-acquired infections, once the number one cause of death, have come down to practically less than one percent in many hospital chains. So it gives you a safer outcome.
It gives you a better patient experience; bed turnaround is a lot quicker. And more importantly, it gives you better operational excellence. So hospitals and medical facilities that have not embraced it yet will find it difficult to operate in the present environment. What AI and digital technology have taught us as doctors is that data is the new gold.
If you don’t analyze data, if you don’t see what your results are, if you don’t see where your clinical outcomes are, then you can’t go forward. So AI is what is in the future for us, all of us.
[UNIDENTIFIED SPEAKER:] That’s more in the form of an observation.
[BHARAT ANAND:] Let me elaborate on that in two ways. One is, I think it’s useful to contextualize AI. Right now, we often get obsessed with the latest technology. But when we think about upskilling and reskilling in education, there’s a revolution that started a decade ago. As I alluded to, there are basically three thousand courses available to all of you today, on any subject. So to the notion of let’s wait for AI: no. No. No. It’s already there.
My father-in-law, who’s ninety-two years old, during COVID, he said, Bharat, what should I do? I said, we have all these courses from Harvard available. In the last two years or three years, he’s completed thirty-five courses.
[MODERATOR:] Wow. Okay. At the age of ninety-two. Wow.
[BHARAT ANAND:] By the way, he’s paid zero dollars for that because he said, I don’t need a certificate. And so I told him, you’re the reason we have a business model problem. Okay? So that’s one aspect. The second aspect is sort of thinking about where you’re going.
I think you’re exactly right, Sanjeev, which is every organization is going to have low hanging fruit. The one thing I’d just caution is there’s going to be a paradox of access. Meaning, if every organization, every one of your peers has access to the same technology as you, it’s going to be harder for you to maintain competitive advantage. That’s a fundamental question. Okay?
This is just a basic observation. So I just want to sort of mention that, but you’re absolutely right about the low hanging fruit in medicine and health care.
[MODERATOR:] Okay. Toby Walsh has a question or an observation, and then, well, I see lots of hands going up, okay? I frankly don’t know what to do because we’re also out of time. So let’s just continue as long as we can.
[AUDIENCE QUESTION:] One of the greater challenges, especially in higher education, is the cost has gone through the roof. Are you optimistic that AI is going to be able to turn that around?
[BHARAT ANAND:] So again, I’ll just go back to what’s happened in the last decade. As I said, you can now get access to credentials and certificates at a minimal cost compared to the cost of getting a degree. Okay? Just to put it in perspective, we have seventeen thousand degree students who come to Harvard every year. They are paying a lot of money; those who need financial aid get financial aid. By the way, can anyone guess how many students we have reached over the last decade?
Ten times that? A hundred times? It’s about fifteen million. That’s not a story we publicize, but that’s the number of students who have actually taken or enrolled in a Harvard course. So in some sense, where we are today, the marginal cost of providing education is very, very low. What we need is not incremental improvement on the existing model.
We need to basically break it apart and ask how we put it back together again in a way that makes sense for everyone. There’s an organization we just started at Harvard, jointly with MIT, called Axim, endowed with proceeds from the sale of the edX platform, whose only function is to increase access and equity in education. And by the way, their focus is on the forty million people in America who start college but never complete it, not just because of cost but for many other reasons. In some sense, the potential to reduce cost is massive, but it’s going to require leadership and strategy.
[MODERATOR:] This gentleman here has a question. Can someone just take the mic to him, please?
The Role of AI in Education

[AUDIENCE QUESTION:] So earlier it was, okay, use AI and it will summarize and help you with productivity. But the latest OpenAI models, like o3-mini, are doing reasoning that is much better than humans’. So the people who are not using them are at a disadvantage. So isn’t it right that students should use AI, be familiar with it, and be up to speed with it, rather than not using it and being at a disadvantage to other students?

[PROF. BHARAT N. ANAND:] Yeah. Absolutely. There’s no question about that. By the way, I sit at Harvard overseeing the generative AI task force for teaching and learning, and we have seventeen faculty on it.
The most interesting conversations I’ve had about adoption have been with our students. When we look at their behavior, it throws up things we wouldn’t even have thought about. I’ll ask you one question. We created a sandbox for the entire Harvard community, a safe and secure environment giving them access to large language models, as opposed to using the public OpenAI tools. The adoption rate amongst our faculty was about thirty, thirty-five percent in the first year.
What do you think the adoption rate was amongst our students? It was about five percent. We were surprised. So we went to them and said, what’s going on? Are you familiar with the sandbox? They said, yeah, we are. We said, are you using it? They said, no. We said, are you using AI in any way? They said, yeah, we have access to ChatGPT; we have our own private accounts there. So we’re like, wait, wait. Why are you not using the secure Harvard sandbox? What do you think their answer was? They said, why would we use something where you can see what we’re inputting? Now, by the way, as faculty members, if the number one thing we talk about with generative AI is, oh, we’re worried about cheating at assessments, our students are listening to us. They’re like, oh, if that’s what you’re worried about, we’re not coming anywhere close to you.

Okay? So part of the point is that the students are far ahead of us in using this. They’re using it to save time. They’re using it for deep learning. We’d better understand that ourselves to figure out what we can do.
Risks and Challenges of AI in Education
[MODERATOR:] Jaini, over to you.

[AUDIENCE QUESTION:] Brilliant presentation. Just wanted to understand: one side of the spectrum has all the positives. What’s on the other side? What risks do you think are on the other side? AI starts coding on its own, gets out of hand? Is that a possibility? What is the possibility of—
[PROF. BHARAT N. ANAND:] So the risks are the things I talked about towards the end, okay? Number one, we put our heads in the sand as institutions and don’t take this seriously. That’s the first risk. The second risk is what I would call lazy learning. Now again, that’s agency.
It partly depends on you as a student. Do I want to be lazy? Do I not want to be lazy? The third risk is everything we were talking about in the previous session with respect to misinformation, disinformation. The fourth big risk is asking the fundamental question, what’s our role as teachers?
And I’ll just share one anecdote in closing. There’s a colleague at another school who called me and said, my students have stopped reading the cases. They’re basically inputting the assignment questions into generative AI. And by the way, they’re so smart. They’re saying, give me a quirky answer I can use in class.
Okay? The assessments are compromised. And get this, the faculty have stopped reading cases. They’re inputting the cases and basically saying, give me the teaching plan. That’s the downside.
[MODERATOR:] You know, we met on a flight from Delhi to Mumbai, and we had a long conversation about the future of education. You’ve been able to, in the past forty-five minutes, recreate the magic of that conversation here on stage. Can we have a very warm round of applause to the professor? Thank you. For making the effort of coming here and for joining us and for delivering this master class.
[PROF. BHARAT N. ANAND:] Thank you. Thank you. Absolute pleasure. Thank you so much. Thank you.
