Transcript of Artificial Intelligence – Past, Present, Future: Prof. W. Eric Grimson

Read the full transcript of Prof. W. Eric L. Grimson’s lecture titled “Artificial Intelligence – Past, Present, Future” at the 2025 MIT Bangkok Symposium on Feb 25, 2025.

TRANSCRIPT:

INTRODUCER: And now, to present our keynote talk, I’m privileged to introduce MIT’s Chancellor for Academic Advancement, Eric Grimson, the Bernard M. Gordon Professor of Medical Engineering.

Introduction

PROF. W. ERIC L. GRIMSON: Good morning. Nice to have you here. Thank you for joining us today for what I hope is an informative, engaging, and interesting conversation about AI and its impact on almost every aspect of your life.

I’m going to start by saying what I’d like to do in this talk is give you a little bit of the history of AI, especially MIT’s role in it, a little bit of a review of what AI systems do. I know many of you know this well, but it’s worth reminding you of what are the pieces involved in it. And then talk about what MIT is doing to embed AI throughout the research at the Institute and to push it forward into the future. So that’s my goal.

AI Is Everywhere

AI is everywhere. In the United States, if you watch television and you look at ads, it looks like any company that can spell AI says they’re doing it. And most of them are. But it doesn’t matter what you pick, whether it’s finance, it’s health, it’s transportation, it’s commerce, it’s security, AI is here and it’s having an impact.

It’s worth reminding ourselves of what an AI system is. The standard definition from computer science is that it’s intelligence executed by a machine. AI is a rational agent that perceives its environment, gathers information, and takes actions in order to try to maximize success at a particular goal. That is fundamentally what AI does.
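
For readers who want to see that definition concretely, here is a minimal sketch in Python (not from the talk; the environment, the candidate actions, and the scoring function are all invented for illustration) of the perceive–evaluate–act loop of a rational agent:

```python
# A toy rational agent: perceive the current state, evaluate each possible
# action against the goal, take the best one, repeat. Everything here
# (the environment, actions, and score) is a made-up example.

def run_agent(env, actions, score, steps=10):
    """Greedy agent loop: at each step, take the action that maximizes `score`."""
    for _ in range(steps):
        state = env["state"]                                 # perceive
        best = max(actions, key=lambda a: score(a(state)))   # gather/evaluate
        env["state"] = best(state)                           # act
    return env["state"]

# Toy goal: drive the state toward 100. Score = closeness to 100.
final = run_agent(
    {"state": 0},
    actions=[lambda s: s + 1, lambda s: s + 5, lambda s: s - 1],
    score=lambda s: -abs(100 - s),
    steps=20,
)
```

The agent has no plan; it simply maximizes its goal function one perception–action cycle at a time, which is the bare skeleton behind the textbook definition.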

Often people will say AI is exhibited by a machine when it does something that we would associate with a human, hopefully good things that a human does and not mistakes that the machine makes. And so that involves problem solving, which are those three steps, and it involves machine learning.

As a consequence, modern AI systems really incorporate information from four different areas: obviously computer science, but also from neuroscience (what goes on in our brains), from cognitive science (how we think), and from mathematics, especially reasoning about uncertainty.

A Brief History of AI

One can debate how far back you want to go, but most people would point to the Dartmouth workshop in 1956 as the founding of modern AI. I was three years old at the time, so I’m as old as AI, or a little younger than AI. Three of the four founders or organizers of that workshop were MIT faculty members: John McCarthy, Marvin Minsky, and Claude Shannon. Rochester was from IBM, McCarthy eventually left to go found AI at Stanford, but we had an early role in it.

You can see the definition that they gave:

Every aspect of learning, or any other feature of intelligence, in their view can be so precisely described that you can get a machine to do it.

That was the motivation behind the founding of AI.

First Wave: Search-Based AI

For 20 years, early AI was basically search. If you wanted to prove a theorem, if you wanted to win a game, you started at some initial position, and you executed a series of steps trying to get to the goal. And if you got to the goal, great, if you didn’t and you hit a dead end, you backtracked and tried the next thing, and you did that until you explored all the space or you found a solution.
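
The initial-position, try-a-step, backtrack-on-dead-ends procedure described above can be sketched in a few lines of Python (a hypothetical toy problem, not anything from the talk):

```python
# First-wave, search-based AI in miniature: depth-first search with
# backtracking over a state space. `moves` maps a state to its successors.

def solve(state, goal, moves, path=None, visited=None):
    """Try moves from `state`; on a dead end, backtrack and try the next one."""
    if path is None:
        path, visited = [state], {state}
    if state == goal:
        return path                       # reached the goal: done
    for nxt in moves(state):
        if nxt not in visited:            # avoid revisiting states (loops)
            visited.add(nxt)
            result = solve(nxt, goal, moves, path + [nxt], visited)
            if result is not None:        # success somewhere below
                return result
            # otherwise: dead end, backtrack and try the next move
    return None                           # explored everything, no solution

# Toy example: walk from 1 to 5, where each move adds 1 or doubles.
steps = solve(1, 5, lambda n: [n + 1, n * 2] if n < 5 else [])
```

Proving a theorem or winning a game position was, in this era, exactly this shape of computation with a different `moves` function.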

I’m sure you can quickly figure out that this does not scale well. It runs into a combinatorial explosion: the number of things you have to explore becomes huge. As a consequence, during that first period, people looked at very small examples, and they made a lot of ad hoc assumptions to remove things they didn’t want to think about, without any real basis for knowing how well it would work.
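
The arithmetic behind that explosion is worth seeing once (illustrative numbers only; the talk doesn’t give specific figures):

```python
# With branching factor b choices per step and depth d steps to the goal,
# exhaustive search visits on the order of b**d states.
branching, depth = 10, 10
states = branching ** depth   # 10 choices, 10 steps deep: 10 billion states
```

Even a modest game or theorem-proving problem blows past these numbers, which is why unconstrained search was abandoned.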

And as a consequence, after about 20 years of funding in the US and elsewhere, we hit the first AI winter: funding dried up, because these systems were seen as simply not usable.

I will point out to you, I started my own work in AI in 1975. In those days, it was something you scraped off the bottom of your shoe. It was not highly respected because it had these problems. It didn’t handle problems well.

Second Wave: Expert Systems

The second wave of AI was the rise of expert systems in the 1980s. This was a focus on a particular domain, creating logical rules for deduction, so that you would basically say: given what I want to accomplish, here is the natural chain of steps by which I would get to it.
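
The core mechanism of such systems, hand-written if-then rules applied repeatedly until nothing new can be deduced (forward chaining), can be sketched as follows; the maintenance-style facts and rules below are invented for illustration:

```python
# A miniature 1980s-style expert system: domain knowledge is encoded as
# (conditions -> conclusion) rules, fired by forward chaining.

rules = [
    ({"motor hot", "fan off"}, "fan failure"),
    ({"fan failure"}, "replace fan"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions all hold, adding its conclusion,
    until a full pass over the rules derives nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"motor hot", "fan off"}, rules)
```

Notice that all the knowledge lives in the hand-built rule list: change domains and you must write a new rule base from scratch, which is exactly the scaling failure described next.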

There were some early commercial successes, but again, one of the struggles here was that they didn’t scale well. Even if I built a system to do Campbell’s Soup maintenance, which was the first successful AI application of which I’m aware, you couldn’t apply it to some other problem. You had to start over again. It didn’t learn. It didn’t generalize. That led to the second AI winter.

Third Wave: Modern AI

Now we’re in the third phase. The third phase is really driven by bringing in solid scientific bases from mathematics and from neuroscience. Mathematics lets us reason about problems under uncertainty and come up with principled solutions. Neuroscience uses what we know about how our brains work as a guide to how we might build a real system.

And of course, we began to see some early successes in this phase. You’ll decide for yourself, but IBM’s Deep Blue system beating the world chess champion was certainly an indication of the power of these systems, and an early commercial success.

And today, as you all know, this is really driven by three trends:

1.