Matt Beane: How Do We Learn to Work with Intelligent Machines? (Transcript)

This was important news for surgeons, but I needed to know how widespread it was: Where else was using AI blocking learning on the job? To find out, I connected with a small but growing group of young researchers who've done boots-on-the-ground studies of work involving AI in very diverse settings like start-ups, policing, investment banking and online education.

Like me, they spent at least a year and many hundreds of hours observing, interviewing and often working side-by-side with the people they studied. We shared data, and I looked for patterns. No matter the industry, the work, the AI, the story was the same.

Organizations were trying harder and harder to get results from AI, and they were peeling learners away from expert work as they did it. Start-up managers were outsourcing their customer contact. Cops had to learn to deal with crime forecasts without expert support. Junior bankers were getting cut out of complex analysis, and professors had to build online courses without help.

And the effect of all of this was the same as in surgery. Learning on the job was getting much harder. This can't last. McKinsey estimates that between half a billion and a billion of us are going to have to adapt to AI in our daily work by 2030. And we're assuming that on-the-job learning will be there for us as we try.

Accenture's latest worker survey showed that most workers learned key skills on the job, not in formal training. So while we talk a lot about its potential future impact, the aspect of AI that may matter most right now is that we're handling it in a way that blocks learning on the job just when we need it most.

Now across all our sites, a small minority found a way to learn. They did it by breaking and bending rules. Approved methods weren’t working, so they bent and broke rules to get hands-on practice with experts.

In my setting, residents got involved in robotic surgery in medical school at the expense of their generalist education. And they spent hundreds of extra hours with simulators and recordings of surgery, when they were supposed to be learning in the OR.

And maybe most importantly, they found ways to struggle in live procedures with limited expert supervision. I call all this “shadow learning,” because it bends the rules and learners do it out of the limelight. And everyone turns a blind eye because it gets results. Remember, these are the star pupils of the bunch.

Now, obviously, this is not OK, and it’s not sustainable. No one should have to risk getting fired to learn the skills they need to do their job. But we do need to learn from these people. They took serious risks to learn. They understood they needed to protect struggle and challenge in their work so that they could push themselves to tackle hard problems right near the edge of their capacity. They also made sure there was an expert nearby to offer pointers and to backstop against catastrophe.

Let's build this combination of struggle and expert support into each AI implementation. Here's the clearest example of this I could find on the ground. Before robots, if you were a bomb disposal technician, you dealt with an IED by walking up to it. A junior officer was hundreds of feet away, so they could only watch and help if you decided it was safe and invited them downrange. Now you sit side-by-side in a bomb-proof truck.

You both watch the video feed. They control a distant robot, and you guide the work out loud. Trainees learn better than they did before robots. We can scale this to surgery, start-ups, policing, investment banking, online education and beyond. The good news is we’ve got new tools to do it.

The internet and the cloud mean we don’t always need one expert for every trainee, for them to be physically near each other or even to be in the same organization. And we can build AI to help: to coach learners as they struggle, to coach experts as they coach and to connect those two groups in smart ways.

There are people at work on systems like this, but they’ve been mostly focused on formal training. And the deeper crisis is in on-the-job learning. We must do better. Today’s problems demand we do better to create work that takes full advantage of AI’s amazing capabilities while enhancing our skills as we do it. That’s the kind of future I dreamed of as a kid. And the time to create it is now.

Thank you.
