3 Myths About the Future of Work (and Why They’re Not True): Daniel Susskind (Transcript)

Automation anxiety has been spreading lately, a fear that in the future, many jobs will be performed by machines rather than human beings, given the remarkable advances that are unfolding in artificial intelligence and robotics. What’s clear is that there will be significant change. What’s less clear is what that change will look like.

My research suggests that the future is both troubling and exciting. The threat of technological unemployment is real, and yet it’s a good problem to have. And to explain how I came to that conclusion, I want to confront three myths that I think are currently obscuring our vision of this automated future.

A picture that we see on our television screens, in books, in films, in everyday commentary is one where an army of robots descends on the workplace with one goal in mind: to displace human beings from their work. And I call this the Terminator myth. Yes, machines displace human beings from particular tasks, but they don’t just substitute for human beings. They also complement them in other tasks, making that work more valuable and more important. Sometimes they complement human beings directly, making them more productive or more efficient at a particular task. So a taxi driver can use a satnav system to navigate on unfamiliar roads. An architect can use computer-aided design software to design bigger, more complicated buildings.

But technological progress doesn’t just complement human beings directly. It also complements them indirectly, and it does this in two ways. The first is this: if we think of the economy as a pie, technological progress makes the pie bigger. As productivity increases, incomes rise and demand grows. The British pie, for instance, is more than a hundred times the size it was 300 years ago. And so people displaced from tasks in the old pie could find tasks to do in the new pie instead. But technological progress doesn’t just make the pie bigger. It also changes the ingredients in the pie. As time passes, people spend their income in different ways, changing how they spread it across existing goods, and developing tastes for entirely new goods, too.


New industries are created, new tasks have to be done, and that often means new roles have to be filled. So again, the British pie: 300 years ago, most people worked on farms, 150 years ago, in factories, and today, most people work in offices. And once again, people displaced from tasks in the old bit of pie could tumble into tasks in the new bit of pie instead.

Economists call these effects complementarities, but really that’s just a fancy word to capture the different ways that technological progress helps human beings. Resolving this Terminator myth shows us that there are two forces at play: one, machine substitution that harms workers, but also these complementarities that do the opposite.

Now the second myth, what I call the intelligence myth. What do the tasks of driving a car, making a medical diagnosis and identifying a bird at a fleeting glimpse have in common? Well, these are all tasks that until very recently, leading economists thought couldn’t readily be automated. And yet today, all of these tasks can be automated. You know, all major car manufacturers have driverless car programs. There are countless systems out there that can diagnose medical problems. And there’s even an app that can identify a bird at a fleeting glimpse.

Now, this wasn’t simply a case of bad luck on the part of economists. They were wrong, and the reason why they were wrong is very important. They had fallen for the intelligence myth, the belief that machines have to copy the way that human beings think and reason in order to outperform them. When these economists were trying to figure out what tasks machines could not do, they imagined the only way to automate a task was to sit down with a human being, get them to explain to you how it was they performed a task, and then try and capture that explanation in a set of instructions for a machine to follow. This view was popular in artificial intelligence at one point, too. I know this because Richard Susskind, who is my dad and my coauthor, wrote his doctorate in the 1980s on artificial intelligence and the law at Oxford University, and he was part of the vanguard. And with a professor called Phillip Capper and a legal publisher called Butterworths, they produced the world’s first commercially available artificial intelligence system in the law. This was the home screen design. He assures me this was a cool screen design at the time.


I’ve never been entirely convinced. He published it in the form of two floppy disks, at a time when floppy disks genuinely were floppy, and his approach was the same as the economists’: sit down with a lawyer, get her to explain to you how it was she solved a legal problem, and then try and capture that explanation in a set of rules for a machine to follow. In economics, if human beings can explain themselves in this way, the tasks are called routine, and they can be automated. But if human beings can’t explain themselves, the tasks are called non-routine, and they’re thought to be out of reach.

Today, that routine/non-routine distinction is widespread. Think how often you hear people say to you that machines can only perform tasks that are predictable or repetitive, rules-based or well-defined. Those are all just different words for routine. And go back to those three cases that I mentioned at the start. Those are all classic cases of non-routine tasks. Ask a doctor, for instance, how she makes a medical diagnosis, and she might be able to give you a few rules of thumb, but ultimately she’d struggle. She’d say it requires things like creativity and judgment and intuition. And these things are very difficult to articulate, and so it was thought these tasks would be very hard to automate. If a human being can’t explain themselves, where on earth do we begin in writing a set of instructions for a machine to follow?
