3 Myths About the Future of Work (and Why They’re Not True): Daniel Susskind (Transcript)

Daniel Susskind – TRANSCRIPT

Automation anxiety has been spreading lately, a fear that in the future, many jobs will be performed by machines rather than human beings, given the remarkable advances that are unfolding in artificial intelligence and robotics. What’s clear is that there will be significant change. What’s less clear is what that change will look like.

My research suggests that the future is both troubling and exciting. The threat of technological unemployment is real, and yet it’s a good problem to have. And to explain how I came to that conclusion, I want to confront three myths that I think are currently obscuring our vision of this automated future.

A picture that we see on our television screens, in books, in films, in everyday commentary is one where an army of robots descends on the workplace with one goal in mind: to displace human beings from their work. And I call this the Terminator myth. Yes, machines displace human beings from particular tasks, but they don’t just substitute for human beings. They also complement them in other tasks, making that work more valuable and more important. Sometimes they complement human beings directly, making them more productive or more efficient at a particular task. So a taxi driver can use a satnav system to navigate on unfamiliar roads. An architect can use computer-assisted design software to design bigger, more complicated buildings.

But technological progress doesn’t just complement human beings directly. It also complements them indirectly, and it does this in two ways. The first is if we think of the economy as a pie, technological progress makes the pie bigger. As productivity increases, incomes rise and demand grows. The British pie, for instance, is more than a hundred times the size it was 300 years ago. And so people displaced from tasks in the old pie could find tasks to do in the new pie instead. But technological progress doesn’t just make the pie bigger. It also changes the ingredients in the pie. As time passes, people spend their income in different ways, changing how they spread it across existing goods, and developing tastes for entirely new goods, too.

New industries are created, new tasks have to be done and that often means new roles have to be filled. So again, the British pie: 300 years ago, most people worked on farms, 150 years ago, in factories, and today, most people work in offices. And once again, people displaced from tasks in the old bit of pie could tumble into tasks in the new bit of pie instead.

Economists call these effects complementarities, but really that’s just a fancy word to capture the different way that technological progress helps human beings. Resolving this Terminator myth shows us that there are two forces at play: one, machine substitution that harms workers, but also these complementarities that do the opposite.

Now the second myth, what I call the intelligence myth. What do the tasks of driving a car, making a medical diagnosis and identifying a bird at a fleeting glimpse have in common? Well, these are all tasks that until very recently, leading economists thought couldn’t readily be automated. And yet today, all of these tasks can be automated. You know, all major car manufacturers have driverless car programs. There’s countless systems out there that can diagnose medical problems. And there’s even an app that can identify a bird at a fleeting glimpse.

Now, this wasn’t simply a case of bad luck on the part of economists. They were wrong, and the reason why they were wrong is very important. They had fallen for the intelligence myth, the belief that machines have to copy the way that human beings think and reason in order to outperform them. When these economists were trying to figure out what tasks machines could not do, they imagined the only way to automate a task was to sit down with a human being, get them to explain to you how it was they performed a task, and then try and capture that explanation in a set of instructions for a machine to follow. This view was popular in artificial intelligence at one point, too. I know this because Richard Susskind, who is my dad and my coauthor, wrote his doctorate in the 1980s on artificial intelligence and the law at Oxford University, and he was part of the vanguard. And with a professor called Phillip Capper and a legal publisher called Butterworths, they produced the world’s first commercially available artificial intelligence system in the law. This was the home screen design. He assures me this was a cool screen design at the time.

I’ve never been entirely convinced. He published it in the form of two floppy disks, at a time when floppy disks genuinely were floppy, and his approach was the same as the economists’: sit down with a lawyer, get her to explain to you how it was she solved a legal problem, and then try and capture that explanation in a set of rules for a machine to follow. In economics, if human beings could explain themselves in this way, the tasks are called routine, and they could be automated. But if human beings can’t explain themselves, the tasks are called non-routine, and they’re thought to be out of reach.
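
To make that rule-based approach concrete, here is a minimal sketch in Python. It is purely illustrative, not the system described above: the legal rule, the limitation periods and the facts are all invented for the example. The idea is simply that the expert’s explanation gets hand-coded as explicit if-then conditions for a machine to follow.

```python
# Illustrative sketch of a rule-based ("expert system") approach.
# The rules and numbers below are invented for the example; they are not
# taken from any real system or statute.

def claim_allowed(facts):
    # Each if-statement mirrors a rule an expert might have dictated to the programmer.
    if facts["years_since_damage_discovered"] > 3:
        return False, "Barred: more than 3 years since the damage was discovered."
    if facts["years_since_negligent_act"] > 15:
        return False, "Barred: the 15-year longstop period has expired."
    return True, "The claim may proceed."

# Hypothetical facts gathered from a client interview.
facts = {"years_since_damage_discovered": 2, "years_since_negligent_act": 10}
print(claim_allowed(facts))  # -> (True, 'The claim may proceed.')
```

The point of the sketch is that the machine only “knows” what the expert could articulate; a rule nobody can state in words never makes it into the program, which is exactly why non-routine tasks were thought to be out of reach.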

Today, that routine-nonroutine distinction is widespread. Think how often you hear people say to you machines can only perform tasks that are predictable or repetitive, rules-based or well-defined. Those are all just different words for routine. And go back to those three cases that I mentioned at the start. Those are all classic cases of nonroutine tasks. Ask a doctor, for instance, how she makes a medical diagnosis, and she might be able to give you a few rules of thumb, but ultimately she’d struggle. She’d say it requires things like creativity and judgment and intuition. And these things are very difficult to articulate, and so it was thought these tasks would be very hard to automate. If a human being can’t explain themselves, where on earth do we begin in writing a set of instructions for a machine to follow?

Thirty years ago, this view was right, but today it’s looking shaky, and in the future it’s simply going to be wrong. Advances in processing power, in data storage capability and in algorithm design mean that this routine-nonroutine distinction is diminishingly useful.

To see this, go back to the case of making a medical diagnosis. Earlier in the year, a team of researchers at Stanford announced they’d developed a system which can tell you whether or not a freckle is cancerous as accurately as leading dermatologists. How does it work? It’s not trying to copy the judgment or the intuition of a doctor. It knows or understands nothing about medicine at all. Instead, it’s running a pattern recognition algorithm through 129,450 past cases, hunting for similarities between those cases and the particular lesion in question. It’s performing these tasks in an unhuman way, based on the analysis of more possible cases than any doctor could hope to review in their lifetime. It didn’t matter that that human being, that doctor, couldn’t explain how she’d performed the task.
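
As a rough illustration of what “hunting for similarities” can mean, here is a minimal nearest-neighbour sketch in Python. It is not the Stanford team’s actual method: the feature vectors, the labels and the choice of k are all invented for the example.

```python
# Illustrative sketch of similarity-based classification (not the Stanford system).
# Each past case is a small feature vector describing a lesion plus a known label;
# a new case is labelled by a majority vote among its k most similar past cases.

from collections import Counter
import math

def distance(a, b):
    # Euclidean distance between two feature vectors (a stand-in for "similarity").
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(new_case, past_cases, k=3):
    # past_cases: list of (feature_vector, label) pairs; features here are invented.
    nearest = sorted(past_cases, key=lambda case: distance(new_case, case[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features (e.g. colour, border irregularity, diameter), all made up.
past_cases = [
    ((0.2, 0.1, 0.3), "benign"),
    ((0.3, 0.2, 0.2), "benign"),
    ((0.1, 0.3, 0.1), "benign"),
    ((0.8, 0.9, 0.7), "malignant"),
    ((0.9, 0.8, 0.9), "malignant"),
]
print(classify((0.85, 0.75, 0.8), past_cases))  # -> malignant
```

Notice that nothing in it resembles a doctor’s reasoning; it simply measures which past cases a new case most resembles.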

Now, there are those who dwell upon the fact that these machines aren’t built in our image. As an example, take IBM’s Watson, the supercomputer that went on the US quiz show “Jeopardy!” in 2011, and it beat the two human champions at “Jeopardy!” The day after it won, The Wall Street Journal ran a piece by the philosopher John Searle with the title “Watson Doesn’t Know It Won on ‘Jeopardy!'” And it’s brilliant, and it’s true. You know, Watson didn’t let out a cry of excitement. It didn’t call up its parents to say what a good job it had done. It didn’t go down to the pub for a drink. This system wasn’t trying to copy the way that those human contestants played, but it didn’t matter. It still outperformed them.

Resolving the intelligence myth shows us that our limited understanding about human intelligence, about how we think and reason, is far less of a constraint on automation than it was in the past. What’s more, as we’ve seen, when these machines perform tasks differently to human beings, there’s no reason to think that what human beings are currently capable of doing represents any sort of summit in what these machines might be capable of doing in the future.

Now the third myth, what I call the superiority myth. It’s often said that those who forget about the helpful side of technological progress, those complementarities from before, are committing something known as the lump of labor fallacy. Now, the problem is the lump of labor fallacy is itself a fallacy, and I call this the lump of labor fallacy fallacy, or LOLFF, for short. Let me explain.

The lump of labor fallacy is a very old idea. It was a British economist, David Schloss, who gave it this name in 1892. He was puzzled to come across a dock worker who had begun to use a machine to make washers, the small metal discs that fasten on the end of screws. And this dock worker felt guilty for being more productive. Now, most of the time, we expect the opposite, that people feel guilty for being unproductive, you know, a little too much time on Facebook or Twitter at work. But this worker felt guilty for being more productive, and, when asked why, he said, “I know I’m doing wrong. I’m taking away the work of another man.” In his mind, there was some fixed lump of work to be divided up between him and his pals, so that if he used this machine to do more, there’d be less left for his pals to do.

Schloss saw the mistake. The lump of work wasn’t fixed. As this worker used the machine and became more productive, the price of washers would fall, demand for washers would rise, more washers would have to be made, and there’d be more work for his pals to do. The lump of work would get bigger. Schloss called this “the lump of labor fallacy.”
