
Full transcript of novelist David Simpson's talk, "Our Post-Human Future," at the TEDxSantoDomingo conference.
TRANSCRIPT:
David Simpson: Hello. Are you guys ready to hear about the craziest subject that you have ever heard about?
Audience: Yeah.
David Simpson: Yeah? I’m ready to tell you about it. They say that you don’t really choose your passions; your passions choose you.
And when I was a 27-year-old graduate student at the University of British Columbia, my passion chose me big-time. It was because I heard about the topic, the topic I'm going to be telling you guys about today, and it just changed me forever. It changed me so much that since then, I've actually written six books on the subject, I've directed a short film on the subject, I'm doing this TEDx on this subject, and I'm adapting a graphic novel on this subject.
And I’m just going to keep writing about it and talking about it because once you learn about it, it will change your view of the world so much, so profoundly, you can’t unlearn what you’re going to learn today. Isn’t that exciting?
Audience: Yeah.
David Simpson: You can't unlearn this. It will change the way you think about the future; it will change the way you think about humanity; it'll change the way you think about the fabric of the universe. It's really amazing, and that topic is 'the technological singularity.'
Okay, so, for anybody who hasn't heard the term before (some of you might be a little bit familiar), I'll take a stab at describing what it is. A lot of people have talked about it, but the basic idea is pretty simple: within about the next 15 years, the human species is actually going to develop superhuman level machine intelligence.
Now, I'm in the camp that believes this is actually true; some of the wealthiest tech billionaires in the world are also in that camp, and so are some of the most famous scientists. Now, once that happens, it's going to really change the fabric of what it means to be human. I want to give you an example.
Let's use Albert Einstein as our litmus test. If Einstein is our benchmark for the smartest person who ever lived, think about the impact his intelligence had on humanity. He was able to overturn 200 years of Newtonian physics. How incredible is that? He figured out black holes; he figured out that time is relative; he changed our view of the universe itself.
Now, what if we had access to a superhuman level intelligence that was just Einstein plus 50 percent, or what if we had access to Einstein times 2 or Einstein times 10? And the truth is that you really would not be able to comprehend it; you wouldn’t be able to comprehend it unless somehow your own intelligence was actually amplified as well.
And the amazing thing is, this is coming soon. So says the man who coined the term, Vernor Vinge. Now, Vernor Vinge was a computer science and mathematics professor; I say was, he's still alive, he's just retired. He's also a damn good science fiction novelist. And he wrote a paper back in 1993, and it's important to do the math on this: 1993 was 22 years ago. The paper was called 'The Coming Technological Singularity: How to Survive in the Post-Human Era.'
Now, I'm going to paraphrase his abstract, but it was pretty blunt: within the next thirty years, we will have created superhuman machine intelligence; shortly thereafter, the human era will have ended.
Now remember, within thirty years, and that was 22 years ago. He was a little bit more specific, though. He said he'd be surprised if it happened before 2005, which obviously it didn't. But he also said he'd be surprised if it happened later than 2030. This is pretty amazing stuff.
Now, when we talk about the human era having ended, that can terrify some people. So, today, I don’t want you to be afraid. Please don’t be scared. I’m not here to tell you to stock up on shotgun shells and hide underground. That’s not what we’re trying to do today.
There is enough dystopian and post-apocalyptic literature and film out there. I don’t really want to add to that today. What I want to do is talk about a different sort of version of the future; a version of the future where we actually successfully manage to achieve superhuman level of machine intelligence, and it doesn’t kill us all. That’d be kind of nice, right? It’s a good plan.
And I'll posit a plan for this that, I think, will actually make a lot of sense; we'll talk about that. And what I promise you is this: the version of the future that I'm going to be talking to you about today is going to be unlike anything you've ever seen in popular media before, because the truth is we can't really conceptualize what a post-singularity future would be; we can't unless we were, you know, upgraded. But why don't we try anyway?
Yeah, let's give it a shot, and it'll be fun. It really is something incredible. But I also have to be honest with you. Technology is always a double-edged sword, so if I just told you, "Hey, we've got utopia on the way," I'd be lying to you.
I think superhuman level intelligence is very comparable to earlier transformative technologies. A good analogy would be fire. Fire obviously was, overall, a very good thing for the human species; it allowed us to see in the dark and to stay warm. We were able to fashion tools with it and cook our food. But of course, we also used it for weapons. And sometimes people just died because of accidents as well.
So, it changed the world for the better but there was a downside; there were dangers. Superhuman level machine intelligence should change the world for the better, and far more profoundly and exponentially than fire did. But there are dangers, so we are going to talk about those a little bit.
But before I get to that, you're probably wondering how the heck this guy came to believe these crazy things and came to TED to start telling us this. And how is it that these tech billionaires and some of the most famous scientists alive, I'll name some of them, you know, Bill Gates, Stephen Hawking, Elon Musk, came to believe that we're so close to achieving superhuman level machine intelligence? It has to do with this fellow; his name is Gordon Moore.
Back in 1965, Gordon Moore was asked to write a really innocuous little paper, because he ran the R&D department at Fairchild Semiconductor, a company that built semiconductors. And they said, "Well, what do you think the trends will be in the next few years?"
And while he was doing the research, he noticed something: every year, the number of components you could fit on an integrated circuit doubled. Now, this is incredible. It means that computers actually double in processing power roughly every year. And this has more profound implications than simply that your iPad or your iPhone will be twice as powerful as the previous generation, although that's really cool.
To really understand the profound implications, you have to understand the difference between exponential and linear. If we think about the future in a linear way, so let’s say we take 30 steps, one, two, three, four, five. At step 30, we’re at 30; and that’s how we’ve evolved to think about the future.
But exponential is different. It’s two, four, eight, sixteen, thirty-two. When you get to step 30, you’re at a billion. What this means is that computers, 30 years from now, are going to be a billion times more capable than the computers that we have today. Now, we can really start to open our minds and start to think about what could you do with that kind of technology.
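To make that linear-versus-exponential arithmetic concrete, here is a minimal sketch in Python; it simply replays the 30 steps described above, with nothing added beyond the talk's own numbers.

```python
# Linear vs. exponential growth over 30 steps, as described in the talk.
steps = 30

linear = steps              # one step at a time: 1, 2, 3, ... -> 30
exponential = 2 ** steps    # doubling each step: 2, 4, 8, 16, 32, ...

print(linear)       # 30
print(exponential)  # 1073741824 -- just over a billion
```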
So, how about this? How about nano-machines so small, yet so powerful, that they could fit inside your body? These would be pattern-recognizing machines that could actually recognize pathogens like HIV or cancer cells; find them, target them, kill them.
There’s a little animation to sort of give you an idea of what it’d be like. And then imagine a couple years later when they get even more developed.
Now, we can go inside the cell. We can identify parts of the cell, like the telomeres. Now telomeres, for those of you who don't know, are like our cellular clocks; as we age, our telomeres shrink. Like that. What if we could go inside the cell and rebuild the caps of the telomeres? There's an enzyme, telomerase, that already does something like that today.
But what if we could do it more precisely, with nanotechnology? What that means is, we could actually set our age. We're talking about picking an age, 22, 25, 29, whenever your peak was. Immortality is now on the menu, ladies and gentlemen. And the same type of advanced nanotechnology could also cross the blood-brain barrier; once in your brain, it could actually make physical connections to your neurons, and that would allow for something called 'full-immersion virtual reality.' You could live in something like the Matrix. I'm sure you've all seen the movie 'The Matrix.' It would be like that.
But even beyond that, you could also have augmented reality built right into your brain: onboard mental computers. This would be pretty amazing, like Google Glass or something. But instead of the technology being at your fingertips, it's already in your brain.
Now, this might be something that’s hard for you guys to conceptualize, and I understand because you’ve never seen it before in popular media. But luckily, I have a clip.
Okay. So, imagine this is you. You're waking up in your dream house from a really restful REM sleep, because your nano-bots are waking you up at the perfect moment. There we have our onboard mental computer. This guy could be a hundred years old, but he looks like he's 25; he's handsome, and he's got this technology built right into his brain. But I want to go beyond even that. Think about how beautiful this home is; his home is gorgeous.
Well, another thing that becomes a possibility is molecular nano-assemblers, something we call 'foglets.' So, I'll give you an idea of what this would be like. The home itself could be built by these foglets. Now, you guys have all heard of 3D printing. We can scan things right now at the level of microns.
But with computers as powerful as what we're talking about, we could scan things molecularly. That means anything you can scan and put into a virtual space, you could then recreate with these nano-assemblers. You could have anything you want.
So, in this instance, a glass of orange juice, really nice and simple. And the foglets just dissipate away. This guy does not have to do the dishes. Kind of nice. And his house? You might change your house every day. Now, what this means is that the bottom levels of Maslow's hierarchy of needs are being taken care of.
For those of you who don't know Maslow's hierarchy of needs, the reason it's a pyramid is that we spend most of our time taking care of our basic needs: things like water, food, clothing, security. It's the reason you work that job you hate, for money, so you can put food on the table, clothes on your back, and take care of your family. And it takes away your time for social things and for esteem, but also for self-actualization; and self-actualization means living your dreams.
In a post-human world, it would look like this. Since the bottom needs are taken care of, you get to spend most of your time socializing, building up your esteem, and also living your dreams. What is your dream? Whatever that dream is, you can live it.
Now, some people may say, 'Yeah, that's all well and good, David, but you're forgetting one really important thing: Moore's law will end.' Okay, there's a kernel of truth to this. People have been predicting that Moore's law will end since 2002, and it's still going really strong. But the reason they say that is that when a silicon transistor gets small enough, electrons start to quantum-tunnel right through it, and it stops working as a reliable switch.
So, people have been predicting that eventually computers are going to stop doubling in their potential. But the director of engineering for Google, Ray Kurzweil, he looked back further than 1965, and what he realized is that this exponential trend has been going on since before the integrated circuit was even invented; it’s been going on all the way back to the 1890 American census.
The integrated circuit is actually just the fifth paradigm in the exponential increase in computing technology, and there are plenty of candidates for number six: how about quantum computers or optical computers?
But I'm going to make the argument that we probably won't even need a sixth paradigm before we reach these time frames. There's the aforementioned Ray Kurzweil, actually the world's most noted futurist, and Vernor Vinge. Remember those dates: Vernor Vinge said he'd be surprised if we did not have superhuman level machine intelligence by 2030; Ray Kurzweil says we'll have human level machine intelligence by 2029. These are two geniuses looking at the same trend lines and coming to the same incredible conclusions.
Now, if I’ve got you so far and you believe that this could be possible, you might be thinking, ‘but why aren’t we hearing about this all the time?’ Our politicians are still signing treaties, and the treaties are dealing with decades, sometimes half a century, sometimes a century.
Why don’t we know more about this? Well, it’s not for lack of trying; Stephen Hawking, and Elon Musk, and Bill Gates have been talking to world leaders and trying to tell them, ‘Hey, we’re about to invent the most powerful technology ever.’ I want to give you a sense of how powerful intelligence actually is, and I’m going to explain something called ‘universal phase transition.’ It’s actually simpler than it sounds.
You guys remember, a couple of years ago at the Large Hadron Collider, they were looking for the god particle; they were looking for the Higgs boson, and they actually detected evidence of it. They think the Higgs boson is real, and that means the Higgs field isn't just theoretical anymore; it's probably real.
And it didn't take Stephen Hawking very long to point out that the Higgs field may be metastable: hit it with enough energy, and it could tip into a lower-energy state. And what that means is, you could destroy the universe with it. You'd start a runaway decay of the vacuum itself.
And once again, I've got a clip. Imagine this bubble of collapsing vacuum expanding out at the speed of light. And as Hawking points out, this could have started on the other side of the universe, billions of years ago, and we wouldn't even know about it because it's so far away. We wouldn't see it coming; it would just hit us.
Now, Stephen Hawking is not a super intelligence. Stephen Hawking is a pretty smart guy; he might even be the smartest man in the world, but he's not Einstein plus 50 percent, and he's already figured out how the universe could be destroyed. So, what I'm saying is, intelligence is pretty powerful, and that's something we have to recognize and understand.
Now, you might be thinking, okay, you've got me. Obviously, super intelligence is super dangerous; we must get together and ban it right away. Well, the problem is that banning strong AI simply won't work. First of all, it won't work because it ignores the law of accelerating returns.
So, right now, it might take a Google or an IBM to develop strong AI, artificial super intelligence, but eventually a group as small as maybe just six people will be able to build it; that's how the law of accelerating returns works. So really, all a ban would do is delay the inevitable. It is unenforceable.
And here's another important point. It turns out that unfettered AI, quite obviously, is more powerful than fettered AI. So, if we try to regulate it, the problem is that other countries are unlikely to follow those rules. Why? Because the country, or the entity, that attains strong AI first has a dominant strategic advantage forever. That's the nature of strong AI.
Once you have a strong AI, its capability increases exponentially. So, even if you're in second place by just a few days, you'll always be dramatically behind whoever won; the little sketch after this paragraph illustrates the arithmetic. So, then you might say, okay, I believe you; we're definitely going to have strong superhuman level machine intelligence, but maybe we can box it in, maybe we can hold it in. This is what I call the outrageous hypothesis.
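A toy calculation can make that first-mover gap concrete. This is only an illustration: the doubling periods and the three-doubling head start are invented numbers, not anything from the talk.

```python
# Toy model: two projects whose AI capability doubles once per fixed
# period; the follower starts three doublings behind the leader.
# All numbers are invented for illustration only.

HEAD_START = 3  # doublings the leader completed first

def capability(periods_elapsed):
    """Capability after some number of doubling periods, starting from 1."""
    return 2 ** periods_elapsed

for t in (0, 5, 10):
    leader = capability(t + HEAD_START)
    follower = capability(t)
    print(f"t={t}: leader={leader}, follower={follower}, gap={leader - follower}")

# The leader stays a constant 2**3 = 8x ahead, and the absolute gap
# grows without bound -- second place never closes the distance.
```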
Maybe you could convince me that Einstein plus 50 percent could be kept inside some sort of box. It'd be like a genie in a bottle; it could carry out our wishes, and maybe we could keep it in. But Einstein times 2, Einstein times 10? This is extremely unlikely. It's just not going to work. We're not going to be able to keep it boxed in.
So, I've been talking about how powerful artificial intelligence, superhuman artificial intelligence, could be, and how it could destroy us.
But the important question here is, why? Why would it want to destroy us? And the good news is that there aren't really that many good reasons, so there is reason to be optimistic. Now, there is one argument that holds some water with me, and it's put forth by Elon Musk, a very intelligent man. He points out that if we have a rapidly, recursively self-improving computer that's really inhuman, and it just wants to improve itself, and it reaches the level of a strategic superpower, then, as he puts it, "…then that's all she wrote." Then you get into those scenarios that Nick Bostrom talks about in his book 'Superintelligence,' where maybe it decides to use every molecule on the planet to increase its own capability; it just paves the planet with solar panels to charge up its mainframe.
But this is really unlikely. It would only happen if we were being very careless, and this is why these guys are warning us: so that we won't be careless like this. I want to remind you of something. Now, this is where the positive stuff starts. I want to remind you of your humanity.
And one of the greatest things about your humanity is that heroism is in the human genome. These are pictures, not from movies; these are real heroes. These are people who will actually risk their lives, some of them even giving their lives, not just for loved ones, which is heroic enough, but for complete strangers. How amazing is that?
So, what I would argue is that instead of being so focused on building superhuman level machine intelligence, we should be focused on building superhero level machine intelligence. Now, that might sound a little bit absurd, like any good paradox, but when we really think about it, it turns out this might be our best avenue for success.
I actually wrote a book about this, and the book recently won a literary award for metaphysics. It was because I put forward an idea for how we might be able to test, not a superhuman level machine intelligence, but a human level machine intelligence. It's obviously too complicated for me to tell you everything about it, but here are the broad strokes.
This human level machine intelligence would think it was human, and would inhabit a world that it thought was real, a virtual world. And it would be contacted, at some point, by post-humans, by beings that say, 'Oh! You know what? That's not the real world. Actually, you're living in an ancestor simulation. We transcended way beyond this long ago, and you can come with us. You can transcend, but there's a catch.'
There's a linchpin program encoded in your virtual DNA. If you leave this world, it will turn off, and so all the virtual people who live in this world with you will also turn off. And to these post-humans, it's like turning off your Xbox or your PS4; it doesn't really matter, right? We've transcended way beyond this.
These people aren't important. But if that human level machine intelligence says, 'All right, I'm going,' that's the type of machine intelligence we don't want to let out of the box. But if it refuses to upgrade, if it stays because it identifies with, it empathizes with, the other virtual beings that inhabit this world with it, then we may have just found our superhero; we may have just found our first human level machine intelligence that we could upgrade beyond the human level. And that would be pretty amazing.
Now, you're probably still thinking that would be pretty scary, but I want to remind you of something else too. Look at these gentlemen here: being very intelligent does not automatically make you evil. In fact, being very intelligent should, in most cases, increase your emotional intelligence. It should increase your empathy.
Bill Gates, for instance, is the world's richest man, self-made, and he's a philanthropist. And there's Elon Musk, who already has a superhero based on him; Tony Stark from the Iron Man movies was modeled on him. These are good people.
What I'm saying is, if you find a human level machine intelligence that is actually pretty impressive and seems to exhibit empathy, and then you increase its capabilities, you should also increase its empathy. It should become even more good.
Now, a lot of people say that AI is our last invention, especially superhuman level machine intelligence; that that’s the last invention of humanity. And there’s a good reason for that, and I certainly understand why they say that. But I prefer to think of it a different way.
I prefer to think of it like this: the first invention of the superhuman level machine intelligence that we create will actually be post-humanity. What I mean is that it will be able to reach back and pull us over the chasm, from the dawn of the singularity to the post-singularity world, and teach us how to amplify our biological substrate. It might even teach us how to go beyond our biological substrate.
I can imagine a time in the future when our biological brains are doing only 0.00001 percent of our actual thinking. And at that point, it won't really matter whether we're a virtual super intelligence, or a virtual super intelligence that used to have a biological substrate. In the end, we'll all just be super intelligences living in a post-human world. Thank you very much.