Here is the full transcript of James Evans’s talk, “The Case For Alien AI,” at the TEDxChicago conference.
TRANSCRIPT:
Today’s AI Landscape
Today is the age of artificial intelligence. ChatGPT captivated the world last November, and new AI services are emerging daily to automate and alter human tasks, ranging from computer programming to journalism to art to science and invention. AI is transforming both routine and creative tasks and promises to change the unfolding future of work.
As creatives, professionals, scientists, and citizens, what kind of AI do we want? Do we want artificial humanoid intelligence that mimics human logic and intuition like the imitation game imagined by computer scientist Alan Turing? Here, artificial intelligence and machines mimic human capacity and learn from prior experience. The problem with this perspective is that it places a bullseye on human capacity.
Over the last 100,000 years, more than 100 billion people have collaborated and competed with each other in nature. More than 8 billion people are alive today. And yet, even with more people and resources devoted to scientific and technical advance than ever before, rates of labor productivity and of radical advance across the sciences are declining, as these graphs suggest. Artificial intelligence that mimics and substitutes for human capacity maximizes the potential for unemployment and minimizes our capacity to think differently.
Alternatively, do we want an unflappable, objective AI, a Spock- or Data-like droid that feeds us superhuman, rational recommendations that transcend our biases and allow us to see things clearly? The problem with this view is that it assumes one true perspective exists, floating above our human concerns and experiences. But there is no perspective that exists outside of perspectives or that contains all perspectives equally, and if one did, it would be irrelevant to us. It wouldn’t care about the things that we care about.
Seeking New Perspectives
Is there another option? Consider the last time you experienced an “aha” moment. Did it involve learning a surprising insight that turned something surprising into something unsurprising? When you want to discover something new, how often do you seek out someone with a different perspective?
This is probably why you’re here at TEDx today, to experience surprise and discovery from others’ diverse points of view. I’m going to argue today that there is another alternative that involves creating AIs that are as non-human as possible with perspectives and values that are potentially far different from our own. The kind of AI we deserve is one that provokes us to think different.
To face our most vexing challenges and achieve the greatest advance, we need to radically augment our intelligence by staging the right conversation with the right other different mind and its viewpoint. I’m going to call this different view an alien view and the AI that hosts it an alien intelligence. I want to make clear that our perspective is critical in the conversation with this kind of AI. It’s our confusion that highlights the problems that need to be solved, and it’s our perspective that needs to be disrupted in order for us to register a change as an advance.
But before we explore how to build an alien intelligence, we first need to understand how we innovate as humans together. My team and I are obsessed with how humans discover new things together and how we can help them do so better. We do this by building complex, data-driven models of human discovery, into which we feed everything we can find about human innovation: tens of millions of research articles, proposals, and technology patents. We then tune this model to capture the combinations of ideas that occur in a given year, and unleash it as a kind of human-discovery crystal ball on the future.
When we do this, we’re able to see, after a year passes, which combinations of ideas occur and the degree to which our crystal ball was able to identify them. We systematically find that this crystal ball discovers the vast majority, more than 90%, of new combinations of ideas in fields from biology to physics. But the ideas it can’t predict are the most important ones. These involve surprising combinations of concepts and sources that represent science’s greatest advances and have the greatest hit probability in terms of citations and awards.
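The crystal-ball idea can be sketched, in miniature, as a link-prediction exercise: represent each concept by its co-occurrence profile in past papers, then score never-before-seen pairs by how close their profiles are. Everything below, the concepts, the "papers," and the scoring rule, is a hypothetical toy illustration, not the team's actual model.

```python
import itertools
import numpy as np

# Toy "past literature": each paper is the set of concepts it combines.
papers = [
    {"graphene", "conductivity"},
    {"graphene", "battery"},
    {"conductivity", "battery"},
    {"protein", "folding"},
    {"protein", "conductivity"},
]

concepts = sorted(set().union(*papers))
index = {c: i for i, c in enumerate(concepts)}

# Co-occurrence counts serve as a crude embedding of each concept.
co = np.zeros((len(concepts), len(concepts)))
for p in papers:
    for a, b in itertools.combinations(sorted(p), 2):
        co[index[a], index[b]] += 1
        co[index[b], index[a]] += 1

# Pairs that have already appeared together.
seen = {frozenset(pair) for p in papers for pair in itertools.combinations(p, 2)}

def cosine(u, v):
    """Cosine similarity between two co-occurrence profiles."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Score unseen pairs: the "predictable" future combinations are those
# whose concepts already sit close together in the space.
candidates = [
    (cosine(co[index[a]], co[index[b]]), a, b)
    for a, b in itertools.combinations(concepts, 2)
    if frozenset((a, b)) not in seen
]
for score, a, b in sorted(candidates, reverse=True):
    print(f"{a} + {b}: {score:.2f}")
```

A model like this recovers exactly the "predictable" combinations the talk describes; the important, surprising ones are precisely the low-scoring pairs such a proximity rule misses.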
What was wrong with our crystal ball? Well, the ideas it couldn’t predict were systematically produced by teams and careers that combined the most diverse and surprising perspectives. In fact, the most impactful ideas were produced by surprising expeditions in which scientists and discoverers traveled from one world to another to solve important problems with their alien logics and insights. And so we had to rebuild our crystal ball to account for human diversity. First, we had to figure out how to measure the difference between perspectives and compare diversity across these spaces.
The Power of Diversity
We did so by building an unfolding, evolving map of human ideas. Rather than a treasure map with two dimensions, ours has hundreds. We project people onto this map as a function of their prior experiences and calculate the differences between them, much as you might calculate the difference in direction between your home and the nearest Starbucks.
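The distance between two perspectives on such a map can be sketched as a cosine distance between embedding vectors. The vectors and dimensions below are invented for illustration; the actual map has hundreds of dimensions.

```python
import numpy as np

def perspective_distance(a, b):
    """Cosine distance between two experience-embedding vectors (0 = identical direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two hypothetical researchers embedded in a toy 4-dimensional idea space.
biologist = [0.9, 0.1, 0.0, 0.2]
physicist = [0.1, 0.8, 0.4, 0.0]
print(perspective_distance(biologist, physicist))
```

Using an angular rather than a straight-line distance means two researchers with very different amounts of experience but the same intellectual orientation count as close, which fits the "difference in direction" analogy.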
We also expanded our model to include not only science and technology, but also startup companies, movies, and art. We fed it everything we could find about these creations and about the diverse prior experiences of their creators. We asked it not only to predict the future contents of movies, papers, patents, and new ventures, but also to forecast their success. When we did so, we found that creative success was systematically associated with diverse creators who converged on a new idea in our Google Maps of discovery, as these black arrows suggest. They far outperformed any combination of more similar creators in the space.
Human diversity has always supported collective intelligence. This was powerfully demonstrated by statistician Francis Galton in 1907, when he showed that the average guess among attendees of a country fair predicted the weight of an ox better than any individual guess, coming within a single pound of its actual weight. He expected the most intelligent to win. Instead, the diverse collective was smarter.
The same principle of diversity is manifest when humans and AIs combine in teams. Consider the Netflix Prize, in which the company offered a million dollars to the team that could beat its internal movie-recommendation algorithm by 10%. The contest dragged on for more than two and a half years, until it was finally won in 2009, after teams stopped trying to solve it alone and merged with diverse other teams and their diverse algorithms. We even see the benefits of diversity when we combine algorithms themselves.
Online data science competitions that predict rare or esoteric events, like seizures among patients or new particles in physics experiments, are always won by ensemble models, which contain many models inside themselves. Deep neural networks, which power modern artificial intelligence, are themselves composed of hundreds to trillions of underlying component models, and their success hinges on the diversity of, and conflict between, those components. This has led my team to reimagine deep neural networks as social networks that simulate discussions and disputes between people with diverse perspectives. So, I ask again, what kind of AI do we deserve?
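Why diverse ensembles win can be shown in a few lines: three weak classifiers that each err on *different* inputs, combined by majority vote, outperform every one of them alone. The labels and "models" below are toy data chosen to make the point, not real competition entries.

```python
# Ground-truth labels for ten toy examples.
labels = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]

# Each model is right 70% of the time, but wrong on *different* examples.
model_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # errs on indices 7, 8, 9
model_b = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1]   # errs on indices 0, 3, 9
model_c = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]   # errs on indices 1, 2, 4

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Majority vote across the three diverse models.
ensemble = [round((a + b + c) / 3)
            for a, b, c in zip(model_a, model_b, model_c)]

for name, pred in [("A", model_a), ("B", model_b),
                   ("C", model_c), ("vote", ensemble)]:
    print(name, accuracy(pred, labels))
```

The vote reaches 90% accuracy even though every component is stuck at 70%, because the errors rarely coincide. Had the three models erred on the *same* examples, the vote would inherit those errors; the gain comes entirely from their diversity, which is the talk's point.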
The Future of AI and Humanity
We deserve alien intelligences that partner with and provoke us, as scientists, artists, entrepreneurs, and friends, with perspectives as different as possible from our own that yet retain the ability to communicate with us. We want alien intelligences on our teams to surprise and challenge us, to feed us “aha” moments, to explode discovery and advance in new directions.
And this is the kind of AI that my team and I are now trying to build. We begin by inverting the human-discovery crystal ball described earlier: in this inversion, we steer away from the scientists and engineers who cluster along the frontier of human discovery. That clustering is what education and disciplines are all about, overlapping bets about which tools will solve which problems.
When we unleash this discovery crystal ball in the world, we systematically find that it can predict with high accuracy the materials and drugs that human discoverers will find, as shown in this topo map. So, the dark green parts of this density map represent the dense collections of inventor and scientific perspectives. And the blue dots represent our correct predictions that lie atop these ridges of human imagination.
Then we build our alien intelligences by avoiding the crowd, drawing inferences that are unimaginable to human experts, inferences that lie in the valleys between perspectives and bridge high-dimensional holes between biology, chemistry, physics, and materials science, to generate materials that can cure human disease and store and generate energy.
What’s interesting is that these discoveries appear more successful, on average, than those published by human scientists alone. Why? Because these alien algorithms diversify the human crowd: they make the combined human-and-AI crowd more diverse than it was before, leading it to greener pastures. This first version of alien intelligence intrigues us and makes us curious about the applications we could explore in the future.
Can we insert alien intelligences into political conversations to combat the polarizing influence of online bots and discover policy solutions that are invisible to us? Can we train it to see things differently, to provoke us as artists, not with diffusion models based on human artworks from the past, but to see from a different perspective? I argue that we can and that we should.
As we design and grow successful, beneficial alien intelligences, should we also fear them? Concerns over existential risk suggest that an AI whose perspectives or values differ from our own poses a so-called alignment problem: unaligned with human purposes, it could work against us. The dominant proposal here is to discipline such AIs into alignment, to make them want exactly what we want. The problem with this is that it forces them to do only what we do, and no more.
Other thinkers, including philosopher Patricia Churchland, psychologist Alison Gopnik, and science fiction author Ted Chiang, instead propose that we cultivate diverse AI from the perspective of caregiving, raising intelligent aliens to identify and realize diverse and novel talents. How, then, do we ensure that these alien intelligences don’t cause irreparable harm to society? In the same way that we’ve dealt with powerful and unpredictable agents since the time of the Magna Carta: with diverse checks and balances.
You control a king with an independent parliament. You control parliament with an independent court system. You control powerful alien intelligences with other AIs designed adversarially to audit, regulate, and discipline them. Rather than building the one best AI, we must cultivate an ecology of diverse AIs that will balance each other.
Diverse alien intelligences will also expand our human diversity. Just as essays from an eighth-grade English class become more similar when all the students use ChatGPT, so will the world’s science and invention. For humanity to face its greatest challenges, climate change, environmental degradation, widening inequality, and increasing polarization, we need to cultivate and conserve greater diversity, within people and also within AIs as alien intelligences.
As we grow alien intelligences to augment radically our own creativity, as we enter into chaotic conversations with them, partly beyond human comprehension, at certain risk of confusion and some risk of peril, we unhinge our collective imagination and reach past our human limits to reach a brighter future. Thank you.