“Our modest advantage in general intelligence has led us to develop language, technology, and complex social organisations,” Nick Bostrom writes in the preface to his bestselling book Superintelligence. Since its release in 2014, Superintelligence has framed the public debate about AI and its possible influence on our society.
It is important to understand that artificial intelligence is no longer a dystopian science-fiction concept or a technology waiting to mature. AI is seemingly everywhere, and its impact is not always well defined.
While major universities now offer courses in AI, these tend to do a good job of explaining what AI does but not where AI is headed. It therefore becomes imperative to ask whether we can build AI without losing control over it.
Renowned neuroscientist and philosopher Sam Harris does a tremendous job of answering this question in his TED Talk. Harris argues that we should be scared of superintelligent AI, and not in “some theoretical way.”
Failure of Intuition
Harris starts by talking about a failure of intuition, which he describes as a failure to detect a certain kind of danger. “Famine isn’t fun. Death by science fiction, on the other hand, is fun,” Harris says as he implores the audience to think seriously about superintelligent machines.
This failure of intuition concerns the gains we are making in artificial intelligence, which Harris says could ultimately destroy us. “I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves,” he says.
At ai.nl, we have documented advances in the field of AI extensively. Harris does not dwell on this progress. His point is rather that we seem “unable to marshal an appropriate emotional response to the dangers that lie ahead.”
Devil’s advocate
As a neuroscientist and philosopher, Harris unsurprisingly weighs the impact of AI from every possible angle, at times even playing devil’s advocate. In his TED Talk, he lays out two scenarios related to artificial intelligence.
The first scenario is one where people stop making progress in building intelligent machines. This, Harris says, could happen if there were a global pandemic, a full-scale nuclear war, or an asteroid impact. When he adds that it could also happen if Justin Bieber became president of the United States, the audience erupts in laughter.
Jokes aside, Harris asserts that for people to stop making progress in building intelligent machines, an event capable of destroying civilisation would have to occur. Since such an event would have to be catastrophic for human life, Harris considers the second scenario more plausible.
He describes this second scenario as one where “we continue to improve our intelligent machines year after year.”
Will intelligent machines treat humans like ants?
Harris says that improving intelligent machines year after year will eventually produce machines that are smarter than humans. Once these machines become smarter than us, they will likely begin to improve themselves.
This scenario is one of the leading criticisms of artificial intelligence, voiced by a number of leading experts in the space, including the late Stephen Hawking. Sam Harris explains that this doomsday scenario won’t take the form of armies of malicious robots attacking us.
Instead, he sees a possibility where these machines become far more competent than we are, and where even a slight divergence between their goals and ours proves fatal. “The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us,” he says.
He uses our relationship with ants to illustrate the point. We sometimes take pains not to harm ants, stepping over them on the sidewalk. However, whenever their presence seriously conflicts with one of our goals, we annihilate them without hesitation.
Harris says, “The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.”
Intelligence and Superintelligence
Intelligence is a matter of information processing in physical systems, Harris says. We have already built narrow intelligence into our machines, and in their narrow domains many of these machines already perform at a superhuman level. Harris also argues that the rate of progress doesn’t matter, because any progress is “enough to get us into the end zone.”
The philosopher in Harris shines when he talks about intelligence. He explains that we humans don’t stand on a peak of intelligence, or anywhere near it. He calls this the crucial insight: it is what makes our situation around AI and superintelligence so precarious, and our intuitions about the risk so unreliable.
Stop referencing the time horizon
One of the common habits in the world of technology is to attach a time horizon to any advance. Harris says “no one seems to notice that referencing the time horizon is a total non sequitur.”
One of the things Harris finds most frightening is what AI researchers say to make the situation look reassuring. “Worrying about AI safety is like worrying about overpopulation on Mars,” as one researcher has put it.
Harris argues that this Silicon Valley version of “don’t worry your pretty little head about it” should be set aside. He points to devices like the iPhone to show the pace at which technology is moving, and suggests that people stop treating a distant time horizon as a reason to feel safe about superintelligent AI.
How to build a superintelligent AI?
In his talk, Harris highlights a deeper problem plaguing the conversation around artificial intelligence: building superintelligent AI on its own is likely easier than building superintelligent AI while also completing the neuroscience necessary to seamlessly integrate it with our minds.
He then predicts that the companies and governments racing to build AI will want to win that race and will therefore choose the easier path. Towards the end of his talk, Harris admits he doesn’t “have a solution to this problem,” apart from recommending that more of us think about it.
He adds that we will have only one chance to get the initial conditions right for a superintelligent AI that can make changes to itself.
“The moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with,” Harris says.