What’s the biggest threat to humanity? Ethereum creator Vitalik Buterin is calling unfriendly AI a bigger threat to humanity than World War 3. With this statement, Buterin joins the likes of Elon Musk and the late physicist Stephen Hawking among those sounding the alarm against unfriendly AI.
One of the goals facing AI researchers is building artificial general intelligence (AGI): a program able to understand or learn any intellectual task that a human being can. However, a number of prominent tech leaders have voiced concerns about building AGI, and Buterin now joins that list.
Vitalik Buterin cautions against unfriendly AI
Unfriendly-AI risk continues to be probably the biggest thing that could seriously derail humanity’s ascent to the stars over the next 1-2 centuries. Highly recommend more eyes on this problem. https://t.co/G248XzRFaD
— 豚林 vitalik.eth (@VitalikButerin) June 8, 2022
Last week, Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky and asked for more eyes on the problem of “unfriendly AI risk.” In the paper, Yudkowsky argues that AGI could lead to “an existential catastrophe” and that the current research community is not succeeding at “preventing this from happening.”
Buterin essentially agreed with Yudkowsky’s assertion, adding that unfriendly AI “could seriously derail humanity’s ascent to the stars over the next 1-2 centuries.”
When a Twitter follower replied to Buterin that World War 3 is a bigger concern right now, Buterin immediately disagreed. The creator of Ethereum estimates that even a very bad World War 3 would kill one to two billion people, mostly through food supply chain disruption, but would not kill off humanity.
Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it’s really bad, it won’t kill off humanity. A bad AI could truly kill off humanity for good.
— 豚林 vitalik.eth (@VitalikButerin) June 8, 2022
“A bad AI could truly kill off humanity for good,” Buterin said in reply.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence, is a non-profit research institute founded by Yudkowsky in 2000. Since 2005, the institute has focused on identifying and managing potential existential risks from artificial general intelligence.
In 2018, Vitalik Buterin became the third-largest donor in MIRI’s history when he donated nearly $764,000 in Ether. The grant was meant to help the organisation further study the impact of AI.
Buterin’s contradictory position on AI
When Vitalik Buterin made his donation to MIRI official, there was a lot of scepticism around him funding the institute’s research. Buterin created Ethereum as a blockchain platform built around smart contracts.
Some leading scholars and investors have taken to comparing this to AI, primarily because smart contracts, once deployed, run without ongoing human maintenance. With support for decentralised autonomous organisations (DAOs), Ethereum enables software that, in broad terms, resembles an autonomous agent.
Where Ethereum aligns with AI is not in learning: smart contracts are explicitly programmed, typically in a language such as Solidity, and do not learn. The parallel lies in autonomy, since a contract, once deployed, executes its rules without further human intervention. The impact of AI on humanity and existential safety has been of personal interest to Buterin.
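To make the autonomy point concrete, below is a minimal, hypothetical sketch in Python (not actual Ethereum or Solidity code; the contract, names, and rules are invented for illustration) of a toy escrow agreement whose fixed rules execute by themselves once “deployed”. That self-execution, with no operator or maintenance, is the property that invites the comparison to autonomous agents.

```python
# Toy illustration only: a simplified, hypothetical "smart contract" in Python.
# Real Ethereum contracts are written in languages such as Solidity and run
# on-chain; everything here is invented for this example.

class ToyEscrowContract:
    """Holds funds and releases them when a fixed condition is met.

    Once "deployed" (instantiated), the rules cannot be changed and no human
    operator is needed; this autonomy is what invites the AI analogy.
    """

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self, caller: str) -> None:
        # Only the buyer may confirm delivery; the rule is fixed at deployment.
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.delivered = True

    def release_funds(self) -> str:
        # Executes mechanically: no learning, no discretion, no maintenance.
        if self.delivered and not self.paid:
            self.paid = True
            return f"{self.amount} units transferred to {self.seller}"
        return "conditions not met; funds stay locked"


# Usage: after deployment, the contract enforces its rules by itself.
escrow = ToyEscrowContract(buyer="alice", seller="bob", amount=100)
print(escrow.release_funds())    # conditions not met; funds stay locked
escrow.confirm_delivery("alice")
print(escrow.release_funds())    # 100 units transferred to bob
```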
There is even a Vitalik Buterin PhD Fellowship in AI Existential Safety, offered by the Future of Life Institute. The programme funds students for five years of their PhD at universities in the US, UK, or Canada, with annual funding covering tuition, fees, and the stipend of the student’s PhD programme, up to $40,000.
Rise of AI and the fear associated with it
Vitalik Buterin’s call for more research into unfriendly AI shows that the Ethereum creator sees research as the path towards the development of friendly AI. Buterin is primarily looking at the long-term benefits of AI and the technology’s societal impact.
However, we have seen science fiction writer Isaac Asimov imagine AI going rogue, while physicist Stephen Hawking warned that AI could wipe out the human race. Microsoft co-founder Bill Gates has also expressed concern about superintelligence, acknowledging that AI will completely transform the world as we know it.
“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern,” Gates said.
Elon Musk, Tesla CEO and the world’s richest person, has even called for regulatory oversight of AI. “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that,” Musk said, despite being at the forefront of building AI-powered technology such as self-driving vehicles and intelligent robots.
Virtual reality pioneer Jaron Lanier is concerned about the AI development process. He has noted that AI is being built on the example of the human brain, and he foresees a future in which several such brains work towards a complex intelligence. “We don’t yet understand how brains work, so we can’t build one,” he says.
The fear that AI could become as capable as humans is a genuine concern. We have already seen how AI could lock people inside their own cars if programmed to do so. If AI is entrusted with bigger applications with societal impact, it could cause far greater catastrophe. The only solution is to build AI with ethical practices in place, backed by the kind of regulatory frameworks Musk has called for.