Artificial intelligence has reached a point where most people have formed a clear opinion about its impact. For many, AI is a revolutionary technology that will radically transform our society. But many others question the promise of AI and warn of its perils.
Max Erik Tegmark, a Swedish-American physicist, cosmologist, and machine learning researcher, is one of the most prominent voices asking hard questions about AI. As president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology, Tegmark knows more about AI and its possibilities than most of us do.
Lately, Tegmark has been voicing concerns about AI safety, and his questions could be what keeps us from a doomsday scenario. On an 80,000 Hours podcast with Rob Wiblin, Tegmark discusses why AI remains top of mind for him and how recent AI developments have shaped his thinking.
Who is Max Tegmark?
Born in 1967, Max Tegmark is a Swedish-American physicist specialising in cosmology at MIT. He is also a machine learning researcher, but his interests are not limited to cosmology or AI. In 2014, Tegmark published the best-selling nonfiction book “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” which surveys recent developments in astrophysics and quantum theory.
In 2017, he followed up with another best-seller, “Life 3.0: Being Human in the Age of Artificial Intelligence.” A must-read in the field, the book examines the impact of AI on the future of life on Earth and beyond.
The book also examines the societal implications of artificial intelligence, and through it Tegmark became one of the prominent voices arguing for AI safety. He discusses what can be done to maximise the chances of a positive outcome and explores potential futures for technology, for humanity, and for the combination of the two.
Tegmark has received research grants from Elon Musk to investigate the existential risk from advanced AI. He co-founded the non-profit Future of Life Institute, which works to reduce “the threats to humanity’s future including nuclear war, synthetic biology, and AI.”
Over the years, Max Tegmark has raised the alarm about many things, including the impact of social media algorithms on our news consumption and the use of killer robots. In 2015, he made a New Year’s resolution he called “put up or shut up,” which led to the first Puerto Rico conference and the Open Letter on Artificial Intelligence.
Intelligence is not mysterious
On the podcast, Max Tegmark explains that it is a common misconception among non-scientists that intelligence is “something mysterious” that can only exist inside biological organisms like human beings.
Tegmark argues that intelligence is about information processing, and that it doesn’t matter whether “the information is processed by carbon atoms in neurons, in brains, in people, or by silicon atoms in some GPU somewhere.”
He adds that this idea of information processing has been at the core of the AI revolution. Tegmark also believes people often underestimate the future progress possible with AI: in his view, we do not need to figure out how intelligence works before we can build machines that are smarter than us.
Ensuring AI benefits us all
Before speaking about artificial general intelligence, Tegmark talks about the possibility of building AI that benefits society. He says with AI, you can either build services and goods that can be “shared so everybody gets better off” or cause a situation where there is “incredible power concentration.”
He argues that before we worry about a phase where machines take over, we need to contemplate the current situation, where wealth and power are concentrated in the hands of a few.
“If we believe in the democratic ideal, the solution is obviously to figure out a way of making this ever-growing power that comes from having this tech be in the hands of people of Earth, so that everybody gets better off,” Tegmark says.
Even though AI has often been associated with a distant, dystopian future, Tegmark says society needs to envision a positive one. “I also think it’s important that this job of articulating and inspiring positive vision is not something we can just delegate to tech nerds, like me,” he adds.
He says it would have been impossible to predict the advancements in machine learning as recently as seven years ago. Tegmark points to architectures like the “transformer,” which are responsible for major advances in how AI models train and process information to deliver inference.
Need for regulation
One notable development right now is that major tech companies are themselves asking for AI regulation. Regulation is generally bad for companies, yet with AI there is strong demand from the industry for a framework. Tegmark says this is not new.
He says tobacco and oil companies also asked for regulation, but they were so powerful that they successfully pulled off a regulatory capture. He worries the same could happen with tech companies. “Whenever the regulator becomes smaller or has less money or power than the one that they’re supposed to regulate, you have a potential problem like this,” Tegmark says.
Tegmark does not seem worried about AGI so much as the possibility that big tech companies will completely capture power. He says that even at big AI conferences, there are researchers who neglect to mention the grants they received from big tech companies. Tegmark says he is worried by this “capture of academics” in the AI community.