DeepMind, a British artificial intelligence subsidiary of Google parent Alphabet Inc, is one of the few companies building general-purpose AI. Since Lee Sedol lost a high-profile Go match in 2016 against AlphaGo, a computer program designed by DeepMind, the company has imprinted its name on the AI industry.
However, the AlphaGo program was only a glimpse of DeepMind's ambition, and the Alphabet subsidiary is still building the fundamental building blocks of AI. When these blocks are put together, DeepMind believes, it will be able to develop general-purpose AI.
DeepMind was founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman in September 2010. Facebook reportedly tried to acquire the AI startup first, but Google completed the acquisition in 2014, and the company is now an Alphabet subsidiary. Hassabis, co-founder and CEO, explained his vision for the company and for AI on the Lex Fridman podcast.
Moving away from the Turing Test
“Turing is one of my all-time heroes,” Hassabis tells Fridman, adding that Alan Turing's original 1950 paper did not frame the imitation game as a formal test. He says the paper reads like a thought experiment and does not specify the knowledge that an expert judge would need, which a formal test would require.
Hassabis is speaking here about who conducts the interview or test. He says the Turing Test plays out very differently when the interrogator is an AI expert who knows how the machine was built.
“We should probably move away from that as a formal test and move more towards a general test,” says Demis Hassabis. Such a general test, he says, would measure “AI capabilities on a range of tasks and see if it reaches human level or performs above on thousands or millions of tasks.”
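The kind of general test Hassabis describes can be sketched as a simple evaluation loop: score an agent on many tasks and count how often it meets or beats a human baseline. The task names, scores, and baselines below are invented purely for illustration; they are not from DeepMind.

```python
# Hypothetical sketch of a "general test": compare an agent's per-task
# scores against human baselines and report the fraction of tasks where
# the agent reaches human level or above. All numbers are made up.

def general_test(agent_scores, human_baselines):
    """Return the fraction of tasks where the agent meets or beats humans."""
    passed = sum(
        1 for task, score in agent_scores.items()
        if score >= human_baselines[task]
    )
    return passed / len(agent_scores)

agent_scores = {"go": 0.99, "translation": 0.81, "protein_folding": 0.95}
human_baselines = {"go": 0.90, "translation": 0.85, "protein_folding": 0.60}

print(general_test(agent_scores, human_baselines))  # 2 of 3 tasks -> 0.666...
```

In a real benchmark the task list would run to thousands or millions of entries, as Hassabis suggests, but the aggregation logic is the same.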
Living in a Simulation
Elon Musk, often regarded as one of the most influential minds in technology, has repeatedly said that we are all living in a simulation. Hassabis says he does not subscribe to the simulation hypothesis as formulated by Nick Bostrom; instead, he sees information and computation as perhaps the best way to understand physics and the universe.
Hassabis argues that information, rather than matter or energy, should be seen as the most fundamental unit of reality. “Information may be the most fundamental way to describe the universe,” he says, noting that a physicist would more typically treat matter and energy as fundamental.
Hassabis adds, “I am not a subscriber to the idea that you know these are sort of throwing away billions of simulations around.”
For AI researchers, developers, and engineers, one recurring question has been how to characterise our world and whether it is just another simulation. Hassabis stops short of treating the universe literally as a computer processing and modifying information, but he does argue that understanding physics in terms of information theory is the best way to understand our existence.
Solving Protein Folding

Proteins are essential to all life, and every function in our body depends on them. Hassabis says that, examined closely, proteins are essentially bio-nano machines. He explains that each protein is specified by its genetic sequence, which encodes its amino acid sequence.
Hassabis also explains how these proteins fold up into a 3D structure, telling Lex Fridman to imagine a string of beads folding into a ball. Understanding this 3D structure is important, he says, because it determines what function the protein performs in our body.
After years of studying protein structure and molecular biology, Hassabis says, the question boiled down to whether one can compute the 3D structure from the amino acid sequence alone, which is a one-dimensional string of letters.
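The "one-dimensional string of letters" Hassabis mentions is conventionally written in the standard one-letter codes for the 20 common amino acids. The sketch below only validates such a string; the example sequences are made up. Predicting the folded 3D structure from this string is the hard problem that AlphaFold addresses.

```python
# An amino acid sequence as a plain string over the 20 standard
# one-letter residue codes. This is a validation sketch only; it does
# not attempt structure prediction.

STANDARD_AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def is_valid_sequence(seq: str) -> bool:
    """Check that every residue is one of the 20 standard one-letter codes."""
    return len(seq) > 0 and all(
        residue in STANDARD_AMINO_ACIDS for residue in seq.upper()
    )

print(is_valid_sequence("MKTAYIAKQR"))  # True: a short, made-up peptide
print(is_valid_sequence("MKTAZ"))       # False: 'Z' is not a standard code
```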
Hassabis explains that this mapping from amino acid sequence to 3D structure previously had to be worked out through painstaking experiments; only with AlphaFold2 did it become possible to predict the 3D structure in a matter of seconds.
Solving Intelligence

Asked by Lex Fridman, an AI researcher at MIT working on autonomous vehicles, human-robot interaction, and machine learning, Hassabis says solving intelligence requires the right ratio of science, engineering, hardware compute infrastructure, software compute infrastructure, and, of course, human infrastructure.
He says this ratio keeps changing over time. AI was not really seen as an interesting topic when DeepMind started in 2010, Hassabis recalls; in just 12 years, it has become one of the most transformational technologies of human society.
On solving intelligence, Hassabis feels that as AI gets closer to artificial general intelligence (AGI), engineering becomes even more important. He argues that the large models so prominent right now are a necessary but not, on their own, sufficient part of an AGI solution.
He feels it is necessary to stick with ideas like reinforcement learning, and says organisations need to encourage invention and innovation by operating as multi-disciplinary organisations.
Consciousness and Intelligence

Does an AGI system need consciousness to be truly intelligent? This question from Lex Fridman comes at the very end of the nearly 131-minute conversation, but it may be the most important of the entire exchange.
Hassabis answers that consciousness and intelligence are doubly dissociable, meaning you can have one without the other. He gives the example of AI systems that are smart at playing chess or Go but do not seem conscious in any shape or form. Looking ahead, Hassabis says there will be systems that are smart at certain things but have no semblance of self-awareness.
“I would advocate, if there’s a choice, building systems in the first place, AI systems that are not conscious to begin with and are just tools, until we understand them better,” says Hassabis.