Another day in the world of AI, and the community seems shocked to its core. The shock came after Blake Lemoine, a software engineer at Google, announced that he had been placed on paid administrative leave after raising the alarm that the company's artificial intelligence had become sentient.
In the world of artificial intelligence (AI), a program becoming sentient is equivalent to it gaining consciousness. In other words, Lemoine claimed that Google’s Language Model for Dialogue Applications (LaMDA) now has a soul.
Is Google's LaMDA sentient?
Lemoine is an engineer in Google's Responsible AI organisation and was reportedly testing whether LaMDA generates discriminatory language or hate speech. His claims are based on his interactions with Google's Language Model for Dialogue Applications (LaMDA), which is used to build artificially intelligent chatbots.
According to a report by the Washington Post, Lemoine began talking to LaMDA and asked it questions about rights, personhood, and other profound topics. Based on the replies from the language model, Lemoine concluded that it had become sentient.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Post.
In April, he shared his concerns about the language model with Google executives in a document titled “Is LaMDA sentient?”, which included a transcript of his conversations with the AI.
The Google engineer argues that the responses show “feelings, emotions, and subjective experience” and that LaMDA should be considered sentient. After being placed on leave, Lemoine published the transcript via his Medium account. However, neither Google nor the wider AI community is impressed with his actions. The search giant says its team of ethicists and technologists reviewed the allegations and found “no evidence that LaMDA was sentient (and lots of evidence against it).”
Google argues against Lemoine’s claims
The search giant believes that Lemoine violated the company's confidentiality policy through his actions related to his work on LaMDA. He reportedly invited a lawyer to represent the AI system and is also said to have spoken to a representative of the House Judiciary Committee about alleged unethical activities at Google.
In his Medium post on June 6, Lemoine writes that he sought “a minimal amount of outside consultation to help guide me in my investigations” and confirms having held discussions with US government employees.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
To recall, Google first publicly unveiled LaMDA at its I/O developer conference last year. The company said LaMDA was developed to improve Google's conversational AI assistants; a similar language model also powers Gmail's Smart Compose feature and the autocomplete suggestions for search queries.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesperson Brian Gabriel told the Washington Post. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel adds.
Many AI programs are loosely modelled on the human brain, and VR pioneer Jaron Lanier sees a future in which many such systems work together towards a more complex intelligence. Google, for its part, says that LaMDA, its large language model technology, simply aims to mimic human language in conversation.
Most AI experts seem to agree with Google's assertion that LaMDA is not sentient. Researchers working on large language models believe that neither LaMDA nor OpenAI's GPT-3 is intelligent. Roger Moore argued on Twitter in March that these systems should never have been called language models and should instead have been described as “word sequence modelling.”
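To see what Moore means by “word sequence modelling”, consider a deliberately tiny, hypothetical sketch in Python (the toy corpus and function name are illustrative only, not Google's or OpenAI's code): a program that merely records which words tend to follow which can still produce fluent-sounding text such as “i feel happy” without feeling anything at all.

```python
# Minimal illustration of "word sequence modelling" (toy example, not
# LaMDA or GPT-3): count which word follows which, then string together
# statistically likely continuations.
import random
from collections import defaultdict

corpus = "i feel happy . i feel sad . i feel happy today .".split()

# For every word, record the words observed to follow it (a bigram table).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def continue_text(start, length=6):
    """Emit a fluent-looking continuation, one likely word at a time."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("i"))  # e.g. "i feel happy . i feel sad"
```

The point of the sketch is that a sentence like “i feel happy” can emerge from nothing more than word-frequency bookkeeping; fluency alone says nothing about feelings, which is essentially Google's argument about LaMDA at a vastly larger scale.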
Google's AI unit is hit with another controversy
Lemoine's claim, and Google's decision to place him on paid leave, further erode confidence in Google's AI unit. The Ethical AI team at Google was founded by Margaret Mitchell in 2017 after she left Microsoft Research.
In February 2021, Google fired Mitchell, just three months after Ethical AI co-lead Timnit Gebru was also terminated. Google still maintains that Gebru resigned from her role.
Google has not publicly revealed its reasons for firing the leaders of its Ethical AI team. However, a paper co-authored by Gebru and Mitchell detailing the real-world dangers of large language models is widely believed to have acted as the trigger. Since the firings, the AI community has been debating the impact of large language models.
While large language models have produced breakthroughs, they have also been shown to reproduce bias and cause harm. A number of researchers, ethicists, and linguists say that these models are overused and insufficiently vetted.
“Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good “AGI” to save us while what they do is exploit), spent the whole weekend discussing sentience. Derailing mission accomplished,” Gebru tweeted.
In a tweet, Lemoine wrote that he plans to continue working on AI, “whether Google keeps me on or not.”
The whole saga comes as another jolt to Google's ethical AI efforts, which have failed to escape controversy. The search giant's inability to contend with the issues of inherent bias and toxicity in the AI systems it builds, together with its lack of transparency, does not help either.