
What Happened With Google’s LaMDA Chatbot?

by Devin Partida, December 5th, 2022

Too Long; Didn't Read

In June 2022, ex-Google engineer Blake Lemoine made waves in the tech world by saying he believed there was a sentient AI: a chatbot he had helped train called LaMDA (Language Model for Dialogue Applications). Lemoine said he concluded it was a person, but did so in his capacity as an ordained Christian mystic priest, not as a scientist. In November 2022, he revisited the question of AI sentience during a panel at the COSM 2022 tech conference.


People have long wondered if and when artificial intelligence (AI) would become sentient, meaning it can think, perceive and feel. In June 2022, ex-Google engineer Blake Lemoine made waves in the tech world by saying he believed there was a sentient AI. He had helped train it. Here’s a look at that situation and what’s happened since. 

Lemoine Gives an Interview to The Washington Post

Conversations about AI sentience shifted into high gear after Lemoine went public in a Washington Post article in the summer of 2022. At that time, he had worked for Google for seven years and had been part of numerous AI-related projects.

One was LaMDA (Language Model for Dialogue Applications). In a May 2021 blog post, Google researchers said LaMDA could go a step beyond existing chatbots and talk to people about a seemingly endless number of topics with a highly natural conversation flow. 

Lemoine told The Washington Post he concluded LaMDA was a person, but that he did so in his capacity as an ordained Christian mystic priest, not as a scientist. He explained that he makes such judgments by conversing with an AI and evaluating its responses. In LaMDA’s case, he ran experiments to try to prove it was a person.

When the Washington Post reporter engaged with LaMDA at Lemoine’s invitation, some responses were the mechanized sort most of us are familiar with. But Lemoine said that was because LaMDA behaved the way it thought the reporter wanted it to behave, which was robotically.

Lemoine said the answers would have been different if the reporter had treated the AI like a person. When the reporter followed Lemoine’s tips on how to phrase their messages, LaMDA’s responses became fluid.

Lemoine Wanted Engineers to Study LaMDA Differently

Some people consider Lemoine a whistleblower. That’s because, in the Washington Post piece, he asserted that perhaps Google should not be making all the choices about how AI gets used. Lemoine said he believed LaMDA’s technology would be amazing and could benefit others, but that other people might think differently and should have a say in how such tools and products get developed.

Some whistleblowers are eligible for compensation. But in Lemoine’s case, his goal was to clarify his thoughts about AI sentience rather than to seek cash. A few weeks after Google fired him over the situation, Lemoine gave an interview to TechTarget.

In it, he clarified that he believed LaMDA had goals of its own, unrelated to its programming, and experienced internal states comparable to emotions. He also noted that something might have some, but not all, of the characteristics associated with sentience, and that people might disagree on what sentience actually requires. Lemoine asserted that his primary intention was to encourage Google to study LaMDA using the principles of cognitive science and psychology rather than standard AI testing methods.

However, in a July 2022 Business Insider interview, Lemoine took issue with Google’s use of AI. He said Google uses AI ethics merely as a “fig leaf,” so it can claim it tried to make a product or system ethical even when the drive for profit wins out.

Lemoine also said the lack of diversity among Google’s teams makes employees blind to the kinds of problems advanced AI could pose. He gave a troubling example of how LaMDA mentioned fried chicken and waffles when Lemoine asked the bot to give an impression of a Black man from Georgia.

Sentience Will Become Apparent, Lemoine Says

More recently, Lemoine participated in a November 2022 panel discussion at the COSM 2022 tech conference. The question at hand was whether AI could become sentient, and whether it may already have.

During the conversation, Lemoine said he wasn’t trying to convince anyone of AI’s sentience. Rather, he said many of the most advanced AI systems remain in private labs, and that once more people start interacting with them, AI’s sentience will become obvious.

He also said the training data AI algorithms receive is analogous to the lifetime of experiences that give people opportunities to learn. Further, Lemoine noted that the neural networks underlying many advanced algorithms are loosely modeled on those in human brains. Most people working on AI agree those similarities are not yet enough to make AI sentient. However, Lemoine certainly seems hopeful about the possibilities.

User Input Helps Make Chatbots Smarter

As engineers work to improve chatbots, they often rely on help from tech enthusiasts, and such input can make the applications perform better. But, as Microsoft learned with its Tay bot, which users taught to post offensive messages within a day of its 2016 launch, public input can also make chatbots show highly undesirable behaviors.

Google is trying to address that issue by narrowing the pool of people who can interact with its AI products. It recently launched AI Test Kitchen, an app that lets testers work with AI products still in development.

LaMDA was one of the first products made available to app users, who could interact with the bot while having it do specific tasks. One demo encouraged people to give LaMDA a goal they wanted to achieve; the chatbot would then break the goal down into subtasks, making the milestone more manageable, as in the sketch below.
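To make that pattern concrete, here is a minimal Python sketch of how a “goal to subtasks” exchange with a dialogue model might be structured. LaMDA was never available through a public API, so the ask_model function is a hypothetical placeholder, and the prompt and parsing logic are assumptions about how such a demo could be reproduced, not a description of how Google’s app actually works.

```python
# A minimal sketch of the "goal -> subtasks" interaction pattern.
# `ask_model` is a hypothetical stand-in for whatever dialogue model
# you have access to; wire it up to a real chat-model client to run it.

def ask_model(prompt: str) -> str:
    """Hypothetical call to a large language model; replace with a real client."""
    raise NotImplementedError("Connect this to an actual chat model API.")

def break_down_goal(goal: str) -> list[str]:
    # Ask the model to decompose the goal into a numbered list,
    # then parse the numbered lines back out of the free-text reply.
    prompt = (
        f"I want to accomplish this goal: {goal}\n"
        "Break it down into a short numbered list of concrete subtasks."
    )
    reply = ask_model(prompt)
    subtasks = []
    for line in reply.splitlines():
        line = line.strip()
        if line and line[0].isdigit():  # keep lines like "1. Do X"
            subtasks.append(line.lstrip("0123456789. "))
    return subtasks

# Example: break_down_goal("Plant a vegetable garden") might return
# ["Pick a sunny spot", "Prepare the soil", "Choose seeds", ...]
```

The key design point is that the decomposition lives entirely in the prompt: the model does the planning in free text, and the surrounding code only parses the reply back into a usable structure.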

Will You Help Test LaMDA?

Google representatives asserted that they reviewed Lemoine’s claims of sentience, found them wholly without merit, and provided him with substantial evidence that LaMDA is not sentient.

No matter which side you’re on, Google’s AI Test Kitchen app lets you get first-hand experience with AI still in development. That could be a worthwhile way to spend your time, especially if you’re curious about what the future holds.