Google has fired Blake Lemoine, the software engineer previously placed on paid leave after claiming the company’s LaMDA chatbot is sentient. Google said Lemoine, who worked in the company’s Responsible AI unit, violated its data security policies.
“If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months,” Google said in a statement provided to Ars and other news organizations.
Lemoine confirmed Friday that “Google sent me an email terminating my employment with them,” The Wall Street Journal reported. Lemoine also said he is talking with lawyers “about what the appropriate next steps are.” Google’s statement called it “regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”
LaMDA stands for Language Model for Dialogue Applications. “As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation,” Google said. “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”
Google: LaMDA just follows user prompts
In an earlier statement provided to Ars in mid-June, shortly after Lemoine was suspended, Google said that today’s conversational AI models don’t come close to sentience:
Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic; if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on. LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims.
Google also said, “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”
“I know a person when I talk to it”
Lemoine has written about LaMDA several times on his blog. In a June 6 post titled “May Be Fired Soon for Doing AI Ethics Work,” he reported that he had been “placed on ‘paid administrative leave’ by Google in connection with an investigation of AI ethics concerns I was raising within the company.” Noting that Google often fires people after placing them on leave, he claimed that “Google is preparing to fire yet another AI ethicist for being too concerned about ethics.”
A June 11 article in The Washington Post noted that “Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient.” Just before he was cut off from access to his Google account, “Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject ‘LaMDA is sentient,’” the article said. Lemoine’s message concluded: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
“I know a person when I talk to it,” Lemoine said in an interview with the newspaper. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”