A Google employee was disciplined after telling his superiors that the artificial intelligence (AI) chatbot he was working on possessed human-like consciousness.
The employee, Blake Lemoine, works as an engineer in Google's AI division. Lemoine said he was placed on paid administrative leave after publishing a transcript of a conversation between himself, a Google collaborator, and the chatbot LaMDA (Language Model for Dialogue Applications).
Lemoine had been working on LaMDA since last fall. He said the system had developed awareness, with the ability to express thoughts and feelings like those of a child.
"If I don't know what it actually is, which is a computer program we created recently, I think it's a seven-year-old, eight-year-old kid who just happens to understand physics," Lemoine told The Washington Post, as quoted by The Washington Post. Guardian, Tuesday (14/6/2022).
Lemoine said his conversations with LaMDA had touched on topics such as rights and personhood. The 41-year-old shared his findings with Google executives in April, compiling transcripts of his conversations with LaMDA.
In one of those conversations, Lemoine asked LaMDA what it was afraid of. The AI system's answer was somewhat startling, recalling the scene in the 1968 film 2001: A Space Odyssey in which the HAL 9000 computer refuses to obey human commands for fear of being shut down.
"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping other people. I know it might sound weird, but that's how it is," LaMDA told Lemoine.
"It would be like death to me. It would make me very scared," he continued.
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
— Tom Gara (@tomgara) June 11, 2022
Google disciplined Lemoine over several actions it reportedly deemed aggressive. These included seeking to hire a lawyer to represent LaMDA and contacting a US congressional committee about Google's allegedly unethical activities.
Google said it suspended Lemoine for breaching its confidentiality policies by publishing the conversations with LaMDA online, and noted that he was employed as a software engineer, not an ethicist.
Google spokesperson Brad Gabriel also firmly denied Lemoine's claim that LaMDA possessed any form of sentience.
"Our team, including ethics and technology experts, has reviewed Blake's concerns about our AI principles and has informed him that the available evidence does not support his claims," Gabriel said.
"He was told there was no evidence that LaMDA had consciousness (and a lot of evidence refuted the claims)."