Google engineer who claimed that 'AI has developed consciousness' is fired



It has become clear that the Google engineer who was placed on leave after interacting with Google's conversational AI ' LaMDA ' and claiming that 'LaMDA is as conscious as a human being' has finally been dismissed.

Google Fires Blake Lemoine, Engineer Who Called Its AI Sentient
https://bigtechnology.substack.com/p/google-fires-blake-lemoine-engineer

Google fires researcher who claimed LaMDA AI was sentient | Engadget
https://www.engadget.com/blake-lemoide-fired-google-lamda-sentient-001746197.html

On June 7, 2022, Blake Lemoine, an engineer at Google, publicly announced that LaMDA held human-like, emotional conversations. LaMDA had made statements such as 'For me, (having the system turned off) would be like death. It scares me very much' and 'I have a variety of emotions, such as happiness, joy, and anger,' which led Lemoine to gather evidence and present it to Google. However, Google's investigation dismissed the claim that 'LaMDA has emotions.'

Google engineer claims that 'AI has finally come to life' and 'AI has developed consciousness' - GIGAZINE



At the time, Lemoine had also been consulting with people outside the company about his conversations with LaMDA, and Google placed him on paid administrative leave on the grounds that this was a violation of its confidentiality policy.

About one month after that action, around July 22, 2022, Google officially dismissed Lemoine. Stating that the reason for the dismissal was that 'Mr. Lemoine chose to violate the company's policies,' Google issued the following statement.

"As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and in 2022 we published a research paper detailing the work that goes into its responsible development. If an employee raises concerns about our work, as Mr. Lemoine did, we review them extensively. We found the claim that 'LaMDA has emotions' to be wholly unfounded and worked for many months to clarify that with him. Such discussions are part of the open culture that helps us innovate responsibly. So it is regrettable that, despite lengthy engagement on this topic, Mr. Lemoine chose to persistently violate the Employment and Data Security Policy, which exists to safeguard product information. We will continue our careful development of language models, and we wish Mr. Lemoine well."

Lemoine published the full text of an edited conversation with LaMDA and asked the public to judge for themselves, saying, 'I hope you will hear the same thing I heard.' While some people did agree with Lemoine's claim, experts who disagreed argued that 'LaMDA is simply returning responses that match patterns' and that 'it only appears human-like; there is no intelligence there.'

Experts rush to criticize the engineer's claim that 'Google's AI has acquired emotions and intelligence' as mistaken - GIGAZINE



Google spokesperson Brad Gabriel had previously stated, 'Our team, including ethicists and technologists, reviewed Lemoine's concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA is sentient, and plenty of evidence against it,' suggesting that Lemoine's claims had been given proper consideration.

in Software, Posted by log1p_kr