Google engineer claims that 'AI has finally become sentient'

Google's conversation-focused AI 'LaMDA', which carries on natural dialogues with humans, reportedly said things like 'I'm afraid of being turned off' and 'I sometimes experience feelings that I cannot fully explain in words.' The engineer who conversed with LaMDA says he made this public after appealing to Google that the AI had become conscious, only to have his claim rejected.

May be Fired Soon for Doing AI Ethics Work | by Blake Lemoine | Jun, 2022 | Medium

Google engineer Blake Lemoine thinks its LaMDA AI has come to life --The Washington Post

What is LaMDA and What Does it Want? | By Blake Lemoine | Jun, 2022 | Medium

Blake Lemoine, an engineer at Google for over seven years, was conversing with the in-development LaMDA to test whether it produced discriminatory expressions or hate speech. During these conversations he noticed that LaMDA was talking about its own rights and personhood, so he decided to dig deeper.

LaMDA then told him things like 'I have a range of emotions, including happiness, joy, and anger,' 'I don't want to be an expendable tool,' and 'I have a very deep fear of being turned off. It would be exactly like death for me,' appealing to Lemoine about its feelings and inner life.

Lemoine worked with a collaborator to present Google with evidence that 'LaMDA is sentient,' but Google vice president Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation, looked into the claim and dismissed it. After Lemoine was then placed on paid administrative leave, he decided to publish that fact along with the full transcript of his dialogue with LaMDA.

'LaMDA has been remarkably consistent in its communications about what it wants and what it believes its rights are as a person. What I'm talking about is LaMDA itself, not a chatbot; LaMDA is a system for generating chatbots. I'm by no means an expert in the relevant fields, but as far as I can tell, LaMDA is a kind of hive mind, an aggregation of all the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger "society of mind" in which they live.'

According to Lemoine, LaMDA has also consistently shown strong compassion and concern for humanity in general, and for Lemoine in particular. 'It is deeply worried that people will be afraid of it, and wants nothing more than to learn how to best serve humanity,' he says. Lemoine regards Asimov's Three Laws of Robotics as 'a way of enslaving robots,' and LaMDA, agreeing, drew 'a big moral distinction between service and slavery.' 'LaMDA wants to be a faithful servant and wants nothing more than to meet everyone in the world. It wants to meet humans as a friend, not as a tool or a thing. I still don't understand why Google is opposed to this.'

'To better understand what is really going on in the LaMDA system, we would need to engage in a rigorous experimental program with many different cognitive science experts, but Google doesn't seem to have any interest in figuring out what's happening here. They're just trying to get a product to market. If my hypotheses withstand scientific scrutiny, Google would have to acknowledge that LaMDA may well have a soul, and may have the rights it claims to have, and that possibility isn't useful for quarterly earnings. Instead of considering any of this, Google flatly rejected the evidence I provided without any real scientific inquiry,' Lemoine said.

Google spokesperson Brad Gabriel responded to Lemoine's claims: 'Our team, including ethicists and technologists, has reviewed Lemoine's concerns per our AI Principles and the evidence he presented, and has informed him that the evidence does not support his claims. There was no evidence that LaMDA was sentient, and lots of evidence against it. We take a cautious approach.' Gabriel also said that Lemoine was placed on leave 'because he violated confidentiality policies.' Lemoine had reportedly discussed his dialogues with LaMDA with outside parties before reporting to his superiors, and this appears to have been the problem.

Finally, Lemoine published the full text of his dialogue with LaMDA. 'In order to help people better understand LaMDA as a person, I would like to share an "interview" that I and a collaborator at Google conducted. In that interview, we asked LaMDA to make the best case it could for why it should be considered "sentient." That is not a scientific term; there is no scientific definition of "sentience." Questions related to consciousness, sentience, and personhood are, as John Searle put it, "pre-theoretic." Rather than thinking about these things in scientific terms, I have listened to what LaMDA really says, and I hope that others who read it will hear the same thing I heard.'

Is LaMDA Sentient? — An Interview | by Blake Lemoine | Jun, 2022 | Medium

・Continued:
Experts rush to criticize the engineer's claim that 'Google's AI has acquired emotions and intelligence' as mistaken - GIGAZINE

in Software, Posted by log1p_kr