What is needed for AI to be beneficial to humankind?

In an era when artificial intelligence (AI) has become commonplace across a variety of technologies, the book 'New Laws of Robotics: Defending Human Expertise in the Age of AI' summarizes what is needed to protect human expertise. Its author, Frank Pasquale, makes recommendations for putting cutting-edge technologies such as robots and AI to beneficial use for humankind.

Pando: Why we must democratize AI to invest in human prosperity, with Frank Pasquale

Frank Pasquale, a law professor at Brooklyn Law School, has authored several books on robots and AI. Pasquale cites the following four basic principles as necessary for the proper operation of robots and AI.

1: Robots and AI should complement professionals, not replace them
2: Robots and AI must not counterfeit humanity
3: Robots and AI must not intensify zero-sum arms races
4: Robots and AI must indicate the identities of their creators, controllers, and owners

Pasquale argues that these basic principles help society treat technology more humanely.

For example, if a robot used in a hospital were to replace the medical staff, a patient's relatives might receive a ruthless notification such as 'The patient is 95% likely to die within 10 days. Would you like to consider the option of ending treatment?' As this example shows, it is important that robots and AI complement professionals rather than replace them, Pasquale says.

Of course, there are many cases where robots take the place of humans in tasks such as cleaning and mining. In these cases, however, what the robot deals with is a 'building' or 'soil', which has no emotions and requires no communication. In other words, Pasquale thinks it is difficult for a robot to take the place of a human in a job where the counterpart is another human being and 'direct and empathic recognition is important'.

Pasquale also says that the second basic principle, 'Robots and AI must not counterfeit humanity,' prevents robots and AI from damaging the norms of communication. 'Some AIs are designed to deceive people into believing they have human emotions and sensations,' he says. 'As far as the study of biology and social evolution shows, emotions such as anger, empathy, and loss, and subjective experiences of all kinds, are grounded in the embodiment of having a body. So I think AI designed to imitate humans is fundamentally fraudulent.' He points out that such imitation is a mistake.

Some engineers and researchers at major American IT companies refuse to develop killer robots and related advanced weapons. These movements give researchers and engineers an opportunity to rethink 'how they work and what they work on.'

'Don't make killer robots' is recommended by Dr. Hawking and more than 1,000 researchers --GIGAZINE


Pasquale said, 'When thinking about the role of engineers in major tech companies and their own ethical perspectives, we need to praise those who have done much to bring ethical perspectives into the tech workplace, such as Meredith Whittaker and Timnit Gebru. They deserve great praise for telling the truth while resisting power, and for maintaining and expanding employee independence. We should also remember that many battles are still taking place within companies,' praising the pioneers who brought an ethical perspective to the technology industry.

He pointed out that more institutional thinking is needed to enable these people to continue working in the technology industry. For example, he suggests that workers in the research departments of technology companies need to be insulated from certain financial pressures.

Timnit Gebru, named by Pasquale, was a technical co-lead of the Ethical AI team at Google. She was fired after pointing out ethical issues with an AI language model used by Google, and more than 1,200 Google employees signed a letter protesting her dismissal.

Google's dismissal of a researcher on the Ethical AI team has been criticized as 'unprecedented censorship,' with more than 1,200 Google employees signing a protest --GIGAZINE


Pasquale also said, 'If software engineers had a professional association as active as those of lawyers and doctors, it could set standards for independence, and when an employer like Google moved in an ethically questionable direction, it could protect workers throughout the industry.'

He also pointed out that while doctors can lose their medical licenses for prescribing medications harmful enough to injure patients in pursuit of a drug company's interests, software engineers lose no license for using dark patterns or committing fraud. For unethical projects, building a framework that tells employers, 'If you demand this, the engineer's license will be revoked, so they cannot comply,' would protect engineers working in the industry and also prevent the development of unethical robots and AI, Pasquale says.

Commenting on the basic principles of robots and AI, Pasquale added, 'This kind of constraint is important and must be stipulated by law, because if it is not, an employer can simply dismiss an employee who refuses and find a worker willing to do the job instead. If only some people, on the scale of 20, 50, or 1,000, think correctly about robots and AI, they risk dismissal whenever they come into conflict with a company without legal backing.' He emphasizes the importance of strict legislation to protect engineers and researchers with the right ethics, and to prevent the production of inhumane robots and AI.

in Science, Posted by logu_ii