Five questions that developers should ask themselves to use artificial intelligence safely



A collaborative journal that describes specific problems that could be caused by Google, OpenAI, Stanford University, University of California, Berkeley, for the purpose of safely utilizing artificial intelligence (AI) "Concrete Problems in AI Safety(Specific problem regarding safety of AI)We announced. In order to briefly outline this paper, the Google Research Blog reveals "five things we should not forget in AI development".

Research Blog: Bringing Precision to the AI ​​Safety Discussion
https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html

Along with the remarkable development of AI technology, many opinions have been issued that "AI is a threat to humankind." Mr. Nick Bostrom of Oxford University is a representative of AI threat theory,About AI development It is pointed out that it is similar to "A child playing with a bomb"doing.

However, according to Google, alarming voices against such AI are mostly a desk topic. Five problems that we consider important in a long-term perspective in concrete AI development approach to cope with the potential risks that AI can think, such as Google, are as follows.

◆ Avoid negative side effects
How can I guarantee that AI can realize the purpose without disturbing the surrounding environment?When thinking about the cleaning robot, the cleaning robot that moves "to get over the flower bed to finish cleaning earlier" is making a fundamental mistake.

◆ Avoid hacking fee
There is a "hacking reward" that AI wants to enjoy hacking as a game, as there is a desire to "hack".How can I avoid hacking rewards?Because you want to clean it cleanly, you do not want a robot that dares to scrub the garbage before cleaning.

◆ Scalable monitoring
AI evolves upon receiving feedback from humans,How can I make appropriate evolution of AI by giving appropriate feedback?For example, if AI exploits the human reaction to perform a given task, you will get much more efficient feedback than asking humans one by one and getting answers.

◆ Safe trial
When the AI ​​system challenges, how can we ensure that as much as possible a bad reaction does not occur?For example, when a cleaning robot uses a mop, you should not use a wet mop to clean the electronics.

◆ against changeRobustness
How can I ensure that the AI ​​system recognizes the situation and behaves correctly under circumstances that are too different from the training environment?For example, it is necessary for AI to judge from the rule of thumb that the workplace is not as secure as the office.

We believe that building a machine learning system requires strict, open and involving various institutions to participate, and we will continue to conduct joint research with other research groups to make effective use of AI It is a policy to explore the way.

in Note, Posted by darkhorse_log