Technology developed that lets an AI evaluate the reliability of its own results

Artificial intelligence (AI) is used in a wide variety of situations, such as detecting novel coronavirus infections from coughing sounds and generating text on par with that written by humans. A new technology has now been developed that allows an AI to judge for itself whether the calculation results it produces are correct.

Deep Evidential Regression

A neural network learns when it should not be trusted | MIT News | Massachusetts Institute of Technology

Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted

Since AI is also applied to life-critical fields such as self-driving cars, it is very important to be able to evaluate the reliability of AI calculation results. Alexander Amini of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) therefore devised a system that allows an AI to evaluate the reliability of its own calculation results. The system is named 'Deep Evidential Regression.' 'We aimed to make the AI itself aware when it is making a wrong decision. Even if only 1% of decisions are mistakes, the goal was to detect them efficiently and reliably,' Amini said.

Systems for evaluating the reliability of AI already exist, and some of them use AI themselves. However, every existing system requires a large amount of time to produce its reliability evaluation. 'Deep Evidential Regression,' by contrast, evaluates reliability quickly: the reliability estimate is produced at the same time as the AI's calculation result.
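The published Deep Evidential Regression paper achieves this by having the network output the parameters of a Normal-Inverse-Gamma distribution rather than a single value, so that the prediction and its uncertainty fall out of one forward pass. The sketch below is a hypothetical, simplified illustration of that idea; the function names and the toy input are assumptions, not the authors' code.

```python
import numpy as np

def evidential_head(raw_outputs):
    """Map four raw network outputs to Normal-Inverse-Gamma parameters.

    gamma is the predicted value; nu, alpha, beta control the uncertainty.
    Softplus keeps nu and beta positive, and alpha is shifted above 1.
    """
    gamma, raw_nu, raw_alpha, raw_beta = raw_outputs
    nu = np.log1p(np.exp(raw_nu))             # softplus: nu > 0
    alpha = np.log1p(np.exp(raw_alpha)) + 1.0  # alpha > 1
    beta = np.log1p(np.exp(raw_beta))          # beta > 0
    return gamma, nu, alpha, beta

def prediction_and_uncertainty(gamma, nu, alpha, beta):
    """Point prediction plus both kinds of uncertainty, in the same pass."""
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)           # expected noise in the data
    epistemic = beta / (nu * (alpha - 1.0))    # model's own lack of knowledge
    return prediction, aleatoric, epistemic

# Toy example: pretend the last layer emitted these four raw numbers.
params = evidential_head(np.array([2.5, 0.0, 0.0, 0.0]))
pred, alea, epis = prediction_and_uncertainty(*params)
print(pred, alea, epis)
```

Because the uncertainty is read off the same output as the prediction, no extra sampling or repeated inference is needed, which is what makes the evaluation fast.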

In a research-stage test of 'Deep Evidential Regression,' carried out with autonomous driving in mind, an AI that estimates depth from an image was used, and the reliability judgment was output at the same time as the calculation result. The AI estimated depth with the same accuracy as existing AIs. Furthermore, when the AI got a depth estimate wrong, it simultaneously reported that the result was 'the least reliable.' In other words, the AI succeeded in evaluating its calculation results and their reliability at the same time, with high accuracy.

Furthermore, when the researchers trained a depth-recognition AI equipped with 'Deep Evidential Regression' on indoor images and then used it outdoors, the AI warned that the data it had learned could not be applied. Because the AI recognized that it was in an 'unusual' state, this technology could also be used to discover anomalies in fields such as medicine.
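One plausible way such a warning could be raised, sketched here as an assumption rather than the authors' method, is to calibrate a threshold on the epistemic uncertainties observed on the training data and flag any input whose uncertainty exceeds it:

```python
import numpy as np

def calibrate_threshold(train_uncertainties, percentile=99.0):
    """Pick a threshold above almost all uncertainties seen in training."""
    return np.percentile(train_uncertainties, percentile)

def is_out_of_distribution(uncertainty, threshold):
    """Warn when epistemic uncertainty exceeds the calibrated threshold."""
    return uncertainty > threshold

# Simulated uncertainties: indoor (in-distribution) inputs stay low,
# while an outdoor input produces a much larger value.
train_unc = np.random.default_rng(0).uniform(0.1, 0.5, size=1000)
threshold = calibrate_threshold(train_unc)
print(is_out_of_distribution(0.3, threshold))  # typical indoor input
print(is_out_of_distribution(5.0, threshold))  # unfamiliar outdoor input
```

An indoor-trained model asked about an outdoor scene would land far above the threshold, which matches the 'the learned data cannot be applied' warning described in the test.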

CSAIL computer scientist Daniela Rus said, ''Deep Evidential Regression' can be applied to a variety of AIs. By using it to assess the reliability of a trained model, it may become possible to reveal how much error the model produces and what data it is missing.'

'AI is also widely used in fields that affect human life, such as medical practice and automated driving of cars, so it is important for the AI itself to be able to evaluate the reliability of its calculation results,' Amini said.

in Science, Posted by log1o_hf