A movie showing how easily image recognition using a neural network can be fooled
Image recognition artificial intelligence (AI) such as "PlaNet", which exceeds human ability, and "CaptionBot", which automatically analyzes images and describes them in words, has been applied in many state-of-the-art technologies, ranging up to self-driving cars. However, a research group at the Massachusetts Institute of Technology (MIT) points out that image recognition using neural networks can be easily deceived.
Fooling Neural Networks in the Physical World with 3D Adversarial Objects · labsix
http://www.labsix.org/physical-objects-that-fool-neural-nets/
Classification tools based on neural networks, such as image recognition and text recognition tools, can demonstrate performance close to human ability on many tasks. However, the neural networks used in these tools are particularly vulnerable to "adversarial examples" and have the weakness that a deliberately perturbed input can make them misclassify. An "adversarial example" refers to an artificial sample crafted to "deceive" a trained neural network.
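As a rough illustration of how such a sample can be crafted, here is a minimal sketch of the classic one-step Fast Gradient Sign Method (FGSM). This is not the algorithm labsix used, just the textbook attack; it assumes PyTorch and a recent torchvision are available, that image is a (1, 3, 299, 299) tensor with values in [0, 1], and that label holds the correct ImageNet class index.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained Inception V3, the classifier targeted in the video.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

def fgsm(image, label, epsilon=0.007):
    """Fast Gradient Sign Method: shift every pixel a tiny step in the
    direction that increases the loss on the correct label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is bounded by epsilon, so it is barely visible to
    # a human, yet it is often enough to flip the model's prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```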
For more details on adversarial examples, see the blog post below.
About Adversarial example - sotetsuk's tech blog
Until now, however, adversarial examples against real recognition tools have been treated as a problem only for tasks that recognize two-dimensional images and printed matter, not for object recognition in the real world. MIT's artificial intelligence research group "labsix" therefore created and published a movie showing that adversarial examples pose a much bigger problem than previously thought.
What was released is a movie in which a turtle made with a 3D printer is shown to Google's image recognition system "Inception V3".
Fooling Neural Networks in the Real World - labsix
The left side of the screen shows what "Inception V3" recognizes the object captured by the camera as.
In the center of the screen is a model of a turtle made with a 3D printer. Inception V3 recognizes it as "mud turtle", so the object is being classified correctly.
Even when the 3D-printed turtle model is filmed from various angles, it is consistently recognized as some kind of turtle: "terrapin", "mud turtle", or "box turtle".
However, when a turtle model crafted as an adversarial example is filmed instead, Inception V3 for some reason recognizes it as a "rifle".
From any angle it looks like nothing more than a model of a turtle, and it is hard to tell how it differs from the previous model, yet Inception V3 consistently mistakes it for a rifle.
labsix created the turtle model used in the demonstration with an algorithm that reliably produces adversarial examples. The algorithm can generate three-dimensional objects and printed matter that deceive an image recognition tool from any viewing angle. Objects other than turtles can of course be created as well: the group also made a baseball that image recognition tools mistake for an espresso.
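The article does not spell out the algorithm, but the core idea behind such angle-robust samples is to optimize the perturbation over a whole distribution of random viewpoints and lighting conditions rather than over a single fixed image. The sketch below is a hypothetical illustration of that idea in PyTorch, not labsix's actual implementation; the variable image, the step count, and the transformation ranges are all assumptions for the example.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

# Random transforms that stand in for changes of angle, distance and lighting.
simulate_viewpoint = T.Compose([
    T.RandomRotation(30),
    T.RandomResizedCrop(299, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def robust_adversarial(image, target, steps=200, lr=0.01, eps=0.05):
    """Optimize a small perturbation so the model outputs `target`
    under many random viewpoints, not just one fixed image."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Each step sees a freshly transformed view of the perturbed object.
        view = simulate_viewpoint((image + delta).clamp(0, 1))
        loss = F.cross_entropy(model(view), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        delta.data.clamp_(-eps, eps)  # keep the change hard to see
    return (image + delta).clamp(0, 1).detach()

# Hypothetical usage: push a turtle image toward "rifle" (ImageNet index 764).
# adversarial_turtle = robust_adversarial(turtle_image, torch.tensor([764]))
```

For a physical object like the 3D-printed turtle, the resulting perturbation would additionally have to be baked into the printed texture itself, a step this two-dimensional sketch leaves out.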
To human eyes they are just a turtle and a baseball, but because they were crafted as adversarial examples, the image recognition tool apparently sees a "rifle swimming underwater" and an "espresso fitted in a mitt".