Microsoft unveils a free tool set that lets drones and robots learn autonomous behavior in virtual space

Microsoft has released a beta version of a tool set for training the AI (artificial intelligence) of machines such as drones and robots to operate autonomously and safely. The tool set's distinguishing feature is that learning can proceed entirely in a simulated virtual space on a computer, without using an actual drone or other machine.

Microsoft shares open source system for training drones, other gadgets to move safely on their own - Next at Microsoft

Microsoft lets you crash drones and robots in its new real world simulator - The Verge

In developing technologies that allow machines such as drones and automobiles to fly or drive autonomously, the AI must learn correct judgment and control by repeatedly experiencing a wide variety of situations. However, repeating tests in real space with actual drones and cars incurs a great cost every time a drone crashes or a car collides with an obstacle, and there is always a risk of damaging property or injuring people.

On February 15, 2017, Microsoft released on GitHub a beta version of a tool set that eliminates such risks and enables machine learning at a dramatically lower cost.

GitHub - Microsoft/AirSim: Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research

The following movie shows the tool set actually being used to let a drone learn autonomous flight.

AirSim Demo - YouTube

The drone is about to take off. Three sub-screens are displayed at the bottom of the screen, showing feeds from the drone's sensors and camera. Note, however, that all of these are simulated. The rightmost screen shows the image that the drone's camera is (supposed to be) capturing.

The drone has taken off. As its altitude changes, the sub-screens also update in real time. The leftmost screen visualizes the distance to objects, acquired by a depth sensor, as shades of color.

The middle screen shows the result of "segmentation," which divides the objects seen by the camera into elements. From the way utility poles, buildings, roads, and trees are color-coded, you can see how the AI processes the scene in order to recognize its situation.
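As a rough illustration of what these two sub-screens represent, depth readings can be mapped to shades of gray and per-pixel class labels to fixed colors. The helper functions and color values below are hypothetical, for explanation only; they are not AirSim's actual implementation.

```python
# Hypothetical sketch of the depth and segmentation sub-screens.
# Not AirSim's actual code; the color mappings are made up for illustration.

def depth_to_shade(depth_m, max_depth_m=100.0):
    """Map a depth reading in meters to a grayscale value in 0..255."""
    clamped = max(0.0, min(depth_m, max_depth_m))
    return int(255 * clamped / max_depth_m)  # farther objects render brighter

# One fixed color per semantic class, as in the color-coded segmentation view.
SEGMENTATION_COLORS = {
    "utility_pole": (128, 128, 128),
    "building":     (200, 60, 60),
    "road":         (60, 60, 200),
    "tree":         (60, 160, 60),
}

def segment_to_color(label):
    """Return the display color for a class label (black if unknown)."""
    return SEGMENTATION_COLORS.get(label, (0, 0, 0))

print(depth_to_shade(25.0))       # a tree a quarter of max range away
print(segment_to_color("tree"))
```

Rendering every pixel of a frame this way would yield exactly the kind of depth map and color-coded segmentation image seen in the demo.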

Physics is also simulated: even the rotation of the drone's propellers appears to be calculated.

Sunlight filtering through the leaves of the trees and the shadows falling on the ground are reproduced realistically. This is not just to make the screen look nice: reproducing an environment faithful to reality is necessary for the AI to learn more accurately.

When the drone collides with an obstacle such as a tree, a "hit judgment" is of course performed. This lets the AI learn that its decisions and operations were not appropriate.
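At its simplest, a hit judgment of this kind can be sketched as a bounding-box intersection test between the drone and an obstacle. The sketch below only shows the idea; the simulator itself relies on Unreal Engine's far more detailed collision handling.

```python
# Illustrative hit judgment: does the drone's bounding box overlap an
# obstacle's? The real simulator uses Unreal Engine's physics; this is a toy.

def boxes_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box intersection test in 3D."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

drone = ((0.0, 0.0, 9.0), (1.0, 1.0, 10.0))   # (min corner, max corner), meters
tree  = ((0.5, 0.5, 0.0), (1.5, 1.5, 12.0))   # a 12 m standing tree

if boxes_overlap(drone[0], drone[1], tree[0], tree[1]):
    print("collision: the attempted maneuver was not appropriate")
```

A collision detected this way becomes a negative training signal, telling the learner that the sequence of decisions leading up to it should be avoided.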

To create a more realistic environment, even individual leaves are faithfully reproduced in CG.

The tool set includes an API for retrieving data and controlling the drone, along with tools for repeating tests by trial and error.

All operations and their results are recorded and can be used for deep learning.
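Putting the pieces together, such a trial-and-error loop might record every command and its outcome for later training. Everything below is a hypothetical stand-in (the toy state, policy, and hit judgment are invented for illustration), not the tool set's actual API.

```python
import json
import random

# Hypothetical trial-and-error loop: issue commands, observe outcomes, and
# log every (state, action, collided) step as training data for later learning.

def run_episode(policy, max_steps=50):
    log = []
    state = {"x": 0.0, "z": 0.0}  # toy simulator state: position and altitude
    for _ in range(max_steps):
        action = policy(state)
        state = {"x": state["x"] + action["dx"], "z": state["z"] + action["dz"]}
        collided = state["z"] <= 0.0 and state["x"] > 0.0  # toy "hit judgment"
        log.append({"state": state, "action": action, "collided": collided})
        if collided:
            break  # a failed trial still yields useful (negative) examples
    return log

random.seed(0)
explore = lambda s: {"dx": 1.0, "dz": random.choice([-1.0, 1.0])}
log = run_episode(explore)
record = json.dumps(log)  # serialized experience, ready to feed a learner
print(len(log), "steps recorded")
```

Because every step is logged whether or not the episode ends in a crash, failures are as usable for deep learning as successes, which is exactly why cheap simulated crashes are valuable.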

Regarding the tool, Ashish Kapoor, the Microsoft researcher who led the project, told The Verge: "With this tool you can run a great many experiments, and even when they fail, the cost stays far lower than in the real world. Testing every possible situation in the real world is extremely difficult, but this simulation is a luxurious environment where you can test any number of situations."

In addition, the fields expected to make use of the tool are not limited to autonomous flight and driving. Kapoor suggests that it may also support computer vision systems and data-driven machine learning.

in Software, Posted by darkhorse_log