NVIDIA unveils AI that builds 3DCG virtual worlds from real video, drastically cutting the cost of 3D environment construction


NVIDIA has succeeded in developing the world's first AI that can build a 3DCG virtual environment from real-world video. Because the AI-generated 3DCG environments can be used for training self-driving cars as well as for games and VR, the time and cost of constructing 3D environments can be cut dramatically compared with conventional methods.

NVIDIA Invents AI Interactive Graphics - NVIDIA Developer News Center
https://news.developer.nvidia.com/nvidia-invents-ai-interactive-graphics/?ncid=so-you-ndrhrhn1-66582

The following video shows what kind of 3D environments NVIDIA's AI can create from real-world footage.

Research at NVIDIA: The First Interactive AI Rendered Virtual World - YouTube


"This is the virtual world that the world's first AI rendered interactively"



After this caption, a person playing a racing game appears on screen.



The imagery shown on this display is not generated by a graphics engine; it is 3DCG that the AI created by converting real-world video.



Ting-Chun Wang of NVIDIA developed the AI, which trains a conditional generative neural network on real video and uses it to render new 3D environments. A research paper on the AI (PDF) has also been published.
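To illustrate what "conditional generative network" means in this context, here is a minimal sketch in which a generator takes a semantic label map as its conditioning input and outputs an RGB frame. The architecture, layer sizes, and class count are placeholder assumptions for illustration only, not NVIDIA's actual model.

```python
# Sketch of a conditional generator: semantic label map in, RGB frame out.
# NOT NVIDIA's vid2vid architecture; all sizes here are illustrative.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of semantic categories (road, car, building, ...)

class ConditionalGenerator(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            # encode the one-hot label map
            nn.Conv2d(num_classes, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # decode back to the input resolution
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        return self.net(label_map)

if __name__ == "__main__":
    # one-hot semantic map for a single 256x512 frame
    labels = torch.randint(0, NUM_CLASSES, (1, 256, 512))
    one_hot = torch.nn.functional.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
    frame = ConditionalGenerator()(one_hot)
    print(frame.shape)  # torch.Size([1, 3, 256, 512])
```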



"What if we were able to train an AI model that could create a new world based on a real world movie?"



The AI developed by NVIDIA turns this idea into reality: from real-world video, it can create a 3DCG virtual world.



"It is the first attempt to combine machine learning and computer graphics using deep networks," says Ming-Yu Liu, NVIDIA researcher who worked on AI development with Mr. Wang .



"Let the neural network learn so that the urban environment can be rendered using images of real city"



As training data, the researchers used footage shot from a car driving through urban areas.



In addition, a separate segmentation network is used so that high-level semantics can be extracted from the sequence of images.



In concrete terms, the images are colored using Unreal Engine 4: the objects in the video are classified into layers by type, and a different color is applied to each. For example, in the images below, the road is white, the cars are yellow, and the buildings are blue-green, and you can see that the boundaries between objects are recognized correctly.
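As a small illustration of this layered coloring, the sketch below converts a map of class IDs into a color image. The class IDs and the palette (road = white, car = yellow, building = blue-green) are assumptions chosen to match the description above, not the exact colors used in the video.

```python
# Colorize a semantic label map: each class ID gets its own layer color.
import numpy as np

# hypothetical class IDs
ROAD, CAR, BUILDING = 0, 1, 2

PALETTE = {
    ROAD: (255, 255, 255),    # white
    CAR: (255, 255, 0),       # yellow
    BUILDING: (0, 128, 128),  # blue-green
}

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Convert an (H, W) array of class IDs into an (H, W, 3) RGB image."""
    out = np.zeros(label_map.shape + (3,), dtype=np.uint8)
    for class_id, color in PALETTE.items():
        out[label_map == class_id] = color
    return out

if __name__ == "__main__":
    # a toy 4x6 label map: a road with one car and one building
    toy = np.array([
        [BUILDING, BUILDING, ROAD, ROAD, CAR, ROAD],
        [BUILDING, BUILDING, ROAD, ROAD, CAR, ROAD],
        [ROAD, ROAD, ROAD, ROAD, ROAD, ROAD],
        [ROAD, ROAD, ROAD, ROAD, ROAD, ROAD],
    ])
    print(colorize(toy).shape)  # (4, 6, 3)
```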



The network then converts this representation back into an image. That is the rough flow by which real video is turned into the AI's virtual imagery.
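The learning side of this flow can be pictured as a conditional adversarial setup: a discriminator looks at (label map, frame) pairs, and the generator is pushed to produce frames that look like the real video frames for the same label map. The sketch below is a generic conditional-GAN training step for illustration only; the model described in the paper is more elaborate (for example, it also has to keep consecutive frames temporally consistent), which is omitted here.

```python
# Generic conditional-GAN training step: the discriminator judges whether a
# frame plausibly matches a label map; the generator learns to fool it.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed, as in the generator sketch above

class Discriminator(nn.Module):
    """Scores whether a frame is a plausible rendering of a given label map."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),  # patch scores
        )

    def forward(self, label_map, frame):
        return self.net(torch.cat([label_map, frame], dim=1))

def training_step(generator, discriminator, label_map, real_frame,
                  g_opt, d_opt, bce=nn.BCEWithLogitsLoss()):
    # 1) update the discriminator: real pairs -> 1, generated pairs -> 0
    fake_frame = generator(label_map).detach()
    d_real = discriminator(label_map, real_frame)
    d_fake = discriminator(label_map, fake_frame)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) update the generator: make its frames fool the discriminator
    fake_frame = generator(label_map)
    g_score = discriminator(label_map, fake_frame)
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Here `generator` would be a conditional generator of the kind sketched earlier, and `real_frame` is the actual video frame corresponding to `label_map`.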



"This research is based on NVIDIA's AI research published at NeurIPS "



The AI can do more than convert real video into virtual imagery. "I can also make my co-author dance Gangnam Style," says Wang.



All that is prepared is a video of the Gangnam Style dance.



Based on it, the AI completes a video of Liu dancing Gangnam Style with sharp moves.
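This dance demo can be read as the same conditional-generation idea with a different conditioning input: instead of a semantic label map, the generator is conditioned on a pose representation extracted from the source dance video. The rasterization below is a hypothetical placeholder; the article does not specify the actual pose representation used.

```python
# Hypothetical sketch: turn 2D body keypoints into a per-keypoint conditioning
# map that a conditional generator could take as input instead of a label map.
import torch

def pose_to_map(keypoints: torch.Tensor, height: int, width: int) -> torch.Tensor:
    """Rasterize (N, 2) pixel keypoints into an (N, H, W) conditioning map."""
    pose_map = torch.zeros(keypoints.shape[0], height, width)
    for i, (x, y) in enumerate(keypoints.long()):
        if 0 <= x < width and 0 <= y < height:
            pose_map[i, y, x] = 1.0
    return pose_map

if __name__ == "__main__":
    keypoints = torch.tensor([[64.0, 32.0], [70.0, 90.0]])  # two toy keypoints
    print(pose_to_map(keypoints, height=128, width=128).shape)  # (2, 128, 128)
```

The resulting pose map would then be fed to a conditional generator trained on video of the target person, producing frames of that person performing the source dance.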



"This is a picture made by a machine, it is not me," Mr. Liu told with a sharp tone and the movie ended.

in Software, Video, Posted by logu_ii