Introducing a system that lets anyone become a VTuber, expressing emotions and facial movements from just one image
On February 2, 2021, Pramook Khungurn, a software engineer at Google, announced that he had developed a system that can create various facial expressions from a single character image. The system lets you move the character's eyes, mouth, and irises freely, and can also reflect your own facial movements on the character in real time.
Talking Head Anime from a Single Image 2: More Expressive (Full Version)
I tried to make a system that generates more expressive animation with one character image
I made a system (v2) that lets you become a VTuber with one image. By simply preparing a front-facing image of a character, you can operate the eyebrows, eyes, irises, and mouth, and the face can also be rotated. The character is more expressive, and the lip sync is nice too. For more information, see https://t.co/0UF9sJnQkA #MachineLearning #DeepLearning #VTuber pic.twitter.com/LQazpLh5rm — dragonmeteor (aka masterspark) (@dragonmeteor) February 3, 2021
The following movie explains the outline of the system and how you can become a VTuber with it.
I tried to make a system that can be an expressive VTuber with one image --Nico Nico Douga
Khungurn started creating this system in 2019 with the goal of making it easy to become a VTuber. The system takes one character image as input and outputs images with various facial expressions, but at first it could only rotate the character's face and open and close the eyes and mouth. At that time, the pose vector used to specify poses had only 6 components, so the character could only perform 6 types of movement.
Khungurn repeated annotation work and programming to increase the character's range of expression, and after nine months succeeded in expanding the pose vector to 42 components. The pose vector includes:

3 types of head rotation
4 types of iris movement
12 types of eyebrow movement
11 types of mouth movement

With these 42 pose vector components, the character can now produce a large number of detailed facial expressions.
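To make the idea of a pose vector concrete, here is a minimal sketch in Python. The index layout and group names are illustrative assumptions; the article only names the groups above and does not specify how the 42 components are ordered, so the remaining indices are simply left unassigned here.

```python
import numpy as np

# Hypothetical layout of the 42-component pose vector described in the
# article. The slice boundaries are assumptions for illustration only.
POSE_DIM = 42

GROUPS = {
    "head_rotation": slice(0, 3),    # 3 components (per the article)
    "iris_movement": slice(3, 7),    # 4 components
    "eyebrow":       slice(7, 19),   # 12 components
    "mouth":         slice(19, 30),  # 11 components
    # The remaining components cover parameters not enumerated in the article.
}

def make_pose(**groups):
    """Build a pose vector from named group values; unspecified entries are 0."""
    pose = np.zeros(POSE_DIM, dtype=np.float32)
    for name, values in groups.items():
        sl = GROUPS[name]
        values = np.asarray(values, dtype=np.float32)
        assert values.shape == (sl.stop - sl.start,), f"bad size for {name}"
        pose[sl] = values
    return pose

# Example: tilt the head slightly and open the mouth halfway.
pose = make_pose(head_rotation=[0.2, 0.0, -0.1],
                 mouth=[0.5] + [0.0] * 10)
```

A generator network conditioned on such a vector can then render the input character image in the requested pose, which is how a single still image yields many expressions.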
In addition, you can capture your own face with an iPhone using an iOS application called iFacialMocap and have your facial expressions reflected on the character in real time. You can see how this function is used in the following movie.
This system is compatible with both male and female characters, and it can handle eyes seen through hair or glasses without problems. Also, in the previous version, if the input image had a closed mouth, the mouth in the output image could not be opened; in this version, the mouth is said to open in a reasonably appropriate shape.
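The real-time pipeline described above (track the user's face on the phone, map the tracked parameters onto the character's pose, render a frame) could be sketched roughly as follows. The blendshape key names and the mapping are illustrative assumptions, not iFacialMocap's actual data format.

```python
# Hypothetical per-frame mapping from face-tracker blendshape weights
# (0.0-1.0) to character pose parameters. Key names are invented for
# illustration and do not reflect iFacialMocap's real protocol.

def blendshapes_to_pose(weights):
    """Convert one frame of tracker blendshape weights to pose parameters."""
    return {
        "head_rotation": [weights.get("headPitch", 0.0),
                          weights.get("headYaw", 0.0),
                          weights.get("headRoll", 0.0)],
        "mouth_open": weights.get("jawOpen", 0.0),
        "eye_blink": weights.get("eyeBlink", 0.0),
    }

# One tracked frame (values invented for the example).
frame = {"jawOpen": 0.6, "headYaw": 0.1}
pose = blendshapes_to_pose(frame)
```

In a real system this mapping would run once per video frame, with the resulting pose fed to the image generator to animate the character live.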
In addition, this system can also transfer human facial movement captured on video to anime characters in real time, as shown in the following movie.
Yet Another Tool to Transfer Human Facial Movement to Anime Characters in Real Time --YouTube
It is also possible to use recorded video to reflect facial expressions on any character, for example, making a VTuber image perform "Uiro-uri," a famously long monologue.
I recited "Uiro-uri" and transferred the movement to a VTuber image --YouTube
It is also possible to make the character sing a song.
I lip-synced "Idiot" and had a VTuber image sing it --YouTube
According to Khungurn, the system still has many points to improve: for example, only movements that a 3D model could make can be reflected in the image, and the range of usable input images is limited. He says these issues will be addressed in a future project.