Introducing a system that allows anyone to become a VTuber with rich facial expressions from just one image



On February 2, 2021, Pramook Khungurn, who works as a software engineer at Google, announced that he has developed a system that can create various facial expressions from a single character image. The system can freely move a character's eyes, mouth, and irises, and can also reflect the user's own facial movements on the character in real time.

Talking Head Anime from a Single Image 2: More Expressive (Full Version)

https://pkhungurn.github.io/talking-head-anime-2/full.html

I tried to make a system that generates more expressive animation with one character image
https://pkhungurn.github.io/talking-head-anime-2/index-ja.html




The following video outlines the system and shows how it can be used to become a VTuber.

I tried to make a system that can be an expressive VTuber with one image --Nico Nico Douga



Khungurn started creating this system in 2019 with the goal of making it easy to become a VTuber. The system takes a single character image as input and outputs images of that character with various facial expressions, but at first it could only rotate the character's face and open and close the eyes and mouth. At that point the pose vector used to specify poses had only 6 components, so the character could make only 6 kinds of movement.
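As a rough illustration of the interface described above, the following Python sketch shows the idea: one character image plus a short pose vector go in, and a re-posed image comes out. This is not the author's actual code; the class name, method, and pose layout are hypothetical.

```python
# Minimal conceptual sketch of the one-image-in, posed-image-out interface.
# The class and method names are hypothetical, not the author's actual API.
import numpy as np

class TalkingHeadPoser:
    """Stand-in for the neural network that re-poses a character image."""

    def __init__(self, pose_size: int = 6):
        # The first version of the system used a pose vector with only
        # 6 components (face rotation plus eye and mouth open/close).
        self.pose_size = pose_size

    def pose(self, image: np.ndarray, pose_vector: np.ndarray) -> np.ndarray:
        assert pose_vector.shape == (self.pose_size,)
        # The real system would run the image and pose vector through a
        # trained network; here we return the input unchanged as a placeholder.
        return image

poser = TalkingHeadPoser(pose_size=6)
character = np.zeros((256, 256, 4), dtype=np.float32)  # one RGBA character image
pose = np.array([0.2, 0.0, 0.1, 1.0, 1.0, 0.5])        # illustrative pose values
output = poser.pose(character, pose)
```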



Khungurn repeated rounds of annotation and programming to broaden the character's range of expression, and after nine months succeeded in expanding the pose vector to 42 components: 3 for head rotation, 4 for iris movement, 12 for eyebrow movement, 12 for eye movement, and 11 for mouth movement.



By manipulating these 42 pose-vector components, the character can take on a large number of detailed facial expressions, as sketched in the example below.
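As a hedged sketch of how such a 42-component pose vector might be organized in code, the grouping below follows the counts reported above (3 + 4 + 12 + 12 + 11 = 42); the group names and their ordering are illustrative assumptions, not the system's actual layout.

```python
# Illustrative grouping of the 42 pose-vector components; the order and names
# are assumptions for this sketch, only the per-group counts come from the article.
import numpy as np

POSE_GROUPS = {
    "head_rotation": 3,
    "iris": 4,
    "eyebrow": 12,
    "eye": 12,
    "mouth": 11,
}
POSE_SIZE = sum(POSE_GROUPS.values())  # 42

def make_pose(**groups: np.ndarray) -> np.ndarray:
    """Assemble a full 42-component pose vector from per-group value arrays."""
    parts = []
    for name, size in POSE_GROUPS.items():
        values = np.asarray(groups.get(name, np.zeros(size)), dtype=np.float32)
        assert values.shape == (size,), f"{name} expects {size} values"
        parts.append(values)
    return np.concatenate(parts)

# Example: raise the eyebrows slightly and open the mouth, everything else neutral.
pose = make_pose(eyebrow=np.full(12, 0.3), mouth=np.r_[1.0, np.zeros(10)])
assert pose.shape == (POSE_SIZE,)
```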



The system works with both male and female characters, and can move eyes that are partially covered by hair or glasses without problems. In the previous version, if the input image showed a closed mouth, the mouth could not be opened in the output image, but the new version is said to open it in a reasonably appropriate shape.

In addition, the system can capture your face with an iPhone using an iOS application called iFacialMocap and reflect your facial expressions on the character in real time. You can see this function in action in the following video.

Yet Another Tool to Transfer Human Facial Movement to Anime Characters in Real Time --YouTube
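The real-time flow described above can be sketched as a simple per-frame loop: read the face-tracking values coming from the iPhone app, map them onto the 42-component pose vector, and re-pose the character image. The function names and the mapping below are placeholders, not iFacialMocap's actual data format or the author's implementation.

```python
# Hedged sketch of a real-time loop: tracking values in, re-posed character frames out.
# `read_tracking_frame` and `tracking_to_pose` are hypothetical placeholders.
import time
import numpy as np

def read_tracking_frame() -> dict:
    """Placeholder for receiving one frame of face-tracking values from the phone."""
    return {"mouth_open": 0.7, "eye_blink_left": 0.1, "eye_blink_right": 0.1}

def tracking_to_pose(frame: dict) -> np.ndarray:
    """Placeholder mapping from tracking values to the 42-component pose vector."""
    pose = np.zeros(42, dtype=np.float32)
    # ... fill in head, iris, eyebrow, eye, and mouth components from `frame` here.
    return pose

def run(poser, character: np.ndarray, fps: int = 30) -> None:
    """Re-pose the character once per frame using the latest tracking values."""
    frame_time = 1.0 / fps
    while True:
        pose = tracking_to_pose(read_tracking_frame())
        output = poser.pose(character, pose)  # hypothetical poser interface
        # ... display `output` or feed it to streaming software here.
        time.sleep(frame_time)
```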


It is also possible to use a pre-recorded video to transfer facial expressions onto any character. For example, a VTuber image can be made to perform 'Uiro-uri', a famous long monologue...

I recited Uiro-uri and transferred the movement to a VTuber image. --YouTube


...or made to sing a song.

I lip-synced 'Idiot' and had the VTuber image sing. --YouTube


According to Khungurn, the system still has many points to improve, such as 'only movements that a 3D model can make can be reflected in the image' and 'the range of usable input images is limited.' He says these issues will be addressed in the next project.



in Software, Video, Posted by log1p_kr