A system that lets anyone become a VTuber, richly expressing joy, anger, sorrow, and pleasure from just one image



On February 2, 2021, Pramook Khungurn, a software engineer at Google, announced that he had developed a system that can create a variety of facial expressions from a single character image. The system lets you freely move a character's eyes, mouth, irises, and so on, and it can also reflect your own movements in the character's expression in real time.

Talking Head Anime from a Single Image 2: More Expressive (Full Version)

https://pkhungurn.github.io/talking-head-anime-2/full.html

I created a system that generates more expressive animation using a single character image.
https://pkhungurn.github.io/talking-head-anime-2/index-ja.html




An overview of the system and how you can become a VTuber with it is explained in the following movie.

I created a system that allows you to become an expressive VTuber with a single image - Nico Nico Douga



Khungurn started building this system in 2019 with the goal of making it easier to become a VTuber. The system takes a single image as input and outputs that image with various facial expressions, but at first it could only rotate the character's face and open and close the eyes and mouth. At that point, the pose vector used to specify a pose had only six parameters, so the character could only perform six kinds of movement.
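Conceptually, the system is a function that takes one character image plus a pose vector and returns a re-posed image. The sketch below illustrates that interface in Python with hypothetical names (`Poser`, `pose`, `character.png`); it is not the project's actual API, just the shape of it.

```python
# Minimal sketch of the image-in, image-out interface. All names here
# are hypothetical; the real project exposes its own API.
import numpy as np
from PIL import Image

class Poser:
    """Stand-in for the neural network that re-poses a character image."""
    def __init__(self, pose_size: int):
        self.pose_size = pose_size  # 6 in the 2019 version, 42 in this one

    def pose(self, image: np.ndarray, pose_vector: np.ndarray) -> np.ndarray:
        assert pose_vector.shape == (self.pose_size,)
        # A real implementation would run the image and pose vector
        # through the trained network; this stub returns the input.
        return image

poser = Poser(pose_size=42)
source = np.asarray(Image.open("character.png").convert("RGBA"))
neutral = np.zeros(42, dtype=np.float32)  # all parameters at rest
posed = poser.pose(source, neutral)
```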



To give the character a wider range of facial expressions, Khungurn repeatedly annotated data and reworked his code, and after nine months he succeeded in growing the pose vector to 42 parameters: 3 for head rotation, 4 for iris movement, 12 for eyebrow movement, 12 for eye movement, and 11 for mouth movement.



With these 42 pose parameters, the character can take on many finely detailed facial expressions; a possible layout of such a vector is sketched below.
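The breakdown above sums to exactly 42 (3 + 4 + 12 + 12 + 11). The category sizes come from the article, but the slice names and value conventions in the following sketch are assumptions, not the project's actual parameter names.

```python
import numpy as np

# Hypothetical layout of the 42-parameter pose vector described above.
POSE_LAYOUT = {
    "head_rotation": 3,   # e.g. pitch, yaw, roll
    "iris":          4,
    "eyebrow":      12,
    "eye":          12,
    "mouth":        11,
}
assert sum(POSE_LAYOUT.values()) == 42

def make_pose(**groups: np.ndarray) -> np.ndarray:
    """Assemble a full 42-dim pose vector from per-category arrays,
    filling unspecified categories with zeros (the rest pose)."""
    parts = []
    for name, size in POSE_LAYOUT.items():
        values = groups.get(name, np.zeros(size, dtype=np.float32))
        assert values.shape == (size,)
        parts.append(values)
    return np.concatenate(parts)

# Example: tilt the head and open the mouth slightly.
pose = make_pose(head_rotation=np.array([0.0, 0.3, 0.0], dtype=np.float32),
                 mouth=np.full(11, 0.2, dtype=np.float32))
assert pose.shape == (42,)
```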



The system works with both male and female characters, and it can manipulate eyes that are partly hidden behind hair or glasses without any problems. Also, in the previous system, if the input image had a closed mouth, the mouth in the output image could not be opened; this version opens the mouth in a reasonably plausible way.

The system can also be combined with an iOS app called iFacialMocap to capture your face with an iPhone and reflect your facial expressions on the character in real time. You can see this feature in action in the movie below.
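As a rough sketch of what such a real-time pipeline involves: a face-tracking app streams per-frame values over the local network, and a receiver on the PC maps them onto pose parameters. The port number, packet format, and index assignments below are all placeholder assumptions, not the actual iFacialMocap protocol; consult the app's documentation for the real details.

```python
import socket
import numpy as np

LISTEN_PORT = 49983  # placeholder; the app's real port may differ

def parse_blendshapes(packet: bytes) -> dict:
    """Hypothetical parser: assumes 'name-value' pairs joined by '|'."""
    result = {}
    for pair in packet.decode("utf-8", errors="ignore").split("|"):
        name, sep, value = pair.partition("-")
        if sep:
            try:
                result[name] = float(value)
            except ValueError:
                pass
    return result

def blendshapes_to_pose(shapes: dict) -> np.ndarray:
    """Map a few tracked values onto an (assumed) 42-dim pose vector."""
    pose = np.zeros(42, dtype=np.float32)
    pose[19] = shapes.get("eyeBlinkLeft", 0.0)  # index assignments are
    pose[31] = shapes.get("jawOpen", 0.0)       # illustrative only
    return pose

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LISTEN_PORT))
while True:
    packet, _addr = sock.recvfrom(4096)
    pose = blendshapes_to_pose(parse_blendshapes(packet))
    # Feed `pose` and the source image to the poser network each frame.
```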

Yet Another Tool to Transfer Human Facial Movement to Anime Characters in Real Time - YouTube


It is also possible to apply facial expressions from recorded footage to any character. You can have a VTuber perform Uirō-uri, a kabuki monologue famous for its long lines...

I recited Uirō-uri and transferred the movement to a VTuber image. - YouTube


...or have the character sing a song.

I lip-synced 'Baka Mitai' and had a VTuber image sing it. - YouTube
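Driving a character from recorded footage is essentially the same pipeline run offline: estimate a pose for each video frame, re-pose the character image with it, and write the frames back out. Below is a minimal self-contained sketch using OpenCV, where `extract_pose` and `pose_character` are placeholders for a face tracker and for the network, not real components of the project.

```python
import cv2
import numpy as np

character_image = cv2.imread("character.png")  # BGR, alpha dropped

def extract_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a face tracker that turns a video frame
    into a 42-dim pose vector."""
    return np.zeros(42, dtype=np.float32)

def pose_character(image: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Placeholder for the re-posing network; returns the image as-is."""
    return image

reader = cv2.VideoCapture("performance.mp4")
fps = reader.get(cv2.CAP_PROP_FPS) or 30.0
h, w = character_image.shape[:2]
writer = cv2.VideoWriter("vtuber.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(pose_character(character_image, extract_pose(frame)))
reader.release()
writer.release()
```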


According to Khungurn, the system still has room for improvement: "only movements that the 3D model can perform can be reflected in the image" and "there are limitations on the input image." He says he plans to address these issues in his next project.



in Software, Video, Posted by log1p_kr