Runway releases 'Act-One,' a tool that can animate AI-generated characters with realistic facial expressions



AI development company Runway has released 'Act-One,' an AI tool that transfers a person's facial expressions to an AI-generated character from nothing more than a video of that person. Act-One is available to anyone with access to Runway's video generation AI model 'Gen-3 Alpha.'

Runway Research | Introducing Act-One
https://runwayml.com/research/introducing-act-one

Introducing Act-One | Runway - YouTube





Giving facial expressions to 3D models, a task known as 'facial animation,' has traditionally required complex processes such as motion-capture equipment and filming from multiple angles. Below is a behind-the-scenes look at the making of 'Dawn of the Planet of the Apes,' in which the actors had to wear numerous motion-capture sensors while performing.

Turning Human Motion-Capture into Realistic Apes in Dawn of the Planet of the Apes | WIRED - YouTube


In recent years, however, various AI companies have been developing tools that make facial animation much easier, and Runway's Act-One is one of them.

Act-One is an AI tool focused on modeling facial expressions: simply capture footage with a smartphone camera or similar device, and the subject's expressions can be transferred onto an AI-generated character.
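
Runway has not published programmatic details for Act-One in this announcement, so the sketch below is purely illustrative: the endpoint URL, the 'driving_video' and 'character_image' parameters, and the helper function are all invented here and are not Runway's documented API. It only shows the shape of the workflow described above, where a short video of an actor drives the expressions of a generated character.

import requests

# NOTE: The endpoint and parameter names below are hypothetical illustrations,
# not Runway's documented API.
API_URL = "https://api.example.com/v1/character-performance"
API_KEY = "YOUR_API_KEY"

def animate_character(driving_video_url: str, character_image_url: str) -> dict:
    """Submit a driving video (an actor filmed on a phone) plus a target
    character image, and return metadata for the generated performance job."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "driving_video": driving_video_url,      # footage of the actor's face
            "character_image": character_image_url,  # AI-generated character design
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    job = animate_character(
        "https://example.com/actor_take.mp4",
        "https://example.com/character.png",
    )
    print("Submitted job:", job)

At launch, Act-One is accessed through Runway's own interface rather than code; the point of the sketch is simply that the only inputs involved are a driving video and a character design, with no rigging or motion-capture data.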




Runway describes the tool this way: 'Act-One requires no motion capture or character rigging and can convert a single video input into countless different character designs and styles.'




One of Act-One's strengths is its ability to deliver realistic, cinematic-quality output across a variety of camera angles and focal lengths, letting even small creators produce complex character performances that would otherwise demand expensive equipment and elaborate workflows. According to Runway, all it takes is a smartphone camera and a single actor reading a script to create emotive content.




'We're past the point where the question was whether a generative model could produce consistent video; good models are now the new baseline,' said Cristobal Valenzuela, CEO of Runway. 'The difference is in how we think about applications and use cases, and ultimately what we build.'




Because Act-One can output realistic facial performances, it comes with a comprehensive set of safety measures, including safeguards that automatically detect and block attempts to generate unauthorized content featuring public figures and celebrities, as well as technical tools to verify that users have the rights to the voices they use. Runway also continuously monitors the tool for misuse and irresponsible use.

Act-One is being rolled out gradually to users of Runway's video generation AI model 'Gen-3 Alpha' starting October 22, 2024.

Runway releases video generation AI model 'Gen-3 Alpha', anyone can generate 5-10 second videos within a few days - GIGAZINE



Runway announced a partnership with film and television production company Lionsgate in September 2024 to use AI in film production. 'Runway is a visionary, best-in-class partner that will help us leverage AI to develop cutting-edge, capital-efficient content creation opportunities. We see AI as an excellent tool to enhance and complement our current operations,' said Michael Burns, vice chairman of Lionsgate, in a release.

in Software, Video, Posted by log1r_ut