Runway announces video generation AI model 'Gen-3 Alpha,' which creates 5-10 second videos and will be available to everyone within days
AI company Runway has announced Gen-3 Alpha, a new video generation AI model trained on new infrastructure built for large-scale multimodal training.
Introducing Gen-3 Alpha: A New Frontier for Video Generation
https://runwayml.com/blog/introducing-gen-3-alpha/
Runway unveils new hyper-realistic AI video model Gen-3 Alpha | VentureBeat
https://venturebeat.com/ai/runway-unveils-new-hyper-realistic-ai-video-model-gen-3-alpha-capable-of-10-second-long-clips/
Gen-3 Alpha is an AI model that can generate highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art direction.
Introducing Gen-3 Alpha: Runway's new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. https://t.co/YQNE3eqoWf
(1/10) pic.twitter.com/VjEG2ocLZ8
— Runway (@runwayml) June 17, 2024
Runway describes Gen-3 Alpha as the first in a series of upcoming AI models it will train on infrastructure built for large-scale multimodal training, and as an important step toward its goal of building general world models.
The video attached to the post below was generated from the prompt, 'Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.'
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training, and represents a significant step towards our goal of building General World Models.
Prompt: Subtle reflections of a woman on the window… pic.twitter.com/Lw54twUTbs
— Runway (@runwayml) June 17, 2024
Gen-3 Alpha is trained jointly on video and images, so it can generate video from text, video from images, and images from text. It will also power Runway's existing control modes, such as Motion Brush, Advanced Camera Controls, and Director Mode, as well as upcoming tools for even finer-grained control. Gen-3 Alpha will be released with new safety measures, including an improved version of Runway's visual moderation system and support for C2PA, a standard for verifying the provenance of digital content.
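To make the C2PA mention more concrete, below is a minimal Python sketch of how one might inspect the Content Credentials of a downloaded clip using the open-source c2patool CLI from the Content Authenticity Initiative. The file name is a placeholder, the JSON field names reflect c2patool's manifest-store report, and whether or how Runway embeds C2PA manifests in exported Gen-3 Alpha videos is not detailed in the announcement.

```python
import json
import subprocess

# Placeholder path to a locally saved clip; replace with a real file that
# carries Content Credentials.
CLIP_PATH = "gen3_clip.mp4"

# c2patool (https://github.com/contentauth/c2patool) prints a file's C2PA
# manifest store as JSON when invoked with just the file path. It exits with
# an error if the file carries no manifest.
result = subprocess.run(
    ["c2patool", CLIP_PATH],
    capture_output=True,
    text=True,
    check=True,
)

manifest_store = json.loads(result.stdout)

# Print which tool signed each manifest (the "claim generator"), if present.
for label, manifest in manifest_store.get("manifests", {}).items():
    print(label, "->", manifest.get("claim_generator"))
```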
The video attached to the post below was generated from the prompt, 'An astronaut running through an alley in Rio de Janeiro.'
Trained jointly on videos and images, Gen-3 Alpha will power Runway's Text to Video, Image to Video and Text to Image tools, existing control modes such as Motion Brush, Advanced Camera Controls and Director Mode, and upcoming tools to enable even more fine-grained control over… pic.twitter.com/sWXIb3NXgm
— Runway (@runwayml) June 17, 2024
Runway emphasizes that Gen-3 Alpha was trained from the ground up for creative applications and is the result of a collaborative effort by a cross-disciplinary team of research scientists, engineers, and artists.
The video attached to the post below was generated with the prompt 'FPV moving through a forest to an abandoned house to ocean waves.'
Gen-3 Alpha was trained from the ground up for creative applications. It was a collaborative effort from a cross-disciplinary team of research scientists, engineers and artists.
Prompt: FPV moving through a forest to an abandoned house to ocean waves.
(4/10) pic.twitter.com/537sUxJ85A
— Runway (@runwayml) June 17, 2024
Runway is also partnering with major entertainment and media companies to develop custom versions of Gen-3 Alpha as part of the Gen-3 model family, enabling more stylistically controlled and consistent character generation tailored to specific artistic and narrative requirements.
The video attached to the post below was generated from the prompt, 'An older man playing piano, lit from the side.'
As part of the family of Gen-3 models, we have been collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha.
Customization of Gen-3 models allows for even more stylistically controlled and consistent characters,… pic.twitter.com/ebfyfzGoJv
— Runway (@runwayml) June 17, 2024
Runway said of Gen-3 Alpha, 'This leap forward in technology represents a significant milestone in our commitment to empowering artists, paving the way for the next generation of creative and artistic innovation.' Gen-3 Alpha will be available to everyone within the next few days.
The video attached to the post below was generated from the prompt, 'A slow cinematic push in on an ostrich standing in a 1980s kitchen.'
This leap forward in technology represents a significant milestone in our commitment to empowering artists, paving the way for the next generation of creative and artistic innovation.
Gen-3 Alpha will be available for everyone over the coming days.
Prompt: A slow cinematic push… pic.twitter.com/cLaZvGpeu6
— Runway (@runwayml) June 17, 2024
The video attached to the post below was generated from the prompt, 'A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.'
Prompt: A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.
(7/10) pic.twitter.com/5cVSVdc9bf
— Runway (@runwayml) June 17, 2024
The video attached to the post below was generated from the prompt, 'A colossal statue of an ancient warrior stands tall on a cliff's edge. The camera circles slowly, capturing the warrior's silhouette.'
Prompt: A colossal statue of an ancient warrior stands tall on a cliff's edge. The camera circles slowly, capturing the warrior's silhouette.
(8/10) pic.twitter.com/4m27YkaOJ0
— Runway (@runwayml) June 17, 2024
The video attached to the post below was generated from the prompt, 'An empty warehouse, zoom in into a wonderful jungle that emerges from the ground.'
Prompt: An empty warehouse, zoom in into a wonderful jungle that emerges from the ground.
(9/10) pic.twitter.com/gAUdKPDfvl
— Runway (@runwayml) June 17, 2024
The video attached to the post below was generated from the prompt, 'Handheld camera moving fast, flashlight light, in a white old wall in an old alley at night a black graffiti that spells 'Runway'.'
Prompt: Handheld camera moving fast, flashlight light, in a white old wall in an old alley at night, a black graffiti that spells 'Runway'.
(10/10) pic.twitter.com/xRreX33g0r
— Runway (@runwayml) June 17, 2024
According to technology media outlet VentureBeat, the initial release of Gen-3 Alpha will let users generate 5-10 second videos, with a 5-second video taking about 45 seconds to generate and a 10-second video about 90 seconds. Runway says Gen-3 Alpha will be available to everyone within the next few days, but according to VentureBeat's interview with the company's CTO, Anastasis Germanidis, it will first be offered to Runway's paid subscribers.
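As an illustration of what driving a text-to-video service like this from code could look like, here is a minimal Python sketch of a submit-and-poll workflow: send a prompt, then check periodically until the clip is ready. The endpoint, parameter names, and response fields are hypothetical; Runway has not published API details for Gen-3 Alpha in this announcement, and the polling interval is simply sized to the 45-90 second generation times reported above.

```python
import time
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint, not Runway's actual API
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential


def generate_clip(prompt: str, duration_s: int = 5) -> str:
    """Submit a text-to-video job and return the URL of the finished clip.

    The field names ('prompt', 'duration', 'id', 'status', 'video_url') are
    assumptions made purely for illustration.
    """
    # Kick off the generation job.
    resp = requests.post(
        f"{API_BASE}/text-to-video",
        headers=HEADERS,
        json={"prompt": prompt, "duration": duration_s},
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["id"]

    # Poll until the job finishes; with generation times of roughly 45-90
    # seconds, checking every few seconds is more than enough.
    while True:
        job = requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS, timeout=30).json()
        if job["status"] == "succeeded":
            return job["video_url"]
        if job["status"] == "failed":
            raise RuntimeError(f"generation failed: {job}")
        time.sleep(5)


if __name__ == "__main__":
    url = generate_clip("An astronaut running through an alley in Rio de Janeiro.", duration_s=5)
    print("Finished clip:", url)
```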
It is not clear what data was used to train Gen-3 Alpha. VentureBeat points out that most major generative AI models do not disclose in detail what data they are trained on, and that it is unclear whether the data was obtained through paid license agreements or scraped from the Internet. When VentureBeat asked Runway about Gen-3 Alpha's training data, the response was vague: 'We have an in-house research team that oversees all training, and we train the models using carefully selected in-house datasets.'