Runway launches Gen-4 AI video model with improved consistency features

Runway has unveiled Gen-4, its latest AI video generation model, touting significant improvements in character and scene consistency. The model is now available to Runway’s individual and enterprise customers, according to Kyle Wiggers, who reported on the release. The company claims Gen-4 can maintain coherent environments and characters across different scenes while accurately simulating real-world physics.

Using reference images, the new model lets users generate consistent characters across varying lighting conditions. Users can also supply images of a subject and describe the desired composition in text to craft specific scenes.

“Gen-4 can utilize visual references, combined with instructions, to create new images and videos utilizing consistent styles, subjects, locations, and more,” Runway stated in a blog post.
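
To make the workflow concrete, the sketch below shows roughly what a reference-plus-prompt request could look like. It is purely illustrative: the endpoint URL, parameter names, and model identifier are assumptions, not Runway's actual developer API, which should be consulted directly.

```python
import base64
import requests

# Hypothetical endpoint and field names for illustration only;
# Runway's real API surface may differ.
API_URL = "https://api.example-video-gen.com/v1/image_to_video"
API_KEY = "YOUR_API_KEY"

def generate_scene(reference_image_path: str, prompt: str) -> dict:
    """Submit a subject reference image plus a text description of the desired composition."""
    with open(reference_image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "model": "gen-4",              # assumed model identifier
        "reference_image": image_b64,  # character/subject reference
        "prompt": prompt,              # desired composition, lighting, setting
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    resp.raise_for_status()
    # Generation services typically return a task id to poll for the finished clip.
    return resp.json()

# Example: reuse one character reference under two different lighting setups.
# generate_scene("hero.png", "the same character at a rain-soaked night market, neon lighting")
# generate_scene("hero.png", "the same character on a sunlit rooftop at golden hour")
```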

Backed by investors including Salesforce, Google, and Nvidia, Runway competes with industry giants like OpenAI and Google in the rapidly evolving AI video generation space. The company has positioned itself within the entertainment industry through studio partnerships and funding initiatives for AI-generated films.

However, Runway faces legal challenges regarding its training data sources. The company is currently involved in a lawsuit brought by artists alleging unauthorized use of copyrighted artwork for model training. Runway has invoked the fair use doctrine as its defense.

The release comes at a significant time for Runway, which is reportedly seeking new funding that would value the company at $4 billion. According to The Information, Runway aims to reach $300 million in annualized revenue this year following the launch of products like its video-generating API.
