Runway launches GWM-1 world model for robotics and simulation

Runway has released GWM-1, its first world model, a system designed to simulate physical environments in real time. The announcement positions the AI video company alongside competitors like Google in the emerging world model space.

A world model is an AI system that builds an internal representation of how environments work, allowing it to predict future events without explicit training on every scenario. Runway’s approach generates video frame by frame and can be controlled through inputs like camera movements, robot commands, and audio.
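The autoregressive loop described above can be sketched in a few lines. The names, shapes, and the toy "model" below are invented for illustration and are not Runway's actual API; the point is only the structure: each frame is predicted from the previous frame plus a control input such as a camera movement.

```python
import numpy as np

# Illustrative sketch of an action-conditioned world model loop.
# All names and shapes here are hypothetical, not Runway's API.
# Frames are downscaled from 720p to keep the demo lightweight.
HEIGHT, WIDTH = 72, 128
FPS = 24  # matches the 24 fps figure Runway cites

def predict_next_frame(prev_frame: np.ndarray, action: dict) -> np.ndarray:
    """Stand-in for the learned model: shifts pixels by the requested
    camera pan so the loop runs end to end without real weights."""
    dx = int(action.get("pan_x", 0))
    return np.roll(prev_frame, shift=dx, axis=1)

def rollout(first_frame: np.ndarray, actions: list) -> list:
    """Generate one frame per control input, each conditioned on the last."""
    frames = [first_frame]
    for action in actions:
        frames.append(predict_next_frame(frames[-1], action))
    return frames

frame0 = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
controls = [{"pan_x": 4}] * (FPS * 2)  # two seconds of a steady rightward pan
video = rollout(frame0, controls)
print(len(video))  # seed frame plus 24 fps x 2 s = 49 frames
```

The same loop generalizes to the other control signals Runway describes: robot commands or audio features would simply replace the camera-pan dictionary as the conditioning input at each step.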

The company released three specialized versions of the model. GWM-Worlds creates explorable virtual environments that maintain spatial consistency as users navigate through them. GWM-Robotics generates synthetic training data for robot development and allows testing of robotic policies in simulation rather than on physical hardware. GWM-Avatars produces interactive human characters with realistic facial expressions and gestures driven by audio input.

According to Runway CTO Anastasis Germanidis, the company views pixel prediction at scale as the path to general-purpose simulation. The system runs at 24 frames per second in 720p resolution and can generate up to two minutes of video.

Runway also updated its Gen-4.5 video model with native audio generation and editing capabilities, plus multi-shot video editing that applies consistent changes across sequences of arbitrary length. These features bring the model closer to competitor offerings like Kling’s integrated video suite.

The company stated it is in discussions with robotics firms and enterprises about implementing GWM-Robotics and GWM-Avatars. GWM-Robotics will be available through a Python SDK.

Sources: Runway Announcement, TechCrunch
