Ray3 Modify transforms video footage in breathtaking ways

Luma AI has released Ray3 Modify, a new video generation model that allows users to modify existing footage while preserving the original performance. The tool is available through the company’s Dream Machine platform.

Ray3 Modify addresses a limitation in AI video generation: the difficulty of maintaining timing, motion, and emotional intent when transforming scenes. The model uses human-led input footage as the foundation, allowing AI to follow real-world motion, timing, and emotional delivery rather than generating content from scratch.

Core capabilities for video production

The new model introduces several features designed for professional workflows. Users can now provide start and end frames to guide video transitions and maintain spatial continuity. A character reference feature lets creators map a custom character identity onto an actor’s original performance, maintaining costume and likeness consistency throughout a shot.

According to Luma AI, the model preserves an actor’s original motion, timing, eye line, and emotional delivery while transforming visual attributes and environments. The company states that the enhanced architecture delivers more reliable adherence to physical motion and composition.

“Generative video models are incredibly expressive but also hard to control,” said Amit Jain, CEO and co-founder of Luma AI. “This means creative teams can capture performances with a camera and then immediately modify it to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot.”

How the system works

Users can upload video footage of up to 10 seconds and provide a character reference image. The system adapts the character reference to match the visual style and lighting of the input video. A modify strength slider controls how closely the system follows the original footage, ranging from subtle changes like retexturing to more abstract transformations.

The tool allows multiple workflow combinations. Users can modify video with character references alone, combine keyframes with character references, or use the reference mode for text-to-video generation with consistent character identity. When modifying keyframes, users can write instructions that reference both the character image and the target frame.
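Luma AI has not published a programmatic interface in this announcement, so as a rough illustration only, the workflow parameters described above (input clip, modify strength, character reference, optional keyframes) could be modeled like this; every name here is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

MAX_CLIP_SECONDS = 10  # per the announcement, input clips are capped at 10 seconds


@dataclass
class ModifyRequest:
    """Hypothetical sketch of a Ray3 Modify job, not an actual Luma API."""
    input_video: str                           # path or URL of the source footage
    prompt: str                                # text instructions for the transformation
    clip_seconds: float                        # duration of the uploaded clip
    modify_strength: float = 0.5               # 0.0 = subtle retexture, 1.0 = abstract transform
    character_reference: Optional[str] = None  # image used for identity consistency
    start_frame: Optional[str] = None          # optional keyframe guiding the opening of the shot
    end_frame: Optional[str] = None            # optional keyframe guiding the close of the shot

    def __post_init__(self) -> None:
        # Validate the two constraints the announcement describes:
        # a bounded strength slider and a 10-second input limit.
        if not 0.0 <= self.modify_strength <= 1.0:
            raise ValueError("modify_strength must be between 0.0 and 1.0")
        if self.clip_seconds > MAX_CLIP_SECONDS:
            raise ValueError(f"input clips are limited to {MAX_CLIP_SECONDS} seconds")


# Example: modify a clip with a character reference, leaving keyframes unset.
req = ModifyRequest(
    input_video="take_03.mp4",
    prompt="Place the actor on a rain-soaked rooftop at night",
    clip_seconds=8.0,
    modify_strength=0.4,
    character_reference="hero_concept.png",
)
print(req.modify_strength)
```

The point of the sketch is simply that the three workflow combinations in the article correspond to which optional fields are set: character reference alone, keyframes plus character reference, or reference-only text-to-video.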

Luma AI describes the model as purpose-built for hybrid workflows where creative authority starts with the performer or camera, and AI extends or transforms that direction. The company positions Ray3 Modify as a tool for production workflows in film, advertising, and post-production.

The startup competes with companies like Runway and Kling in the AI video generation space. Luma AI received $900 million in funding in November, led by Humain, with participation from Andreessen Horowitz, Amplify Partners, and Matrix Partners. The company’s models are used by entertainment studios, advertising agencies, and technology partners including Adobe and AWS.

Sources: Luma AI, TechCrunch
