Consistent characters in AI videos with Eggnog

Eggnog enables AI-generated videos with consistent characters. First you create a character, including outfits, then you storyboard the planned scenes of the clip, and finally you generate the video. Eggnog aims to become the “YouTube for AI videos”. Sources: TechCrunch, Y Combinator

Video AI Pika adds sound

Pika already offers a “lip sync” feature that makes people speak in generated videos. Now there is also an option to add sound to a generated clip, such as background noises and effects. Source: VentureBeat

Video AI Story.com is promoting longer clips

While many AI videos are only a few seconds long, Story.com allows clips of up to one minute. A storyboarding feature is meant to ensure that the finished clips actually match the user’s vision.

EMO makes Mona Lisa sing

The research project EMO from China makes a photo (or a graphic or a painting like the Mona Lisa) talk and sing. The facial expressions are quite impressive; the lip movements are less consistently so. Unfortunately, there is no way to try EMO yourself.

Pika Lip-Sync Feature

AI video generator Pika shows off its lip sync feature that makes people speak in AI videos. The voice is either pre-recorded or created from text using Elevenlabs’ AI. The feature is currently only available to paying “Pro” users. Read more on VentureBeat.

OpenAI Sora: AI video at a new level of quality

OpenAI has caused quite a stir with a preview of its video AI “Sora”. The examples on the official website are indeed impressive. However, without hands-on testing, it is not yet possible to meaningfully assess how well Sora performs in everyday use and what the video clips can be used for. … Read more