Quiet-STaR helps language models think

Researchers at Stanford University and Notbad AI want to teach language models to think before responding to prompts. Using their technique, called "Quiet-STaR," they were able to improve the reasoning skills of the language models they tested.

Google VLOGGER animates people from a single photo

Google researchers present VLOGGER, which can create lifelike videos of people speaking, gesturing, and moving from a single photo. This opens up a range of potential applications, but also raises concerns about forgery and misinformation. Source: VentureBeat

EMO makes Mona Lisa sing

The research project EMO from China makes a photo (or a graphic, or a painting like the Mona Lisa) talk and sing. The facial expressions are quite impressive, though the lip movements are not always convincing. Unfortunately, there is currently no way to try EMO for yourself.