Symbolica wants to make AI more transparent and controllable

AI startup Symbolica is pursuing a new approach intended to give AI models human-like reasoning and far greater transparency. According to the company, it aims to overcome the “alchemy” of today’s AI systems and build a scientific foundation for interpretable, data-efficient, and controllable AI models. Source: VentureBeat

Quiet-STaR helps language models think

Researchers at Stanford University and Notbad AI want to teach language models to think before responding to prompts. Using their method, called “Quiet-STaR,” they improved the reasoning skills of the language models they tested.
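For readers who want a feel for the underlying idea, here is a minimal Python sketch of the general “generate a hidden rationale, then answer” pattern that Quiet-STaR builds on. It is only an assumption-laden illustration, not the paper’s actual training procedure (Quiet-STaR trains the model itself to produce internal rationales), and llm_generate is a hypothetical placeholder for any text-generation call.

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model completion call."""
    return f"<model output for: {prompt[:40]}...>"


def answer_with_hidden_rationale(question: str) -> str:
    # Step 1: ask the model for an internal rationale ("thought").
    rationale = llm_generate(f"Question: {question}\nThink step by step:")
    # Step 2: condition the final answer on that rationale; the rationale
    # itself stays hidden from the user.
    return llm_generate(
        f"Question: {question}\nReasoning: {rationale}\nFinal answer:"
    )


if __name__ == "__main__":
    print(answer_with_hidden_rationale("What is 17 * 24?"))
```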

Google VLOGGER animates people from a single photo

Google researchers have presented VLOGGER, which can generate lifelike videos of a person speaking, gesturing, and moving from a single photo. This opens up a range of potential applications but also raises concerns about forgery and misinformation. Source: VentureBeat

EMO makes Mona Lisa sing

The Chinese research project EMO makes a photo (or a graphic, or a painting like the Mona Lisa) talk and sing. The facial expressions are quite impressive; the lip movements are not always convincing. Unfortunately, there is no way to try EMO for yourself.