Entropix: New AI technique improves reasoning by detecting uncertainty

Researchers at XJDR have developed a new technique called Entropix that aims to improve reasoning in language models by making smarter decisions when the model is uncertain, according to a recent blog post by Thariq Shihipar. The method uses adaptive sampling based on two metrics, entropy and varentropy, which quantify how uncertain the model's next-token probability distribution is.
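
To make the two metrics concrete, here is a minimal sketch of how entropy and varentropy can be computed from a model's next-token logits. The function name and implementation details are illustrative and are not taken from the Entropix codebase.

```python
import numpy as np

def entropy_and_varentropy(logits: np.ndarray) -> tuple[float, float]:
    """Compute entropy and varentropy of the next-token distribution.

    Entropy measures how spread out the probability mass is; varentropy
    is the variance of the per-token surprisal (-log p) under that same
    distribution, i.e. how unevenly the uncertainty is spread across
    candidate tokens.
    """
    # Numerically stable softmax over the logits.
    logits = logits - logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()

    # Surprisal (negative log-probability) of each candidate token.
    surprisal = -np.log(probs + 1e-12)

    # Entropy: expected surprisal under the distribution.
    entropy = float((probs * surprisal).sum())

    # Varentropy: variance of surprisal around the entropy.
    varentropy = float((probs * (surprisal - entropy) ** 2).sum())

    return entropy, varentropy
```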

The post explains that uncertainty in the model's predictions can have various causes, such as synonyms, branching paths, or genuine uncertainty due to a lack of training data. Depending on the level and type of uncertainty, Entropix chooses among different strategies for picking the next token, such as branching predictions, inserting "thinking" tokens, or adjusting the sampling temperature. While the technique has not yet been evaluated at scale, the post suggests it could be a promising direction for improving reasoning in language models without requiring huge budgets.
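
As a rough illustration of how such a strategy switch could look, the sketch below maps the (entropy, varentropy) pair to one of the decoding behaviors mentioned in the post. The thresholds and the specific quadrant-to-strategy mapping here are assumptions for illustration only, not the published Entropix rules.

```python
from enum import Enum, auto

class Strategy(Enum):
    ARGMAX = auto()            # model is confident: take the top token
    BRANCH = auto()            # a few strong candidates: explore branches
    INSERT_THINKING = auto()   # broadly unsure: give the model room to "think"
    HIGH_TEMPERATURE = auto()  # diffuse, scattered uncertainty: sample widely

# Illustrative thresholds; real values would need tuning per model.
ENTROPY_THRESHOLD = 1.0
VARENTROPY_THRESHOLD = 1.0

def choose_strategy(entropy: float, varentropy: float) -> Strategy:
    """Pick a decoding strategy from the (entropy, varentropy) quadrant.

    The mapping below is a hypothetical example of adaptive sampling,
    not the exact heuristic used by Entropix.
    """
    low_ent = entropy < ENTROPY_THRESHOLD
    low_var = varentropy < VARENTROPY_THRESHOLD

    if low_ent and low_var:
        return Strategy.ARGMAX           # one clearly preferred token
    if low_ent and not low_var:
        return Strategy.BRANCH           # a handful of competing continuations
    if not low_ent and low_var:
        return Strategy.INSERT_THINKING  # uniformly uncertain: pause and reason
    return Strategy.HIGH_TEMPERATURE     # uncertain and scattered: sample broadly
```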
