Google researchers develop AI model that can learn continuously

Researchers at Google have introduced a new AI paradigm called “Nested Learning” to address a major weakness in current large language models (LLMs). Ben Dickson reports for VentureBeat that this approach could enable AI systems to learn and update their knowledge continuously after their initial training.

Today’s LLMs are largely static. Their knowledge is limited to what they learned during pre-training and the information present in their immediate context window. According to the researchers, this is like a person who cannot form new long-term memories. Once a conversation exceeds the context window, that information is lost and cannot be used to update the model’s knowledge.

The Nested Learning paradigm reframes AI training as a system of interconnected learning problems optimized at different speeds, much as the human brain operates on multiple timescales: faster processes handle immediate information, while slower ones consolidate more abstract knowledge over time.

To test this concept, the team developed a new model named Hope. Hope uses a “Continuum Memory System” with multiple memory banks that update at different frequencies, allowing the model to keep incorporating new information after training. Initial experiments reportedly show that Hope outperforms standard transformer models on language and reasoning tasks, achieving higher accuracy and stronger results on tasks that require retrieving specific information from long texts.
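The full architecture is described in the researchers’ paper, but the basic idea of memory banks updating on different schedules can be sketched in a few lines of code. The snippet below is a simplified, hypothetical illustration only, not Google’s implementation: the class names, update intervals, and moving-average updates are invented for this example. A “fast” bank refreshes on every chunk of input, while a “slow” bank consolidates information less often.

```python
import numpy as np

# Hypothetical sketch of multi-frequency memory updates (not the actual Hope
# model): each "memory bank" keeps a running summary vector that is refreshed
# on its own schedule, so fast banks track recent input while slow banks
# consolidate information over longer stretches.

class MemoryBank:
    def __init__(self, dim, update_every, lr):
        self.state = np.zeros(dim)         # the bank's stored summary
        self.update_every = update_every   # steps between updates
        self.lr = lr                       # how strongly new input overwrites old state

    def maybe_update(self, step, chunk_embedding):
        # Only refresh this bank when its own interval comes around.
        if step % self.update_every == 0:
            self.state = (1 - self.lr) * self.state + self.lr * chunk_embedding


def process_stream(chunks, dim=8):
    # Two illustrative banks: one fast and volatile, one slow and consolidating.
    banks = [
        MemoryBank(dim, update_every=1, lr=0.5),    # short-term memory
        MemoryBank(dim, update_every=10, lr=0.1),   # longer-term memory
    ]
    for step, chunk in enumerate(chunks):
        for bank in banks:
            bank.maybe_update(step, chunk)
    # A model could condition on all bank states when answering a query.
    return [bank.state for bank in banks]


if __name__ == "__main__":
    stream = [np.random.randn(8) for _ in range(100)]  # stand-in for embedded text chunks
    fast_state, slow_state = process_stream(stream)
    print("fast bank:", fast_state[:3], "slow bank:", slow_state[:3])
```

The point of the sketch is only the timing: because the slow bank changes gradually and infrequently, recent input cannot immediately overwrite what it has accumulated, which is the intuition behind consolidating knowledge at different speeds.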

While this new approach may require fundamental changes to AI hardware, the researchers believe it could lead to more efficient and adaptable AI systems for real-world applications.
