New AI framework STAR reduces model cache size by 90 percent

MIT spinoff Liquid AI has developed STAR (Synthesis of Tailored Architectures), a framework that automatically designs AI model architectures which significantly outperform traditional Transformers. As reported by Carl Franzen for VentureBeat, the system uses evolutionary algorithms to generate and iteratively optimize candidate architectures, encoding each design as a “STAR genome” that can be recombined and mutated to explore designs tailored to specific requirements. Architectures evolved by STAR achieved a 90% reduction in cache size compared to traditional Transformers while matching or improving performance, and cut parameter counts by up to 13% while improving benchmark scores. The research team, including Armin W. Thomas and colleagues, has published its findings in a research paper, making the technique available to the broader AI research community.
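The article does not detail STAR's internals, but the general shape of an evolutionary architecture search can be sketched in a few lines. The sketch below is purely illustrative: the block types, cache costs, scoring weights, and fitness function are invented stand-ins, not Liquid AI's actual design space or evaluation metrics.

```python
import random

# Hypothetical genome: each gene picks a block type for one layer.
# The (quality proxy, cache cost) numbers are invented for illustration;
# STAR's real design space and metrics are far richer.
BLOCKS = {
    "attention": (1.0, 1.0),    # strong quality proxy, large KV cache
    "recurrence": (0.8, 0.1),   # much smaller cache, slightly weaker
    "convolution": (0.7, 0.05),
}

def random_genome(n_layers=12):
    return [random.choice(list(BLOCKS)) for _ in range(n_layers)]

def fitness(genome):
    # Reward quality, penalize cache footprint (weights are arbitrary).
    quality = sum(BLOCKS[g][0] for g in genome)
    cache = sum(BLOCKS[g][1] for g in genome)
    return quality - 0.5 * cache

def mutate(genome, rate=0.1):
    # Randomly swap out a small fraction of layer choices.
    return [random.choice(list(BLOCKS)) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=32, generations=50):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("best genome:", evolve())
```

Even in this toy form, the loop illustrates why such a search can trade cache size against quality: genomes that swap cache-heavy attention layers for cheaper alternatives score higher whenever the quality penalty is small, which is the kind of trade-off the reported 90% cache reduction reflects.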
