MIT spin-off Liquid AI unveils its highly efficient models

An MIT spin-off called Liquid AI has unveiled new AI models that are not based on the usual transformer architecture. According to a company statement, these “Liquid Foundation Models” (LFMs) already surpass comparable transformer-based models in both performance and efficiency. Instead of transformers, the developers drew on approaches from dynamical systems theory, signal processing, and numerical linear algebra.
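Liquid AI has not published the LFM architecture in detail, but the founders' earlier academic work on "liquid time-constant" (LTC) networks gives a flavor of what a dynamical-systems-based model looks like. The sketch below is a minimal, illustrative Euler integration of the published LTC update rule, not the actual LFM architecture; all dimensions and parameter names are hypothetical.

```python
import numpy as np

def ltc_step(x, u, W, U, b, A, tau, dt=0.05):
    """One Euler step of a liquid time-constant style cell (illustrative only).

    x : hidden state vector, u : input vector.
    The state relaxes toward A at an input-dependent rate, so the effective
    time constant of each unit adapts to the incoming signal.
    """
    f = np.tanh(W @ x + U @ u + b)           # state- and input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A      # LTC ordinary differential equation
    return x + dt * dxdt

# Toy usage: run a 4-unit cell over a short random input sequence.
rng = np.random.default_rng(0)
dim, in_dim = 4, 3
W = rng.normal(size=(dim, dim)) * 0.1
U = rng.normal(size=(dim, in_dim)) * 0.1
b, A, tau = np.zeros(dim), np.ones(dim), 1.0
x = np.zeros(dim)
for u in rng.normal(size=(10, in_dim)):
    x = ltc_step(x, u, W, U, b, A, tau)
print(x)
```

The key contrast with a transformer is that such a model carries a fixed-size state forward through time instead of attending over an ever-growing history of tokens.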

The LFMs come in three sizes: LFM-1.3B, LFM-3B, and LFM-40B MoE. They are said to be particularly memory-efficient: the LFM-3B model requires only 16 GB of memory, while Meta’s Llama-3.2-3B needs over 48 GB. According to Liquid AI, even the smallest variant, LFM-1.3B, outperforms Meta’s Llama-3.2-1.2B and Microsoft’s Phi-1.5 on many leading benchmarks, including MMLU, which tests multitask language understanding across a broad range of subjects.
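A large part of such memory differences comes from the transformer KV cache, which grows linearly with context length, whereas a fixed-state dynamical model does not accumulate per-token memory. The estimate below is a rough sketch under assumed, illustrative model parameters (layer count, KV heads, head dimension); it does not reproduce Liquid AI's published figures, only the scaling behavior.

```python
def transformer_kv_cache_gib(seq_len, n_layers=28, n_kv_heads=8,
                             head_dim=128, bytes_per_value=2):
    """Approximate KV-cache size in GiB for a decoder-only transformer.

    Each generated token stores one key and one value vector per layer,
    so the cache grows linearly with the sequence length.
    Parameters here are illustrative assumptions, not any vendor's specs.
    """
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
    return seq_len * per_token / 2**30

for seq_len in (2_048, 32_768, 262_144):
    print(f"{seq_len:>8} tokens -> {transformer_kv_cache_gib(seq_len):6.2f} GiB")
```

On top of this cache, the model weights themselves occupy a roughly constant amount of memory, which is why the gap between the two approaches widens as the context gets longer.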

With the new models, Liquid AI is targeting applications in areas such as finance, biotechnology, and consumer electronics. The models are not open source and are accessible only through selected platforms such as Lambda Chat or Perplexity AI. The company plans an official product launch on October 23, 2024, at MIT. Until then, Liquid AI intends to publish a series of technical blog posts and invites developers to test the models and provide feedback.
