Sakana AI introduces Continuous Thought Machines, a novel neural network that mimics brain processes

Sakana AI, co-founded by former Google AI scientists, has unveiled a new neural network architecture called Continuous Thought Machines (CTM). Unlike traditional transformer-based models that process information in parallel, CTMs incorporate a time-based dimension that mimics how biological brains operate, allowing for more flexible and adaptive reasoning.

The key innovation in CTMs is their treatment of neural timing and synchronization as fundamental aspects of computation. Each artificial neuron maintains a history of its previous states and uses a private “neuron-level model” to compute its next activation from that history. The pattern of synchronization between neurons over time then serves as the representation the system uses to observe its inputs and make predictions.

Key features of Continuous Thought Machines

  • A decoupled internal dimension called “ticks” that allows neural activity to unfold over time
  • Private neuron-level models that process the history of incoming signals
  • Neural synchronization as the primary representation for making decisions
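To make these three ideas concrete, here is a minimal, illustrative sketch of a CTM-style update loop. This is not Sakana AI's actual implementation: the dimensions, the linear per-neuron models (the real architecture uses small MLPs), and the fixed input vector are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 8   # number of neurons (assumed for illustration)
HISTORY = 4     # length of the signal history each neuron keeps
TICKS = 10      # internal "ticks", decoupled from the input's length

# Private neuron-level models: here, each neuron gets its own weight
# vector over its recent history (a stand-in for the paper's MLPs).
neuron_weights = rng.normal(size=(N_NEURONS, HISTORY))
recurrent = rng.normal(size=(N_NEURONS, N_NEURONS)) / np.sqrt(N_NEURONS)
x = rng.normal(size=N_NEURONS)  # toy stand-in for attended input features

history = np.zeros((N_NEURONS, HISTORY))   # rolling signal history
activations = np.zeros((TICKS, N_NEURONS))

for t in range(TICKS):
    # 1. Pre-activations: recurrent drive from other neurons plus input.
    pre = recurrent @ np.tanh(history[:, -1]) + x
    # 2. Shift each neuron's private history and append the new signal.
    history = np.roll(history, -1, axis=1)
    history[:, -1] = pre
    # 3. Each neuron's private model maps its own history to its output.
    activations[t] = np.einsum("nh,nh->n", neuron_weights, np.tanh(history))

# 4. Pairwise synchronization across ticks (inner products of activation
#    traces) forms the representation a readout would consume.
sync = activations.T @ activations / TICKS   # shape: (N_NEURONS, N_NEURONS)
print(sync.shape)
```

The key structural point the sketch captures is that downstream decisions would be read out of `sync`, a function of how neurons co-vary over internal ticks, rather than from a single activation vector.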

According to Sakana AI’s researchers, this approach bridges the gap between modern AI efficiency and the biological plausibility of brain-like computation. In demonstrations ranging from image classification to maze solving, CTMs have shown the ability to adaptively allocate computational resources based on task complexity.
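One way such adaptive allocation can work is to keep ticking only until the model's prediction is confident enough. The sketch below is a hedged illustration of that halting idea, not Sakana AI's method: `logits_at_tick` is a hypothetical stand-in for a per-tick prediction head, and the threshold is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    # Shannon entropy in nats; lower means more confident.
    return float(-np.sum(p * np.log(p + 1e-12)))

def logits_at_tick(t, n_classes=10):
    # Toy stand-in: predictions sharpen as internal ticks accumulate.
    logits = rng.normal(size=n_classes)
    logits[3] += 0.8 * t
    return logits

MAX_TICKS, THRESHOLD = 50, 0.5   # assumed hyperparameters
for t in range(1, MAX_TICKS + 1):
    z = logits_at_tick(t)
    p = np.exp(z - z.max())
    p /= p.sum()                  # softmax over classes
    if entropy(p) < THRESHOLD:    # confident enough: stop thinking early
        break
print("stopped after", t, "ticks")
```

An easy input lets the loop exit after a few ticks, while a harder one (slower-sharpening logits) would consume more, which is the behavior the researchers describe.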

While CTMs don’t yet match state-of-the-art performance on benchmarks like ImageNet, they offer advantages in interpretability and calibration. The architecture is open-sourced on GitHub with pretrained models and interactive demonstrations available.
