Sakana, a Tokyo-based AI startup, has introduced a new AI model architecture called Continuous Thought Machines (CTMs), designed to function more like human brains. Unlike traditional Transformer models, which process inputs in fixed, parallel layers, CTMs unfold computation over internal time steps, letting each neuron decide its activation from a short-term memory of its own recent activity. This is intended to enhance flexibility and reasoning depth, though CTMs remain largely experimental and are not yet optimized for commercial use. Sakana offers open-source tools for researchers to explore CTMs, highlighting the architecture's potential for adaptive tasks and improved interpretability in AI systems.
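The core idea described above can be illustrated with a toy sketch: a neuron that keeps a short history of its own recent pre-activations and computes its output from that whole history rather than from the current input alone. This is a hypothetical, simplified illustration of the concept, not Sakana's implementation; the class name `HistoryNeuron`, the history length, and the use of random weights are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class HistoryNeuron:
    """Toy neuron whose activation depends on a short-term memory of
    its own recent pre-activations (a sketch of the CTM idea, not
    Sakana's actual architecture)."""

    def __init__(self, history_len=5):
        self.history = np.zeros(history_len)   # short-term memory buffer
        self.w = rng.normal(size=history_len)  # per-neuron weights over history

    def step(self, pre_activation):
        # Shift in the newest pre-activation, dropping the oldest entry.
        self.history = np.roll(self.history, -1)
        self.history[-1] = pre_activation
        # The activation is a nonlinear function of the whole history,
        # not just the current input.
        return float(np.tanh(self.w @ self.history))

# Unfold the same input over several internal "ticks": the output
# evolves as the neuron's memory fills, unlike a stateless unit.
neuron = HistoryNeuron()
outputs = [neuron.step(1.0) for _ in range(5)]
```

Feeding the identical input repeatedly still produces changing outputs, because each tick updates the neuron's internal memory; this time-dependence is what distinguishes the approach from a conventional stateless activation function.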
Table of contents
How CTMs differ from Transformer-based LLMs
Using variable, custom timelines to provide more intelligence
Early results: how CTMs compare to Transformer models on key benchmarks and tasks
What’s needed before CTMs are ready for enterprise and commercial deployment?
What enterprise AI leaders should know about CTMs
Sakana’s checkered AI research history
Betting on evolutionary mechanisms