Researchers from Goodfire.ai discovered that AI language models process memorization and reasoning through separate neural pathways. When memorization circuits were removed, models lost 97% of their ability to recite training data verbatim while retaining their logical reasoning capabilities. Surprisingly, arithmetic relies on the memorization pathways rather than the logic circuits, which helps explain why language models struggle with math: they recall arithmetic facts from memory instead of computing them. The study used OLMo-7B to demonstrate this mechanistic split, showing distinct activation patterns for memorized versus general text at specific neural network layers.
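The recall-versus-compute distinction can be illustrated with a toy contrast (purely illustrative, not the study's method; `lookup_add` and `procedural_add` are hypothetical names): a lookup-based "memorizer" only answers arithmetic facts it has stored, while a procedural adder generalizes to operands it has never seen.

```python
# Toy sketch (assumption, not from the study): memorization as table lookup
# versus reasoning as computation.

# "Training data": addition facts for operands 0..9, stored as a lookup table.
FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def lookup_add(a, b):
    """Memorization pathway: recall the stored fact, or fail if it was never seen."""
    return FACTS.get((a, b))  # None for queries outside the memorized table

def procedural_add(a, b):
    """Reasoning pathway: compute the result, so arbitrary operands work."""
    return a + b

# In-distribution query: both pathways agree.
print(lookup_add(3, 4), procedural_add(3, 4))        # 7 7
# Out-of-distribution query: the memorizer has no stored fact to recall.
print(lookup_add(123, 456), procedural_add(123, 456))  # None 579
```

In this caricature, "removing the memorization circuit" corresponds to deleting the table: recitation of stored facts collapses, but anything computed procedurally is unaffected.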