Peter Naur's 1985 essay "Programming as Theory Building" argued that the core output of software engineering is the mental model (the "theory") engineers build of a system, not the code itself. This post applies that framework to AI-assisted coding. Using AI agents does reduce the detail of a developer's mental model, but every mental model already abstracts away some details; the question is one of degree, not kind. The author argues that in practice only about 10% of AI agent output makes it into production, because the developer's theory is constantly used to filter and reject agent suggestions. On whether AI agents can build their own Naur theories, the author sees evidence that they can, at least locally and for common patterns, but the key limitation is that agents cannot retain theories across sessions: they must reconstruct their understanding from scratch every time. The next major breakthrough in AI coding agents will likely involve some form of persistent theory-building, whether through weight modification, continuous learning, or extended context windows.
Table of contents
Do LLMs let you skip theory-building?
Can LLMs build Naur theories?
Retaining theories is better than building them