AI agents need more than facts: they need institutional memory that captures why decisions were made. Using three variants of the classic Zebra Puzzle as a framework, the post explores three layers of agent reasoning, including strict logical constraints (rules engines) and natural-language ambiguity that requires probabilistic reinterpretation.
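The rules-engine layer can be illustrated with a brute-force constraint check over a miniature Zebra-style puzzle. This is a hedged sketch, not the post's implementation; the three-house puzzle and its constraints below are invented purely for illustration:

```python
from itertools import permutations

# Miniature Zebra-style puzzle (hypothetical constraints, for illustration only):
# three houses (indices 0..2), each with a unique color and a unique pet.
# Constraints: (1) the zebra lives next to the blue house,
#              (2) house 0 is red, (3) the dog lives in the red house.
def solve():
    for colors in permutations(["red", "green", "blue"]):
        if colors[0] != "red":                         # constraint 2
            continue
        for pets in permutations(["zebra", "dog", "snail"]):
            if pets[colors.index("red")] != "dog":     # constraint 3
                continue
            # constraint 1: zebra's house is adjacent to the blue house
            if abs(pets.index("zebra") - colors.index("blue")) == 1:
                return list(zip(colors, pets))
    return None

print(solve())
```

Exhaustive search like this is only feasible for toy instances; a real rules engine or constraint solver prunes the search space instead of enumerating every assignment.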
Table of contents
- Where does the Zebra live?
- Why Graphs? Knowledge and Thinking in One Structure
- What's coming next
- Where This Goes Next