Slack Engineering shares how they manage context in long-running multi-agent security investigation systems. The core challenge is that LLM APIs are stateless, and complex investigations spanning hundreds of inference requests can exhaust context windows. Their solution uses three complementary context channels: a Director's Journal, the Critic's review tools, and the message history.
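A minimal sketch of the journal idea described above: since each LLM API call is stateless, durable context is kept in an external store and re-injected into every prompt under a size budget. All names here (`Journal`, `build_prompt`) are illustrative assumptions, not Slack's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    """Hypothetical persistent context channel for a long-running investigation."""
    entries: list[str] = field(default_factory=list)
    max_chars: int = 4000  # crude budget so the journal never exhausts the context window

    def record(self, note: str) -> None:
        # Each agent step appends a durable note instead of relying on chat state.
        self.entries.append(note)

    def render(self) -> str:
        # Keep the most recent entries that fit the budget, in original order.
        kept, used = [], 0
        for note in reversed(self.entries):
            if used + len(note) > self.max_chars:
                break
            kept.append(note)
            used += len(note)
        return "\n".join(reversed(kept))


def build_prompt(journal: Journal, task: str) -> str:
    # Every stateless inference request re-injects the journal as context.
    return (
        f"Investigation journal so far:\n{journal.render()}\n\n"
        f"Current task: {task}"
    )


journal = Journal()
journal.record("Step 1: flagged anomalous login from host-a")
journal.record("Step 2: correlated with VPN logs; no match found")
print(build_prompt(journal, "decide which evidence source to query next"))
```

Because the journal is truncated oldest-first, the model always sees the freshest findings even after hundreds of steps; a real system would summarize evicted entries rather than drop them.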

15m read time · From slack.engineering
Table of contents

- The Challenge of Long-run Coherence
- The Director's Journal
- The Critic's Review Tools
- Annotated Findings
- Critic's Timeline
- Event Sequence
- Evidence Gaps
- Message History
- Conclusion
