A conference talk exploring the 'AI error loop' phenomenon where developers using AI coding assistants like GitHub Copilot or ChatGPT get trapped in cycles of bug fixes that introduce new bugs. The core problem is context drift: accepted AI suggestions become future context, causing models to optimize for consistency over correctness. Key concepts covered include context engineering vs. prompt engineering, context rot (performance degradation from too much irrelevant context), and three strategies to break the loop: resetting context frequently, anchoring the model to an external source of truth (e.g., a GitHub Copilot instructions file), and maintaining human judgment to verify AI outputs. The central warning is that AI models fail silently by agreeing rather than loudly by refusing.
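To illustrate the second strategy, a repository-level Copilot instructions file (conventionally `.github/copilot-instructions.md`) can serve as the external source of truth that every suggestion is checked against, rather than the drifting conversation history. The contents below are a hypothetical sketch; the project conventions shown are invented for illustration.

```markdown
<!-- .github/copilot-instructions.md — hypothetical example of an
     external source of truth the model is re-anchored to -->
# Project conventions

- All database access goes through the repository layer in `src/db/`;
  never inline SQL in request handlers.
- Return typed errors; do not swallow exceptions.
- Every bug fix must include a regression test before it is accepted.
- Prefer small, reviewable changes over sweeping refactors.
```

Because this file is re-read on each request rather than accumulated like chat history, it is not subject to the context rot that degrades long conversations.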