A mental model for building trust in coding agents by constructing an 'outer harness' of feedforward guides and feedback sensors. Guides steer agents before they act (e.g., AGENTS.md, codemods, skills), while sensors detect issues after the fact and enable self-correction (e.g., linters, structural tests, AI code review).
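To make the guide/sensor split concrete, here is a minimal Python sketch of such a harness loop. All names (Harness, Guide, Sensor, run) are hypothetical illustrations rather than the article's implementation: guides transform the task before the agent acts (feedforward), and sensors inspect the output afterwards, feeding any issues back to the agent for self-correction (feedback).

```python
# Hypothetical sketch of an 'outer harness': feedforward guides steer the
# agent before it acts; feedback sensors check its output and drive
# self-correction. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable

Guide = Callable[[str], str]          # feedforward: enriches the task prompt
Sensor = Callable[[str], list[str]]   # feedback: returns issues found in output


@dataclass
class Harness:
    guides: list[Guide] = field(default_factory=list)
    sensors: list[Sensor] = field(default_factory=list)

    def run(self, agent: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
        # Feedforward: apply each guide to steer the agent before it acts.
        prompt = task
        for guide in self.guides:
            prompt = guide(prompt)
        output = agent(prompt)
        # Feedback: run sensors after the fact; loop until clean or budget spent.
        for _ in range(max_rounds):
            issues = [msg for sensor in self.sensors for msg in sensor(output)]
            if not issues:
                break
            output = agent(prompt + "\nFix these issues:\n" + "\n".join(issues))
        return output
```

In this framing, an AGENTS.md file is just a guide that prepends project conventions to the prompt, and a linter is a sensor whose findings re-enter the loop as correction instructions.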

13m read time
From martinfowler.com
Table of contents
Metaphors only go so far
Feedforward and Feedback
Computational vs Inferential
How does harness engineering relate to context engineering?
The steering loop
Timing: Keep quality left
Regulation categories
Harnessability
Ambient affordances
Harness templates
Ashby's Law
The role of the human
A starting point - and open questions
