Jay Wengrow, author of 'A Common-Sense Guide to AI Engineering', discusses how to build production-ready LLM-powered applications. He explains the mechanics behind AI agents and tool use, describing how special notations in system prompts allow LLMs to trigger real functions. Topics covered include guardrails for filtering undesirable output, multi-agent architectures for complex tasks, context management, and when to use frameworks versus building from scratch. Wengrow advocates for understanding fundamentals over relying on frameworks, noting that many developers have migrated away from orchestration frameworks due to debugging difficulties and lack of customization. He also shares his rule of thumb: reach for a framework only when it can do something better than you can, not just faster.
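The tool-use mechanism mentioned above can be sketched in a few lines: the application watches the model's output for a special notation and, when it appears, dispatches to a real function. This is a minimal illustration, not Wengrow's implementation; the notation, tool names, and parsing format are all hypothetical.

```python
import json

# Hypothetical tool the model is told about in the system prompt.
# The name and signature are illustrative, not from the talk.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """If the model's text begins with the special notation
    'TOOL_CALL: <name> <json-args>', run the matching real function;
    otherwise treat the text as a plain answer for the user."""
    if model_output.startswith("TOOL_CALL:"):
        _, name, raw_args = model_output.split(" ", 2)
        return TOOLS[name](**json.loads(raw_args))
    return model_output

# The model "decided" to call a tool:
print(dispatch('TOOL_CALL: get_weather {"city": "Paris"}'))  # Sunny in Paris
# The model answered directly:
print(dispatch("It depends on the season."))
```

In a full agent loop, the tool's return value would be appended to the conversation and the model invoked again, repeating until it produces a plain answer.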

26m watch time