Vercel tested two approaches for teaching AI coding agents about Next.js 16 APIs: skills (on-demand retrieval) and AGENTS.md (passive context). An 8KB compressed docs index in AGENTS.md achieved a 100% pass rate on evals, while skills maxed out at 79% even with explicit instructions. Skills weren't reliably triggered (56% never …
8m read time • From vercel.com
Table of contents
- The problem we were trying to solve
- Two approaches for teaching agents framework knowledge
- We started by betting on skills
- Skills weren't being triggered reliably
- Explicit instructions helped, but wording was fragile
- Building evals we could trust
- The hunch that paid off
- The results surprised us
- Addressing the context bloat concern
- Try it yourself
- What this means for framework authors
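The winning approach described in the summary is a compressed docs index embedded in AGENTS.md. This excerpt doesn't show the format Vercel actually used, so the following is only a hypothetical sketch of what such an index entry might look like; the file paths and one-line descriptions are illustrative assumptions, not the real index:

```markdown
<!-- AGENTS.md — hypothetical compressed docs index (paths and summaries are illustrative) -->
## Next.js 16 docs index
Before using any API below, read its linked doc file.
- `after()` — run work after the response streams to the client → docs/api/after.md
- `connection()` — opt a route into dynamic rendering → docs/api/connection.md
- `cacheLife()` / `cacheTag()` — cache profiles and tag-based invalidation → docs/api/cache.md
```

The idea is that a passive index like this always sits in the agent's context, so the agent never has to decide whether to trigger a retrieval step before it sees a pointer to the relevant doc.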