LLM-generated "skills" (explanatory prompts for specific tasks) work better when created after solving a problem rather than before. A recent paper found that pre-generated skills provide no benefit because they bake in incorrect assumptions from training data. The effective approach is to have the LLM solve the problem through iteration first, then distill that learned experience into a reusable skill document. This captures knowledge gained from millions of tokens of problem-solving rather than just regurgitating existing training data.
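The workflow the post describes — iterate until the problem is actually solved, then distill the transcript into a skill — can be sketched roughly as below. `call_llm` is a hypothetical stand-in for any chat-completion client (not named in the post), stubbed here so the sketch runs without an API:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, stubbed for illustration only.
    if "Distill" in prompt:
        return "SKILL: verify assumptions against the actual environment first."
    return "candidate answer"

def solve_with_iteration(task: str, check, max_tries: int = 5):
    """Phase 1: let the model iterate until its answer passes `check`,
    keeping the full transcript of attempts as the raw experience."""
    transcript = []
    for _ in range(max_tries):
        answer = call_llm(f"{task}\nPrevious attempts: {transcript}")
        transcript.append(answer)
        if check(answer):
            return answer, transcript
    return None, transcript

def distill_skill(task: str, transcript: list) -> str:
    """Phase 2: after solving, compress the session into a reusable
    skill document -- the post-hoc step the post argues is what works."""
    return call_llm(
        f"Distill a reusable skill from this session.\n"
        f"Task: {task}\nTranscript: {transcript}"
    )

answer, log = solve_with_iteration("example task", check=lambda a: bool(a))
skill = distill_skill("example task", log)
```

The key design point is the ordering: the skill document is produced from the solved transcript, never written up front, so it records what the iteration actually taught rather than prior assumptions.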

From seangoedecke.com (4 min read)