A senior developer's personal journey through the five stages of grief regarding LLMs in software development, arriving at a nuanced acceptance. Key insights: LLMs excel at documentation, tests, and cloning existing patterns, but fail on novel or complex architecture without expert guidance. Context (RAG) matters far more than model choice. Models operate as finite state machines with no real memory. Benchmarks are largely meaningless, and model performance varies dramatically by language. Practical tips include forcing models to plan before coding, monitoring reasoning traces in real time, using the language best represented in the training data, and even being polite. The author is skeptical of hype from non-developers and argues that understanding how LLMs work under the hood is a prerequisite for using them effectively.

10 min read · From rocket-science.ru
Table of contents:
Tests and Benchmarks
Context
Plans and Reasoning
Language
Politeness
