Why agents DO NOT write most of our code - a reality check


An AI engineer shares hands-on experience testing coding agents like Cursor and Claude Code during a week-long feature implementation. Despite industry claims of 25-80% AI-generated code, the experiment revealed significant limitations: agents produced thousands of lines requiring extensive review, ignored coding conventions, made

10m read time · From dev.to
Table of contents

- Experimenting with coding agents in day to day coding
- The feature we tried to build (with AI)
- First try: Running wild
- Take two: Smaller, incremental changes
- The issues that really matter
- The good parts of coding agents