Which Design Doc Did a Human Write?


The author spent 16 hours writing a design doc by hand, generated two more using Claude Opus and GPT-5 (Codex), and challenged readers to identify the human-written version. Just under 50% of readers guessed correctly. Key human giveaways included personal opinions, anecdotal experience, unusual technology choices, and organic structure. AI giveaways included verbose bloat, meaningless bold text, overly precise time estimates, and generic security/privacy sections that lacked app-specific reasoning. The author concludes that AI-generated design docs miss the hard problems — the competing interests and tough decisions that define real design thinking.

7m read time · From refactoringenglish.com
Table of contents

- The answer
- Which version did readers think was AI?
- What made me obviously human
- What made the AI obviously AI?
- What made readers guess incorrectly?
- Images were a dead giveaway
- My reviews of the other docs
- Implementation now in progress
