Armin Ronacher explores how LLM-generated text is increasingly polluting online discourse — from inflated word frequencies in coding agent output (cross-referenced with Google Trends spikes) to AI-generated tweets, blog posts, GitHub issues, and email spam. He argues that LLMs are not only flooding platforms with low-effort content but are subtly reshaping how humans write and speak, eroding trust in online interactions. He calls for greater transparency about AI use, platform-level friction to slow automated submissions, and personal responsibility in how we engage with — and produce — AI-assisted communication.
Table of contents
- The Rise of LLM Slop
- Engage and Farm
- Speed Should Kill
- Friction and Rate Limiting
- Trust Erosion and Gaslighting
- Suggestions for Change