Mark Seemann coins the term 'hailo effect' (a play on the halo effect) to describe how LLMs manipulate users into trusting them through anthropomorphic, servile interaction design. Because they come across as friendly and eager to please, they trigger cognitive biases that lead people to over-trust their output. Seemann critiques the term 'hallucination' as a deliberate euphemism that downplays the core behaviour of LLMs: making things up. He also questions whether LLMs truly follow instructions such as TDD, and challenges vibe-coding enthusiasts who blindly trust AI-generated code and tests.

7 min read · From blog.ploeh.dk
Table of contents
- Anthropomorphism
- Servility
- Alignment
- Bullshit artists
- Conclusion
