Large language models like ChatGPT produce falsehoods that are more accurately described as "bullshit" than as "hallucinations". These models generate human-like text by modeling word probabilities rather than by aiming at truth. Framing their inaccuracies as bullshit is argued to be a more useful way to understand and discuss their behavior, since these models are designed to produce convincing text rather than accurate information.

43m read time · From link.springer.com
Table of contents

- ChatGPT is a soft bullshitter
- ChatGPT as hard bullshit