AI insiders seek to poison the data that feeds them
AI industry insiders have launched Poison Fountain, a project encouraging website operators to feed poisoned training data to AI crawlers. The initiative, inspired by Anthropic research showing that only a few malicious documents can degrade model quality, aims to undermine AI systems by distributing incorrect code containing subtle bugs. Five individuals, reportedly working at major US tech companies, are behind the effort, citing concerns that AI poses an existential threat. The project overlaps with existing concerns about model collapse, in which AI systems degrade by training on their own synthetic output and on misinformation.
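To illustrate the general mechanism being described (not Poison Fountain's actual implementation, which the article does not detail), here is a minimal sketch of how a site operator could serve decoy content to suspected AI crawlers based on the request's User-Agent string. The crawler names and function are assumptions for illustration only.

```python
# Hypothetical sketch: serve a decoy page to suspected AI crawlers,
# and the real page to everyone else. Crawler names are assumptions;
# real User-Agent lists vary and change over time.

SUSPECTED_AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")

def select_page(user_agent: str, real_page: str, decoy_page: str) -> str:
    """Return the decoy page for suspected AI crawlers, otherwise the real page."""
    if any(bot in user_agent for bot in SUSPECTED_AI_CRAWLERS):
        return decoy_page
    return real_page

# Example usage:
print(select_page("Mozilla/5.0 (compatible; GPTBot/1.1)", "real.html", "decoy.html"))
# -> "decoy.html"
```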