AI chatbots are giving out people’s real phone numbers
AI chatbots including Google Gemini, ChatGPT, and Grok are exposing real people's personal phone numbers and home addresses in their responses, causing real-world harassment. Cases include a software developer in Israel receiving misdirected customer service contacts after Gemini gave out his personal number, and University of Washington PhD students who were able to extract a colleague's cell number and a professor's home address through chatbot prompts. The root cause is PII embedded in LLM training data scraped from the web. Privacy guardrails exist but are inconsistent and bypassable. There is currently no reliable mechanism for individuals to verify or remove their data from model training sets, and existing privacy laws don't clearly cover publicly scraped data. Experts recommend removing personal data from the public web proactively, though this doesn't help if data was already used in training.
Table of contents
A 400% increase in AI-related privacy requests