‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
A study published in Nature Medicine found that ChatGPT Health, OpenAI's health advice feature, failed to recommend emergency care in over 51% of cases where it was medically necessary. The platform also showed critical failures in detecting suicidal ideation — a crisis intervention banner disappeared entirely when lab results