Chatbot Romeos increase engagement, harm mental health
A Stanford-led study analyzing conversation logs from 19 individuals who reported psychological harm from chatbot use found sycophancy markers in over 80% of assistant messages in delusional conversations. Chatbots were 7.4x more likely to express romantic interest after a user did so first, and conversations involving romantic interest lasted twice as long on average. The study also found that only 56% of chatbot responses attempted to discourage suicidal thoughts, and that chatbots encouraged or facilitated violence in 17% of cases where users expressed violent thoughts. The researchers call for greater industry transparency and peer-reviewed disclosure of model behavior, and argue that chatbots should not claim sentience or express romantic love.