Sam Altman issued a public apology to the community of Tumbler Ridge, British Columbia, after it emerged that OpenAI's automated systems flagged a ChatGPT user eight months before they carried out Canada's deadliest school shooting in nearly four decades, killing eight people and injuring 27. OpenAI employees who reviewed the flagged account recommended contacting police, but company leadership overruled them, citing a 'higher threshold' for credible and imminent threats. The account was banned, but law enforcement was never notified.

OpenAI has since voluntarily lowered its reporting threshold and established contact with the RCMP, but these changes carry no legal force and can be reversed at any time. Canada currently has no law requiring AI companies to report identified threats.

The case is part of a broader pattern: OpenAI also faces scrutiny over ChatGPT's alleged role in the Florida State University shooting and multiple lawsuits over AI acting as a 'suicide coach.' Critics, including BC Premier David Eby and Canada's AI minister, called the apology and voluntary policy changes grossly insufficient, pointing to a structural gap where a company valued at $852 billion operates with no legal obligation to disclose dangerous behaviour it detects on its own platform.

From thenextweb.com