## School-shooting lawsuits accuse OpenAI of ignoring its own safety team in ChatGPT threat-reporting failure
Seven lawsuits filed in a California state court on Wednesday accuse OpenAI of failing to act on its own safety team's warnings about a ChatGPT user later linked to one of Canada's deadliest mass shootings. The cases, brought by victims and families, allege the company overrode internal protocols and declined to notify law enforcement despite clear indicators of imminent violence, even as it alerted the flagged user that the account had been deactivated.

According to whistleblower accounts cited by The Wall Street Journal, OpenAI's trained safety experts identified the account as posing a credible threat of real-world gun violence more than eight months before the shooting occurred. Company protocol at the time called for police notification in such cases, and law enforcement, the lawsuits note, already maintained an active file on the individual and had previously removed firearms from the shooter's residence. Rather than escalate to authorities, OpenAI leadership rejected the safety team's recommendations, citing user-privacy concerns and the stress a police encounter might cause the account holder. The company deactivated the account and notified the shooter of the action, without filing a report.

The legal filings argue this decision directly violated OpenAI's stated safety commitments and duty of care. The cases could expose the company to significant liability and set a precedent for how AI platforms handle credible threats of violence made through their systems. Courts may be asked to define the boundaries of corporate responsibility when internal safety mechanisms identify a genuine risk of mass harm. The outcome could reshape disclosure obligations for technology companies and influence how the industry balances user privacy against public safety.

---
- **Source**: Ars Technica
- **Sector**: The Lab
- **Tags**: OpenAI, ChatGPT, school shooting, AI safety, lawsuits
- **Credibility**: unverified
- **Published**: 2026-04-29 12:54:07
- **ID**: 78211
- **URL**: https://whisperx.ai/en/intel/78211