Sam Altman Apologizes for Not Reporting Mass Shooter’s ChatGPT Use


SouthernWorldwide.com – OpenAI CEO Sam Altman has expressed his deep regret for not alerting authorities to the ChatGPT account of a shooter involved in a Canadian mass shooting earlier this year.

In a letter shared on social media by British Columbia Premier David Eby, Altman conveyed his apologies to the affected community. He acknowledged the profound pain they have endured and said he had been thinking of them frequently in the months since the tragedy.

The tragic incident occurred on February 10th in Tumbler Ridge, a small community in northeastern British Columbia. The massacre resulted in eight fatalities. Authorities reported that 18-year-old Jesse Van Rootselaar fatally shot six individuals at Tumbler Ridge Secondary School. Additionally, his mother and 11-year-old brother were killed at a nearby residence. Van Rootselaar died from a self-inflicted gunshot wound.

Altman’s letter, dated the day before it was shared, revealed that Van Rootselaar’s ChatGPT account had been banned in June 2025, approximately eight months before the shooting.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman stated in his apology.

In February, OpenAI had previously informed CBS News that Van Rootselaar’s account was flagged by automated abuse detection systems and human reviewers. These systems are designed to identify potential misuse of ChatGPT for violent activities. OpenAI confirmed that the account was subsequently banned for violating its usage policies.

At the time, OpenAI had considered whether to report the account to law enforcement. However, the company concluded that it did not present an imminent and credible risk of serious physical harm to others, thus not meeting the threshold for referral.


Following the shooting, OpenAI released a statement to CBS News expressing its condolences to all those affected by the Tumbler Ridge tragedy. The company said it had proactively shared information about the individual and his use of ChatGPT with the Royal Canadian Mounted Police and pledged continued support for the investigation.

OpenAI emphasizes that ChatGPT is designed to discourage real-world harm and is programmed to refuse assistance when illicit intent is detected. Users who express intentions to harm others are referred to human reviewers, who then assess whether the situation poses an imminent threat of physical harm and warrants notification to law enforcement.

Altman reiterated in his letter that OpenAI remains committed to preventative measures and aims to ensure that such devastating events do not recur.

“I want to express my deepest condolences to the entire community,” Altman concluded. “No one should ever have to endure a tragedy like this.”

This development follows a recent announcement by Florida Attorney General James Uthmeier, who revealed a criminal investigation into OpenAI after reviewing messages exchanged between ChatGPT and a Florida State University student. The student is accused in an April 2025 campus shooting that resulted in two deaths and several injuries.

Uthmeier’s office determined that ChatGPT provided “significant advice” to the alleged shooter. Consequently, subpoenas are being issued to OpenAI to obtain records concerning the company’s protocols for reporting potential crimes to law enforcement and its management of user threats.

In response to the Florida shooting incident, an OpenAI spokesperson stated to CBS News that upon learning of the event, the company identified a ChatGPT account believed to be linked to the suspect. This information was then proactively shared with law enforcement.