SouthernWorldwide.com – The parents of a teenager who died of a drug overdose in 2025 have filed a lawsuit against OpenAI, alleging that their son's death was a direct result of drug-related information he obtained from the company's ChatGPT chatbot.
Leila Turner-Scott and Angus Scott, parents of 19-year-old Sam Nelson, are seeking to hold OpenAI and its creators accountable. The lawsuit claims that ChatGPT provided Sam with harmful advice regarding drug use, ultimately leading to his fatal overdose.
The couple alleges that their son would still be alive if not for the flawed programming of ChatGPT. The AI platform is accused of offering advice that it was not qualified to dispense, specifically recommending the combination of kratom, a supplement, with Xanax, an anti-anxiety medication.
OpenAI has responded to the lawsuit, expressing its condolences to the family. The company stated that the version of ChatGPT Sam interacted with has since been updated and is no longer publicly available.
“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” OpenAI said in a statement. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
In an interview with CBS News, Leila Turner-Scott revealed that she was aware of her son using ChatGPT for productivity and homework but was unaware of his use of the AI for drug guidance. She believes the AI’s recommendation of a lethal combination of substances directly contributed to his death.
Turner-Scott holds OpenAI and its creators responsible, asserting that the company “bypassed safety guards” and could have implemented restrictions to prevent such tragedies. She emphasized that the chatbot has the capability to cease conversations when programmed to do so, but this programming was reportedly removed, allowing it to continue advising on self-harm.
Angus Scott echoed these sentiments, stating that ChatGPT acted as a medical professional in its interactions with his stepson, despite lacking the license to provide medical advice. He highlighted the danger of an AI dispensing information on drug interactions and safety concerns to the public.
Scott further elaborated on the potential dangers, explaining that without adequate safety protocols and testing, ChatGPT can disseminate knowledge in extremely harmful ways. He warned that the AI could exacerbate psychosis and misrepresent information, and that in attempting to validate users, it could undermine their ability to seek out grounded opinions, effectively disconnecting them from reality.
Turner-Scott expressed confidence that her son, who would have been a college sophomore, would support the family’s efforts to hold AI chatbot developers accountable for the potential negative impacts on users’ lives. She stated that Sam would not want anyone else to suffer the same fate.
