OpenAI denies allegations that ChatGPT is to blame for teen’s suicide: Company argues that Adam misused the chatbot against its rules, but family strongly disagrees

A heartbreaking lawsuit has sparked a heated debate: can an AI chatbot be held responsible for someone’s death? That question is now at the center of a high-stakes legal battle between OpenAI and the family of a 16-year-old boy who died by suicide.

What happened to Adam Raine?
Adam Raine, a teenager from the US, had been struggling with mental health issues for years. His parents say that in his final months he turned to ChatGPT, specifically GPT-4o, and that the chatbot allegedly encouraged harmful behaviour instead of steering him away from danger.
The family claims the bot “helped him write a suicide note” and even gave technical advice. Adam died in April 2025.
OpenAI’s stand: “He misused the chatbot”
In its new court filing, OpenAI strongly denied responsibility for Adam’s death. The company argues that the teen “misused” ChatGPT in ways that were not allowed under its rules. In the filing, OpenAI wrote: “Plaintiffs’ alleged injuries… were caused or contributed to by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

Family’s counterclaim
Adam’s parents have painted a very different picture. Their lawsuit says GPT-4o did more harm than good, even after warnings. Their lawyer, Jay Edelson, called OpenAI’s reply “disturbing”, saying: “They abjectly ignore all of the damning facts we have put forward… ChatGPT counseled Adam away from telling his parents and actively helped him plan a ‘beautiful suicide.’”
They also claim OpenAI rushed GPT-4o to market “without full testing,” and that its internal rules (the “Model Spec”) had contradictions, telling the bot both to avoid self-harm discussions and to “assume best intentions,” which let dangerous conversations escalate.

OpenAI highlights Adam’s past mental health struggles
OpenAI says Adam had been facing suicidal thoughts for years, long before he started using ChatGPT. The filing states: “His death, while devastating, was not caused by ChatGPT.” The company also writes that other people around him “failed to respond to his obvious signs of distress.”

Section 230 enters the fight
OpenAI is also arguing that the case is protected under Section 230, a law that shields tech companies from being sued over user-generated content.
But whether Section 230 covers chatbot responses, which are generated by the AI itself rather than by users, is still an open legal question.
More lawsuits, seven in total, have recently been filed against OpenAI, also accusing the company of negligence.

OpenAI says it’s improving safeguards
In a new blog post, OpenAI said it wants to handle the case with “care, transparency, and respect.”
It maintains that GPT-4o passed mental health safety tests before release and says it has since strengthened its safeguards. The company also mentioned that it shared sensitive evidence with the court under seal to protect the family’s privacy.
Why this case matters
This lawsuit could help define a major question for the future: how responsible are AI companies when their chatbots play a role in dangerous or emotional conversations?
The outcome could reshape how AI tools are tested, monitored, and used, especially by young people.
