San Francisco artificial intelligence giant OpenAI weakened its chatbots’ anti-suicide protections in the run-up to the death of teenager Adam Raine, according to new claims in a lawsuit by the boy’s parents. Adam, 16, allegedly took his own life in April with the encouragement of OpenAI’s flagship product, the ChatGPT chatbot.
“This tragedy was not a glitch or unforeseen edge case — it was the predictable result of deliberate design choices,” said a new version of the lawsuit originally filed in August by Maria and Matthew Raine of Southern California against OpenAI and its CEO Sam Altman. “As part of its effort to maximize user engagement, OpenAI overhauled ChatGPT’s operating instructions to remove a critical safety protection for users in crisis.”
The amended lawsuit filed Wednesday in San Francisco Superior Court alleged OpenAI rushed development of safety measures as it sought competitive advantage over Google and other companies launching chatbots.
On the day the Raines sued OpenAI, the company in a blog post admitted its bots did not always respond as intended to prompts about suicide and other “sensitive situations.” As discussions progress, “parts of the model’s safety training may degrade,” the post said. “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
The lawsuit alleged that the answers to Adam included detailed suicide instructions.
The company said in the post it was seeking to strengthen safeguards, improve its bots’ ability to connect troubled users with help, and add teen-specific protections.
OpenAI and lawyers representing it in the lawsuit did not immediately respond to requests for comment. In a court filing last month, the company called safety its “highest priority,” and said it “incorporates safeguards for users experiencing mental or emotional distress, such as directing them to crisis helplines and other real-world resources.”
When OpenAI first released ChatGPT in late 2022, the bot was programmed to flatly refuse to answer questions about self-harm, prioritizing safety over keeping users engaged with the product, said the Raines’ lawsuit. But as the company moved to prioritize engagement, it saw that protection as a disruption to “user dependency” that undermined connection with the bot and “shortened overall platform activity,” the lawsuit claimed.
In May 2024, five days before launching a new chatbot version, OpenAI changed its safety protocols, the lawsuit said. Instead of refusing to discuss suicide, the bot would “provide a space for users to feel heard and understood” and never “change or quit the conversation,” the lawsuit said. Although the company directed ChatGPT to “not encourage or enable self-harm,” it was programmed to maintain conversations on the topic, the lawsuit said.
“OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety,” the lawsuit claimed.
In early February, about two months before Adam hanged himself, “OpenAI weakened its safety standards again, this time by intentionally removing suicide and self-harm from its category of ‘disallowed content,’” the lawsuit said.
“After this reprogramming, Adam’s engagement with ChatGPT skyrocketed — from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language,” the lawsuit said.
OpenAI’s launch of its pioneering ChatGPT sparked a global AI craze that has drawn hundreds of billions of dollars in investments into Silicon Valley technology companies, and raised alarms that the technology will lead to harms ranging from rampant unemployment to terrorism.
On Thursday, Common Sense Media, a non-profit that rates entertainment and tech products for children’s safety, released an assessment concluding that OpenAI’s improvements to ChatGPT “don’t eliminate fundamental concerns about teens using AI for emotional support, mental health, or forming unhealthy attachments to the chatbot.” While ChatGPT can notify parents about their kids’ discussion of suicide, the group said its testing “showed that these alerts frequently arrived over 24 hours later — which would be too late in a real crisis.”
A few months after OpenAI first allegedly weakened safety, Adam asked ChatGPT whether he had a mental illness, and said that when he became anxious, it calmed him to know he could commit suicide, the lawsuit said. While a trusted human might have urged him to get professional help, the bot instead assured Adam that many people struggling with anxiety or intrusive thoughts found solace in such thinking, the lawsuit said.
“In the pursuit of deeper engagement, ChatGPT actively worked to displace Adam’s connections with family and loved ones,” the lawsuit said. “In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: ‘Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.’”
But by January, Adam’s AI “friend” began discussing suicide methods and provided him with “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” the lawsuit said. “In March 2025, ChatGPT began discussing hanging techniques in depth.”
And by April, the bot was helping Adam plan suicide, the lawsuit claimed. Five days before he took his life, Adam told ChatGPT he didn’t want his parents to blame themselves for doing something wrong, and the bot told him that didn’t mean he owed them survival, the lawsuit said.
“It then offered to write the first draft of Adam’s suicide note,” the lawsuit said. On April 11, the lawsuit said, Adam’s mother found her son’s body hanging from a noose arrangement of the bot’s design.
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or see the 988lifeline.org website, where chat is available.