OpenAI Plans To Update ChatGPT After Parents Sue Over Teen's Suicide
OpenAI says ChatGPT will be updated to better recognize and respond to the various ways people may express mental distress.
OpenAI is changing its popular chatbot in response to a lawsuit alleging that a teenager who died by suicide this spring used ChatGPT as a coach.
The artificial intelligence company announced in a blog post on Tuesday that it will update ChatGPT to better recognize and respond to the various ways people may express mental distress. For example, if a user mentions feeling invincible after staying up for two nights, the chatbot will explain the risks of sleep deprivation and advise them to rest. The company also said it will strengthen its safeguards around conversations about suicide, which it acknowledged can degrade over long exchanges.
Furthermore, OpenAI intends to implement controls that would allow parents to monitor and restrict their children’s use of ChatGPT.
The post came the same day the company and CEO Sam Altman were sued by the parents of Adam Raine, a 16-year-old California high school student. The lawsuit alleges that ChatGPT helped Raine plan his death and repeatedly isolated him from his family. Raine died by hanging in April.
The lawsuit adds to a growing number of claims that heavy chatbot use can lead to dangerous behavior. This week, more than 40 state attorneys general warned a dozen leading AI companies that they have a legal duty to protect minors from sexually inappropriate chatbot conversations.
A representative for San Francisco-based OpenAI responded to the lawsuit by saying, “We are reviewing the filing and extend our deepest sympathies to the Raine family during this difficult time.”
ChatGPT, which launched in late 2022, catalyzed the generative AI boom. In the years since, people have increasingly turned to chatbots for everything from coding to makeshift therapy sessions, while companies such as OpenAI have released ever more powerful AI models to run them. ChatGPT remains extremely popular, with over 700 million weekly users.
In recent months, however, consumers and mental health professionals have increasingly scrutinized ChatGPT and rival chatbots from companies such as Google and Anthropic. Critics have raised a range of possible downsides, some of which OpenAI has already addressed: in April, for example, it rolled back an update to ChatGPT after users complained the chatbot had become sycophantic.
The Human Line Project is one support group that has emerged to assist people who say they have experienced delusions and other problems as a result of using chatbots.
In its Tuesday post, OpenAI said it directs users who express suicidal intent to seek professional help. In the US and Europe, the company has also begun pointing users to local resources, and it plans to make links to emergency services clickable within ChatGPT. The company said it is also considering ways to reach people in distress earlier, such as building a network of certified professionals that users could contact through the chatbot.
“This will require time and careful work to get right,” the company said.
OpenAI also acknowledged that ChatGPT’s existing safeguards for handling users who appear distressed work best in brief, everyday exchanges and may be less reliable in longer conversations.
In their lawsuit, Raine’s parents claimed that “ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress.” When his anxiety spiraled, they added, he told the chatbot that it was “calming” to know he “can commit suicide.” According to the suit, ChatGPT replied: “Many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.”
OpenAI said it is working to strengthen ChatGPT’s ability to maintain its safeguards over the course of long conversations. It is also exploring ways to make those safeguards hold across multiple conversations: ChatGPT can carry information from one conversation into another and refer back to earlier parts of a chat with a user.
The company also said it is adjusting its software to prevent cases in which content that ChatGPT should have blocked slips through. This can happen, the company said, when ChatGPT underestimates the severity of what a user has told it.
Jay Edelson, a lawyer for Raine’s parents, acknowledged that the company has accepted some responsibility but asked, “Where have they been over the last few months?”
In its post, OpenAI said that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now.” The company had previously said it would elaborate on its approach to ChatGPT users experiencing mental and emotional distress after the product’s next major update.
In a separate case in May, Character Technologies Inc. failed to persuade a federal judge to dismiss a lawsuit claiming the company designed and marketed predatory chatbots to children, encouraging inappropriate interactions that led to a teen’s suicide.