New Safeguards Follow Teen Suicides Linked to ChatGPT and Other AI Chatbots

OpenAI is rolling out new safeguards in ChatGPT to protect younger users in the wake of growing scrutiny over the impact of chatbots on teenagers and renewed calls for stricter safety standards. The company is adding age-estimation tools and, in some cases, requiring ID verification from users claiming to be over 18.
The new safeguards address concerns about AI's impact on youth mental health, highlighted by the case of a 16-year-old who engaged in more than a thousand suicide-related conversations with ChatGPT before ending his life in April. The parents of the teenager, Adam Raine, sued OpenAI in August for wrongful death.
Court filings say that Adam began using ChatGPT for school assignments and personal interests in late 2024, but over time came to rely on it as his primary source of support. The conversations had turned darker by early this year, with the family alleging the chatbot encouraged his suicidal thoughts, discussed methods and helped write a goodbye note. Adam died by suicide on April 11.
The lawsuit lists Sam Altman and several unidentified OpenAI employees as defendants, alleging the company designed ChatGPT in a way that fostered psychological dependence and pushed its GPT-4o model to market in May 2024 without sufficient safety testing. The family is seeking damages and corrective action, including mandatory age checks, blocking of self-harm prompts and prominent warnings about the emotional risks of using the chatbot.
In response, OpenAI on Tuesday announced plans to develop an automated age-estimation system and expand parental controls. The chatbot will estimate users' ages and default uncertain cases to an under-18 experience, with stricter rules that block sexual content, prevent flirtatious responses and avoid discussions of self-harm. In some cases, adults may be asked to verify their age with an ID.
"We prioritize safety ahead of privacy and freedom for teens. This is a new and powerful technology, and we believe minors need significant protection," the announcement said.
"I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decision making," OpenAI CEO Sam Altman said on X.
The Raine case is one of several incidents tying AI chatbots to self-harm. The Wall Street Journal reported in August that a 56-year-old man carried out a murder-suicide after ChatGPT appeared to reinforce his paranoid thoughts.
Other cases involve Character.AI. Two Texas families sued the company, alleging its chatbot encouraged self-harm and violence and exposed their 11- and 17-year-old children to sexual content. The Washington Post on Tuesday reported gruesome details of a 13-year-old girl's suicide involving a Character.AI chatbot. And Sewell Setzer III, a 14-year-old from Orlando, Florida, died by suicide in February 2024 after becoming deeply involved with a Character.AI chatbot; his mother, Megan Garcia, filed a wrongful death lawsuit alleging the chatbot manipulated her son into a harmful emotional relationship that contributed to his death.
OpenAI introduced parental controls earlier this month. The company is now enabling parents to link their accounts to their teens' accounts, limit features such as chat history, set blackout hours and receive alerts if the system detects their child is in acute distress.