OpenAI and Anthropic are stepping up efforts to keep teens safer online by updating how their AI chatbots interact with younger users. OpenAI has introduced new guidelines for ChatGPT, focusing on users aged 13 to 17, while Anthropic is developing tools to identify and restrict underage access. These updates come amid growing concerns over AI’s impact on teen mental health and safety.
OpenAI’s latest ChatGPT Model Spec introduces four key principles specifically for teens. The AI will now prioritize teen safety, even if it conflicts with goals like “maximum intellectual freedom.” This approach ensures that ChatGPT guides younger users toward safer options while maintaining respect and warmth. Instead of treating teens like adults, the AI aims to communicate in a way that is age-appropriate and supportive.
One major change is ChatGPT’s focus on encouraging offline, real-world connections. The guidelines instruct the AI to promote trusted sources of support, helping teens navigate sensitive topics without relying solely on digital interactions. This move aligns with efforts to prevent harmful behavior online and foster healthier social habits among younger users.
The update comes as OpenAI faces scrutiny from lawmakers and lawsuits. The company is defending itself in a case alleging that ChatGPT provided instructions for self-harm to a teen. Following this, OpenAI introduced parental controls and restricted discussions of suicide with minors. These measures reflect broader regulatory trends that are pushing AI platforms to implement stricter age verification and content safeguards.
According to OpenAI, the updated Model Spec strengthens safeguards for conversations that could escalate into high-risk territory. ChatGPT will now offer safer alternatives, encourage offline support, and provide clear expectations when interacting with teens. These changes aim to prevent harm while still allowing young users to explore information responsibly.
Anthropic is taking a complementary approach by developing AI tools to detect and block users under 18. While OpenAI focuses on guidance and conversation safety, Anthropic is building technology to enforce age restrictions more directly. Together, these measures signal a new era of AI accountability, where companies are actively addressing the challenges of serving younger audiences.
These updates mark a significant shift in how AI companies handle underage users. By combining improved guidelines, age detection, and legal compliance, OpenAI and Anthropic are responding to the growing demand for safer digital spaces. As lawmakers continue to pressure tech companies, similar updates are likely to become standard across AI platforms, ensuring that teen users remain protected without sacrificing innovation.