OpenAI has begun rolling out a new age prediction feature for ChatGPT, aiming to protect underage users while allowing adults access to more nuanced content. This system estimates a user’s age to ensure that those under 18 are shielded from sensitive or potentially harmful material. The move comes as OpenAI prepares to launch adult-oriented features later this year, giving verified adults greater freedom in their interactions with the AI.
Previously, minors who disclosed their age during signup were automatically restricted from viewing content involving graphic violence, sexual material, or other potentially unsafe topics. Now, ChatGPT will also estimate the ages of users who never state them explicitly, adding an extra layer of safety across the platform.
OpenAI’s age prediction model uses a combination of behavioral and account-level signals to estimate a user’s age. These include account longevity, typical activity hours, usage patterns, and any previously stated age. By analyzing these factors, the AI can flag accounts likely belonging to minors and automatically apply additional safeguards.
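OpenAI has not published the internals of this model, but the general idea of combining account-level signals into an age estimate can be sketched in a few lines of Python. The sketch below is purely illustrative: the signal names, weights, threshold, and the `estimate_is_minor` function are assumptions made for the sake of example, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    # Illustrative signals only; the real model's features are not public.
    account_age_days: int              # account longevity
    late_night_activity_ratio: float   # share of sessions between 22:00 and 06:00
    stated_age: Optional[int]          # age provided at signup, if any

def estimate_is_minor(signals: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True if the account should receive under-18 safeguards.

    A hypothetical weighted-score heuristic, not OpenAI's model.
    """
    # A self-reported age under 18 is treated as decisive.
    if signals.stated_age is not None:
        return signals.stated_age < 18

    score = 0.0
    if signals.account_age_days < 90:             # assumed: newer accounts skew younger
        score += 0.3
    if signals.late_night_activity_ratio > 0.4:   # assumed: heavy late-night use
        score += 0.2
    # ... further behavioral signals would be combined here ...

    return score >= threshold

# Example: a new account with no stated age and frequent late-night sessions
flagged = estimate_is_minor(AccountSignals(account_age_days=30,
                                           late_night_activity_ratio=0.6,
                                           stated_age=None))
print(flagged)  # True under these assumed weights
```

In practice such a classifier would be trained and tuned rather than hand-weighted, which is consistent with OpenAI's statement that the system is refined as prediction accuracy is measured over time.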
“This enables us to treat adults like adults and use our tools in the way that they want, within the bounds of safety,” OpenAI said in a statement. The system is designed to improve over time, with ongoing refinements based on how accurately the model predicts ages.
Accounts identified as belonging to users under 18 will face stricter content filters. This includes limiting exposure to:
Graphic violence and gore
Sexual, romantic, or violent roleplay
Depictions of self-harm
Viral challenges encouraging risky behavior
Content promoting extreme beauty standards, unhealthy dieting, or body shaming
By implementing these safeguards, OpenAI aims to create a safer environment for minors while still preparing to expand adult features responsibly.
The rollout of age prediction is a precursor to ChatGPT’s upcoming “adult mode,” set to launch later this year. Verified adult users will gain access to more mature and erotic content, along with conversations that are currently blocked by safety filters. OpenAI hopes this feature will attract users seeking more permissive interactions, while maintaining strict protections for younger audiences.
The company is also learning from past controversies. Age prediction adds to recent safety updates, including parental controls introduced after a wrongful death lawsuit involving an underage ChatGPT user. OpenAI faces multiple legal claims related to underage interactions, making these protections a priority.
OpenAI acknowledges that no system is perfect. If a user’s age is estimated incorrectly, the AI may apply restrictions that do not fit, or mistakenly grant access to content that should be blocked. OpenAI says it continuously refines the model, using behavioral signals to improve accuracy over time, and users who are flagged incorrectly can follow prompts to update their information so that the appropriate filters are applied.
OpenAI’s approach highlights a careful balancing act: protecting minors while offering adults more control and personalization. With competitors such as Elon Musk’s Grok exploring looser restrictions, OpenAI aims to innovate without inviting controversies over deepfakes or other unsafe material.
Age prediction is now a key part of ChatGPT’s evolving ecosystem, setting the stage for an adult-friendly experience without compromising the safety of younger users. As OpenAI refines its model, users can expect a platform that adapts intelligently to age, behavior, and safety requirements.