ChatGPT rolls out parental controls and safety notifications

OpenAI is rolling out Parental Controls for ChatGPT, available to all users starting today (30 September 2025).

As detailed on OpenAI’s official blog, the new Parental Controls let a family link their ChatGPT accounts together and configure settings for a “safe, age-appropriate” experience.

How to set up Parental Controls in ChatGPT

To begin, you will need at least two ChatGPT accounts: one controlled by the parent, and one (or more) for each child or teen.

The “add child” button and how linked accounts appear under ChatGPT’s Parental Controls.

OpenAI

The parent’s ChatGPT account sends an invitation to the child’s account, and the child will need to accept it.

Once accepted, the two accounts are linked with Parental Controls. The child can unlink their account at any time, but the parent’s account will be notified if that happens.

What are the limits of ChatGPT’s Parental Controls?

Automatic protections reduce exposure to “graphic content, viral challenges, sexual, romantic, or violent roleplay, extreme beauty ideals”, among other categories. Parents can disable these protections from their own accounts, while the linked child account cannot adjust them.

A sample of Parental Control settings available in ChatGPT.

OpenAI

Other Parental Controls can be enabled separately: quiet hours (specific times when ChatGPT cannot be used), disabling Voice Mode (removing the option entirely), disabling memory (ChatGPT won’t save or use memories when responding), disabling image generation, and opting the child’s account out of model training.

Parent accounts also get new Safety Notifications. If ChatGPT detects potential signs that a child may be at risk of harming themselves, the conversation is reviewed by an internal team trained in mental health and teen safety. If the risk appears acute, the parent is notified via email, text message, or push alert on their phone. Parents can opt out of these notifications.

OpenAI cautions that the system is still a work in progress, and it is continuing to refine how it identifies signs of distress. For example, it has yet to implement reaching out to emergency services, and the blog post does not address situations where a child is experiencing harm from their own parents.

OpenAI also reassures users that children will retain their privacy, and that such information will only be shared with parents when absolutely necessary.

What is ChatGPT doing to improve the experience for young users?

OpenAI has also created a resource page that explains how Parental Controls and Safety Notifications work, and outlines all the tools available within the Parental Controls settings.

In the future, ChatGPT may get an age-prediction system that detects whether a user is under 18 and automatically applies “teen-appropriate” settings.

Source: ChatGPT (OpenAI)
