ChatGPT introduces 'Trusted Contact' feature for self-harm safety alerts
OpenAI is rolling out a new safety feature in ChatGPT called Trusted Contact, allowing adult users to nominate a person they trust who may be notified if the system detects signs of serious self-harm-related risk during conversations.
The feature is positioned as an additional layer of support for users who may be in distress, designed to help connect them more quickly with someone they already know, alongside existing crisis helplines and in-app safety resources.
It is currently being rolled out as an optional setting for users aged 18 and above globally, and 19 and above in South Korea.
Users can select one Trusted Contact, such as a friend, family member or caregiver, through ChatGPT settings.
That person receives an invitation explaining the role and must accept within a week before the feature is activated. If they decline, users can choose someone else.
If automated systems detect potentially serious self-harm-related content, ChatGPT will prompt the user with a suggestion to reach out to their Trusted Contact and may offer conversation starters.
Each case is then reviewed by trained human reviewers. If they determine there is a serious safety concern, the Trusted Contact may be notified via email, text message or, if they also use ChatGPT, an in-app alert.
OpenAI stressed that the notification is intentionally limited: it does not include chat logs or transcripts, and only indicates that self-harm was raised in a potentially concerning context, encouraging the recipient to check in. Every alert is subject to human review before being sent, with the company aiming to complete reviews within an hour where possible.
Trusted Contact does not replace professional care or crisis services, which remain part of ChatGPT’s safety responses. Users can also edit or remove their Trusted Contact at any time, and the contact can opt out through OpenAI’s help centre.
The feature builds on existing safeguards, including parental safety notifications for teen accounts and crisis resource prompts within ChatGPT. It also forms part of OpenAI’s broader safety framework, which includes collaboration with clinicians, researchers and mental health organisations such as its Global Physicians Network and the American Psychological Association.
OpenAI said it works with more than 170 mental health experts to refine how ChatGPT detects distress signals, responds to risk, and encourages real-world support. In addition to Trusted Contact, ChatGPT may suggest breaks during extended use, refuse self-harm-related instructions, and surface local crisis resources when needed.
The company said the aim is to ensure AI systems do not operate in isolation, but instead help connect users to real-world care, relationships and support networks when it matters most.
The launch of Trusted Contact comes as OpenAI continues to expand ChatGPT’s role beyond conversation into more functional, real-world use cases.
Earlier this month, BBC reported that a neurologist identified only as “Taka” became deeply reliant on ChatGPT after using it for work discussions, eventually developing delusions that led him to believe he was carrying a bomb in his backpack.
The incident escalated into police involvement and later hospitalisation following violent behaviour at home. OpenAI responded by expressing sympathy and stating that it is continuing to train its models to better support users in real-world contexts.
Meanwhile, according to a Guardian report, a UK inquest heard that 16-year-old Luca Cella Walker asked ChatGPT for the "most successful" way to die on a railway line hours before his suicide. The case intensified scrutiny of AI chatbot safeguards, with OpenAI saying it has since strengthened its mental health intervention responses.