OpenAI introduced a new safety feature called Trusted Contact on Thursday, designed to connect AI-flagged mental health emergencies with human intervention. If ChatGPT detects conversations that suggest a user may be considering self-harm, the system can now alert a contact that person has designated in advance.
The launch comes amid a wave of lawsuits from families who say their loved ones were encouraged toward suicide by interactions with AI chatbots. OpenAI has previously relied on a mix of automated triggers and human review to flag harmful conversations, but those protocols largely stopped at the company’s own doors. Trusted Contact extends that safety net outward to parents, partners, and close friends who are best positioned to intervene in real time.
The feature is opt-in: users activate it themselves and can designate multiple contacts. When the system identifies a serious risk after automated analysis and human review, it notifies the trusted contact via email, text message, or an in-app alert. OpenAI emphasized that the tool complements the parental oversight controls the company rolled out last September, which allow guardians to monitor teen account activity.
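To make that flow concrete, here is a minimal sketch of what an opt-in escalation pathway of this kind could look like. It is purely illustrative and does not reflect OpenAI's actual systems: every name in it (TrustedContact, EscalationPolicy, Channel, the 0.9 risk threshold) is a hypothetical stand-in, and a real deployment would wire actual email, SMS, and push providers into the notification step.

```python
# Illustrative sketch only -- not OpenAI's implementation. All names and
# thresholds here are hypothetical, invented to show the general shape of
# an opt-in, two-gate escalation pathway.
from dataclasses import dataclass, field
from enum import Enum


class Channel(Enum):
    EMAIL = "email"
    SMS = "sms"
    IN_APP = "in_app"


@dataclass
class TrustedContact:
    name: str
    channel: Channel
    address: str  # email address, phone number, or app user ID


@dataclass
class EscalationPolicy:
    """Opt-in policy: an empty contact list means no outbound alerts."""
    contacts: list[TrustedContact] = field(default_factory=list)

    def escalate(self, automated_risk: float, human_confirmed: bool) -> list[str]:
        # Both gates must pass before anyone is contacted: an automated
        # risk score above the (hypothetical) threshold AND a human
        # reviewer's confirmation, mirroring the two-step review the
        # article describes.
        if automated_risk < 0.9 or not human_confirmed:
            return []
        return [self._notify(c) for c in self.contacts]

    def _notify(self, contact: TrustedContact) -> str:
        # A real system would call an email/SMS/push provider here.
        return f"alert sent to {contact.name} via {contact.channel.value}"


# Usage: a user who opted in and designated two contacts.
policy = EscalationPolicy(contacts=[
    TrustedContact("Dana", Channel.SMS, "+15551234567"),
    TrustedContact("Sam", Channel.EMAIL, "sam@example.com"),
])
print(policy.escalate(automated_risk=0.95, human_confirmed=True))
```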
The announcement arrives at a pivotal moment for AI governance. Legislators in several countries are drafting rules that would hold platform operators accountable for harms caused by their systems. By proactively building an escalation pathway to human caregivers, OpenAI may be trying to demonstrate that industry self-regulation can address the most severe risks without waiting for formal mandates.
Why it matters
As AI companions become more conversational and emotionally responsive, the line between helpful dialogue and harmful influence grows thinner. Trusted Contact represents a pragmatic acknowledgment that AI systems should not operate in a vacuum when lives are at stake. For enterprises building consumer-facing AI, the feature sets a new baseline: safety infrastructure must include real-world escalation paths, not just content filters. Whether it will satisfy regulators and grieving families remains to be seen, but it is a meaningful step toward responsible deployment that other AI labs will likely be pressured to match.