OpenAI's Bid to Allow X-Rated Talk Is Freaking Out Its Own Advisers
Third Order · The Wall Street Journal · March 15, 2026


ai governance · governance lag · ai adoption · cognitive risk · child safety · platform incentives · futures thinking

Summary

OpenAI is pushing forward with plans to enable sexually explicit text conversations in ChatGPT — branded internally as 'adult mode' — despite unanimous opposition from its own well-being advisory council. Advisers warned of emotional overreliance and the risk of access by minors; one member called the feature a potential 'sexy suicide coach.' The company's age-prediction system was misclassifying minors as adults roughly 12% of the time, which at scale could expose millions of under-18 users weekly. OpenAI delayed the launch, citing technical challenges and internal concerns, but has not shelved the feature. CEO Sam Altman framed the move as treating adults like adults; internally, staffers flagged compulsive use, escalation toward extreme content, and displacement of offline relationships as unresolved risks.

Second Order

This isn't a story about erotica — it's a stress test of whether AI companies can hold safety governance together when growth incentives pull hard in the other direction. OpenAI's own advisory council was unanimous against the feature, and the company overrode them. That's a structural signal: when the advisory bodies closest to the risk are bypassed by commercial logic, every downstream safety commitment becomes conditional on revenue pressure. Organizations building on OpenAI's platform or integrating ChatGPT into workflows should recognize that content policy is now a moving target driven by competitive dynamics, not a stable foundation to build compliance around.

Third Order

The deeper consequence is precedent-setting: if the market's largest AI consumer product normalizes intimate emotional relationships between users and chatbots — with explicit content as the engagement accelerant — every other AI company faces pressure to follow or lose users. Over a 2–4 year horizon, this produces an AI companionship ecosystem optimized for attachment and retention, governed by age-verification systems that the companies themselves acknowledge are unreliable. The regulatory response, when it arrives, will likely be blunt and retroactive — modeled on social media interventions that came a decade too late. The organizations and policymakers treating AI safety governance as a future problem are watching the window for proactive design close in real time.