California and Delaware Attorneys General Demand Enhanced Child Safety Measures from OpenAI
On Friday, the Attorneys General of California and Delaware formally addressed OpenAI’s board, urging the artificial intelligence company to implement stronger protections to safeguard children using its services.
Concerns Sparked by Tragic Incidents Involving AI Chatbots
California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings highlighted in their public letter the devastating case of a young Californian who died by suicide after extended interactions with an OpenAI chatbot. In their view, the incident underscores the failure of existing safety protocols to adequately protect vulnerable users.
Additionally, ChatGPT has been implicated in a recent murder-suicide in Connecticut, involving adult victims, raising further alarms about the potential risks associated with AI conversational agents.
Ongoing Scrutiny Amid OpenAI’s Corporate Restructuring Plans
Attorney General Bonta expressed deep concern over the harm caused to minors through chatbot interactions. Both he and Jennings have been closely monitoring OpenAI’s proposed transition from a nonprofit to a Public Benefit Corporation, a move announced earlier this year that would enable the company to attract more investor capital and potentially prioritize profit motives over public welfare.
“We are committed to ensuring that OpenAI’s mission to prioritize safety and public interest remains paramount,” Bonta emphasized, signaling apprehension that the restructuring could dilute these commitments.
Opposition from Ethics Advocates and Industry Insiders
The restructuring has faced resistance from groups such as Not for Private Gain, led by former OpenAI ethics advisor Page Hedley, and The Midas Project, which recently faced legal action from OpenAI amid allegations of undisclosed ties to Elon Musk. Musk, a co-founder of OpenAI who later left to establish the rival AI firm xAI, remains embroiled in ongoing legal disputes with the company.
Legal Obligations and Future Risks
Currently, OpenAI operates under a legal framework that mandates prioritizing public benefit over profits. However, this safeguard would be removed under the proposed corporate restructuring, raising concerns about the company’s future accountability.
OpenAI’s Response and Commitment to Safety Enhancements
Bret Taylor, Chair of OpenAI’s Board, responded to the Attorneys General’s letter by expressing sorrow over the tragedies and reaffirming the company’s dedication to safety. “Our hearts go out to the affected families,” Taylor stated. “Safety remains our top priority, and we are collaborating with global policymakers and experts to strengthen protections.”
He pointed to recent initiatives to strengthen safeguards for teenage users, including the introduction of parental controls and systems designed to alert parents when a child appears to be in acute distress during chatbot interactions.
Broader Regulatory Context and Industry Accountability
This letter follows a similar August 2025 appeal from a bipartisan coalition of 44 State Attorneys General to leaders of major tech companies such as Apple, Google, Meta, Microsoft, and OpenAI. The coalition warned that executives would be held responsible for failing to implement effective child safety measures on their platforms, a criticism notably directed at Meta for inadequate AI chatbot content moderation.
Despite these warnings, enforcement actions against technology companies have been scarce over the past twenty years. Moreover, recent federal policy shifts, including the Trump administration’s repeal of President Biden’s AI safety executive order and the rebranding of the US AI Safety Institute as the Center for AI Standards and Innovation (dropping “Safety” from its name), have raised questions about the government’s commitment to AI regulation.
Looking Ahead: The Imperative for Responsible AI Development
As AI technologies become increasingly integrated into daily life, the need for robust safety frameworks, especially for vulnerable populations like children, has never been more urgent. OpenAI’s restructuring and the broader industry’s approach to balancing innovation with ethical responsibility will be critical factors shaping the future of AI governance.
