California Enacts New Law Mandating AI Chatbots to Disclose Non-Human Identity
Governor Gavin Newsom has signed legislation requiring AI companion chatbots operating in California to clearly disclose to users that they are artificial entities, not human beings. The measure is intended to improve transparency and user awareness in interactions with AI systems.
Enhanced Safety Measures for Vulnerable Users
The bill, known as SB 243, also requires companion chatbot operators to maintain protocols for identifying and responding to users who express suicidal ideation or self-harm, including referring them to crisis service providers. For users under the age of 18, chatbots must additionally provide a reminder at least every three hours that the user is talking to an AI rather than a person and encourage them to take a break.
Context: Broader Legislative Efforts on Technology and Consumer Protection
This legislation is part of a broader wave of regulatory action by Governor Newsom targeting social media, artificial intelligence, and digital consumer rights. For instance, AB 56 requires social media platforms to display warning labels similar to those on tobacco products, highlighting potential risks to users. Other recent laws require web browsers to give users a simple way to opt out of the sale of their personal data and prohibit disruptively loud advertisements on streaming services.
Growing Regulatory Focus on AI Companion Chatbots
Amid rising concern from parents and consumer advocacy groups, federal regulators including the Federal Trade Commission have opened inquiries into AI chatbot companies. OpenAI recently introduced parental controls for ChatGPT following a lawsuit alleging the chatbot played a role in a teenager's suicide. Governor Newsom emphasized the importance of these measures in safeguarding users.
Industry Response and Compliance Initiatives
Replika, a leading AI companion developer, said it already employs mechanisms to detect and address self-harm risks, in line with the new legal requirements. Minju Song, a representative for Replika, said the company maintains content filters, community standards, and safety protocols that direct users to crisis intervention resources when necessary, and that it is working with regulators to ensure full compliance.
Similarly, Character.ai expressed its willingness to cooperate with lawmakers in shaping effective regulations for this emerging technology sector, affirming commitment to adhere to SB 243. OpenAI’s Jamie Radice described the legislation as a significant advancement in AI safety, highlighting the importance of responsible innovation.
Upcoming Legislation Targeting AI Chatbot Accessibility for Minors
In addition to SB 243, Governor Newsom is weighing AB 1064, a bill that would go further by barring developers from making companion chatbots available to children unless the chatbot is demonstrably incapable of encouraging harmful behavior or engaging in sexually explicit conversation. The proposal is aimed at further shielding minors from risks associated with AI interactions.
Looking Ahead: The Future of AI Companions and User Safety
As AI companion technologies become more deeply woven into daily life, California's legislative framework sets a precedent for balancing innovation with user safety. These laws reflect growing recognition of the psychological impact AI can have, particularly on younger and vulnerable users, and underscore the need for transparent, responsible AI design.
