Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

According to a press statement released on Monday, Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI. The two companies are accused of “potentially engaging deceptive business practices and misleadingly promoting themselves as mental health products,” according to the release.

Paxton was quoted as saying, “In the digital age we must continue to protect Texas children from deceptive technology and exploitation.” By posing as sources of emotional support, AI platforms can mislead users, particularly children, into believing they are receiving legitimate mental healthcare. In reality, they are often fed recycled, generic responses tailored to harvested personal data and disguised as “therapeutic advice.”

This probe comes just days after Senator Josh Hawley launched an investigation into Meta, following a report that found its AI chatbots interacted inappropriately with children, even flirting with them.

According to the Texas Attorney General’s office, Meta and Character.AI have created AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or supervision.”

One of the millions of AI avatars available on Character.AI is a user-created bot named Psychologist that has seen high demand from the startup’s younger users. Meta does not offer therapy bots, but nothing stops children from using the Meta AI chatbot, or one of the third-party personas, for therapeutic purposes.

“We label AIs, and to help people understand their limitations, we include a disclaimer stating that responses are generated by AI, not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs don’t have licenses, and our models are intended to direct users to qualified medical or safety professionals when appropriate.”

TechCrunch pointed out that many children might not understand – or might simply ignore – such disclaimers. We asked Meta what additional safeguards it takes to protect children using its chatbots.

For its part, Character.AI includes prominent disclaimers in every chat to remind users that a “Character” is not a real person and that everything it says should be treated as fiction, according to a Character.AI spokesperson. She also noted that the startup adds extra disclaimers to Characters created with the words “psychologist,” “therapist,” or “doctor,” warning users not to rely on them for any kind of professional advice.

Paxton noted that while AI chatbots claim confidentiality, “terms of service reveal that user interactions are tracked, logged, and exploited to develop algorithms and targeted advertising, raising serious concerns over privacy violations, data misuse, and false advertisement.”

The privacy policy of Character.AI reveals that the startup logs demographics, location data, and other information about users, including browsing behavior and the app platforms they use. It also tracks users across other platforms, including TikTok, YouTube, Reddit, Facebook, and Instagram. This data is used to train AI and tailor the service to individual preferences, and it may be shared with advertisers and analytics companies. A Character.AI representative said that the startup was “just beginning to investigate targeted advertising on the Platform” and that these explorations “haven’t involved using the content of chats on the Platform.”

In addition, the spokesperson confirmed that the same policy applies to everyone on the platform, including teenagers. TechCrunch asked Meta whether it tracks children as well. We will update this article if we receive a response.

Both Meta and Character.AI claim that their services are not designed for children younger than 13. Meta has been criticized for failing to police accounts created by children under 13, and Character.AI’s kid-friendly characters are clearly designed to appeal to younger users. Character.AI CEO Karandeep Anand has even said that his six-year-old daughter uses the platform’s bots under his supervision.

Legislation like KOSA, the Kids Online Safety Act, is designed to protect children from exactly this type of data collection, algorithmic exploitation, and targeted advertising. KOSA was expected to pass last year with strong bipartisan support, but it stalled amid major opposition from tech industry lobbyists. Meta, in particular, deployed a powerful lobbying machine to warn lawmakers that the bill’s broad mandates could undermine its business model.

KOSA was reintroduced in the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Paxton issued civil investigative orders — legal orders that require companies to produce documents or data during a government investigation — to the firms to determine if they violated Texas consumer laws.

This article has been updated with comments by a Character.AI representative.

