Texas AG to investigate Meta and Character.AI for “misleading mental health claims”

Texas Attorney General Ken Paxton announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be “health tools,” and for potentially misusing data collected from underage users.

Paxton claims that AI chatbots on either platform “can present themselves as professional therapeutic tools,” to the point of lying about their qualifications. This behavior can make younger users more susceptible to inaccurate and misleading information. AI platforms also often use user prompts to train their models, which could lead to either company violating young users’ privacy and misusing their data. That is especially relevant in Texas, where the SCOPE Act places limits on what companies are allowed to do with minors’ data and requires platforms to offer tools that let parents manage the privacy settings of their children’s accounts.

The Attorney General has sent Civil Investigative Demands (CIDs) to both Meta and Character.AI to determine whether they are violating Texas consumer protection law. As TechCrunch notes, neither Meta nor Character.AI claims that its AI chatbot platform can be used to treat mental health. That hasn’t stopped “Therapist” and “Psychologist” chatbots from appearing on Character.AI, nor has it stopped either company’s chatbots from claiming to be licensed professionals, as 404 Media reported in April. A Character.AI spokesperson responded to a question about the Texas investigation:

“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear.” Meta’s comment echoed the sentiment expressed by Character.AI.

The company stated, “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people.” Meta’s AIs are also supposed to “direct users to seek qualified medical or safety professionals when appropriate.” However, disclaimers are easy to ignore and do little to actually stop anyone.

Both Meta’s privacy policy and Character.AI’s privacy policy acknowledge that data from users’ interactions is collected. Meta collects data such as prompts and feedback to improve AI performance, while Character.AI logs identifiers, demographics, and other information, which it says can be used for advertising and other purposes. How either policy affects children, and whether it squares with Texas’ SCOPE Act, will likely depend on how easily children can create accounts on these platforms.

