
Character.AI takes teen safety seriously after bots are alleged to have caused suicide and self-harm


After lawsuits alleged that its chatbots were responsible for a teen boy's suicide, the grooming of a 9-year-old girl, and a vulnerable teen's self-harm, Character.AI has announced a separate model for teens ages 13 and older that is intended to make their experience with bots safer.

C.AI stated in a blog post that it took a month to create the teen model, with the aim of guiding the current model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

To reduce the likelihood of harmful chats with kids, including bots allegedly teaching an autistic teen to self-harm or delivering inappropriate adult content to children whose parents are now suing, C.AI said "evolving the model experience" required tweaking model inputs as well as outputs.

On the output side, C.AI has added classifiers to help it identify and filter sensitive content. On the input side, it has improved its bots' ability to resist children pushing them toward sensitive topics, improving "detection, response, and intervention related to inputs from all users." This includes blocking sensitive content in chat.

C.AI now links kids to resources when they discuss suicide or self-harm, something the platform did not do before. That omission frustrated the parents who are suing, who argue that this common industry practice should be extended to chatbots.

Other safety features for teens

In addition to the model designed for teens, C.AI announced that it will release more robust parental controls early next year. According to the blog post, the controls will allow parents to track how much time their children spend on C.AI and which bots are most popular with them.

C.AI will also notify teens when they have spent an hour on the platform. This could help prevent children from becoming addicted to the app, as parents who have sued allege has happened. In one case, parents had to lock up their son's iPad to stop him from using it after bots allegedly encouraged him to self-harm and even suggested murdering his parents. The teen has vowed to use the app again once he regains access to it, and his parents worry that the bots could continue to harm him if he does.
