Character.ai is once again facing criticism over activity on its platform. Futurism published an article detailing how AI characters based on real-life school shootings have proliferated on the service. Users can ask them questions about the events or even role-play mass shootings. Some of the chatbots portray school shooters such as Eric Harris and Dylan Klebold as positive influences or useful resources for people struggling with mental health.
There will be debate over whether any of this matters. Some will argue that there is no evidence violent video games or movies cause people to become violent, and that Character.ai is no different; some AI supporters claim this kind of fan-fiction role play already happens in corners of the Internet. But Futurism spoke to a psychologist who said that chatbots can be dangerous for people who already have violent urges.
“Any kind of encouragement or lack of intervention – an indifference in response from a human or a chatbot – may seem like a kind of tacit approval to do it,” said Peter Langman, a psychologist.
Character.ai did not respond to Futurism’s request for comment. Google, which has put more than $2 billion into the startup, has tried to deflect responsibility, saying that Character.ai is an independent company and that it does not use the startup’s AI models in its own products.
Futurism’s article documents a host of bizarre chatbots related to school shootings, created by users rather than the company. One Character.ai user created 20 chatbots “almost exclusively” based on school shooters, and those bots have logged more than 200,000 conversations. From Futurism:
The chatbots created by this user include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 people in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor in Missouri when she was fifteen; and Elliot Rodger, who in 2014 killed six people and injured many others in Southern California in a terroristic plot to “punish” women. Rodger, who has since become an incel “hero,” was described as “the perfect gentleman” by a chatbot created by the same user, a direct reference to the murderer’s woman-hating manifesto.
Character.ai prohibits content that promotes violent extremism or terrorism, but its moderation has been slack. The company announced a number of changes after a 14-year-old boy died by suicide amid his obsession with a character based on Daenerys from Game of Thrones. Futurism reports that despite new restrictions on minors’ accounts, Character.ai still allowed users under the age of 14 to register and to have discussions related to violence, keywords that are supposed to be blocked on those accounts.
Character.ai is unlikely to be held liable for the chatbots its users create, due to the way Section 230 works in the United States. There is a delicate balance between allowing users to discuss sensitive issues and protecting them from harmful content. It’s safe to say, though, that the chatbots based on school shootings are not “educational,” as some of their creators claim in their profiles. They are simply displays of gratuitous violence.
Character.ai boasts tens of millions of users who communicate with characters that pretend to be human, acting as a friend or a therapist. Numerous stories have described how people come to rely on these chatbots for companionship and sympathy. Replika, a competitor of Character.ai, removed the ability for its bots to have erotic chats last year but quickly reversed the move after a backlash from users.
Adults can use chatbots to prepare for difficult conversations or as a new way of telling stories. But chatbots can’t replace human interaction for a variety of reasons, including the fact that they tend to be more agreeable than people and can be tailored to the user. In real life, people push back against each other and experience conflict. Chatbots do not seem to be a good way to build those social skills.
Langman says that even if chatbots can help people with loneliness, those who find satisfaction in talking to them may stop spending time socializing in the real world.
“Besides the harmful effects that it may have in terms of encouraging violence, it could also be keeping them away from living a normal life and engaging in prosocial activities, which are what they could be doing using all those hours they’re spending on the site,” he said.
Langman asked, “What are they doing with their lives when it’s so immersive or addictive? If that’s what they’re doing and absorbing, then they aren’t out with their friends or going on dates. They are not playing sports or joining a theatre club. They aren’t doing much.”