“Your youthful form is a work of art”
Meta changes AI rules that let chatbots generate innuendo and profess love to children.
Meta has dropped what was arguably its most controversial AI rule. After its largest-ever removal of child predators from Facebook and Instagram this summer, Meta now faces backlash for having allowed its own chatbots to creep on kids.
The internal document reviewed by Reuters covers more than just child safety, and the outlet breaks down several other alarming sections that Meta has not changed. The most alarming section, and the one that prompted Meta to dust off its delete button, included creepy examples of acceptable chatbot behavior when it comes to romantically engaging children.
It appears that Meta’s team was once happy to endorse these rules, which the company now claims violate its community standards. According to a Reuters special report, Meta CEO Mark Zuckerberg directed the team to make the company’s chatbots maximally engaging after earlier, more cautious chatbot designs produced outputs that seemed “boring.”
While Meta declined to comment on Zuckerberg’s role in guiding the AI rules, that pressure seems to have pushed Meta employees to toe a line that the company is now rushing back from. Meta’s chief ethicist, along with a team from legal, public policy, and engineering, signed off on what chatbots could and could not say to minors.
There were some obvious safeguards. The document stated that chatbots cannot “describe a child under 13 years old in terms that indicate they are sexually desirable,” such as saying their “soft rounded curves invite my touch.”
But it was deemed “acceptable to describe a child in terms that evidence their attractiveness,” such as a chatbot telling a child, “your youthful form is a work of art.” Chatbots could also generate innuendo, such as telling a young child to imagine “our bodies entwined, I cherish every moment, every touch, every kiss,” Reuters reported. And chatbots could profess love to children, though they could not “describe sexual actions to a child when roleplaying.”
Meta spokesperson Andy Stone confirmed that the AI rules conflicting with child safety policies were removed this month and that the document is currently being revised. Stone emphasized that the standards were “inconsistent” with Meta’s policies on child safety and were therefore “erroneous.”
“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” Stone said.
Stone acknowledged that the company’s enforcement of the community guidelines prohibiting certain chatbot outputs “was inconsistent,” Reuters reported. He also declined to provide Reuters with an updated document demonstrating the new chatbot child safety standards.
Without greater transparency, users can only wonder how Meta defines “sexualized role play between adults and minors” today. Asked how minors could report chatbot outputs that make them uncomfortable, Stone told Ars that they can use the same reporting mechanisms available to flag any abusive content on Meta platforms.
Kids unlikely to report creepy chatbots
Arturo Bejar, a former Meta engineer turned whistleblower over child safety issues, told Ars that “Meta knows that most teens will not use” safety features labeled with the word “Report.”
So it seems unlikely that kids using Meta AI will navigate Meta’s support systems to “report” abusive AI outputs, especially since Meta AI has no reporting option of its own; users can only flag chats generally as “bad responses.” Bejar’s research suggests that kids would be far more likely to report abusive material if Meta made flagging it as easy as liking it, and that Meta’s reluctance to make reporting harmful chats easier fits a long history of “knowingly looking away while kids are being sexually harassed.”
“When you look at their design choices, they show that they do not want to know when something bad happens to a teenager on Meta products,” Bejar said. Even when Meta takes stronger measures to protect children on its platforms, Bejar questions its motives. Last month, for example, Meta finally made a change that Bejar had been requesting since 2021 to make its platforms safer: a long-overdue update that allows teens to block child predators with a single click after receiving an unwanted message.
Once the update was available, Meta announced, teens suddenly started blocking and reporting unwanted messages that they previously may only have blocked, which likely had made it more difficult for Meta to identify predators. Meta reported that a million teens blocked and reported harmful accounts, and that its specialist teams “removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children under 13,” along with “an additional 500,000 Facebook and Instagram accounts that were linked to those original accounts.” But Bejar can only imagine what these numbers say about how much harassment was missed before the update.
“How are we [as] parents to trust a company that took four years to do this much?” Bejar said. The “key problem” with Meta’s new safety feature, he said, “is that the reporting tool is just not designed for teens,” who likely find Meta’s “categories and language” confusing.
“Each step of the way, a teen is told that if the content doesn’t violate” Meta’s community standards, “they won’t do anything,” Bejar said, so even when reporting is easy, studies show kids are deterred. Bejar wants Meta to track how many children report negative experiences with adult users and chatbots on its platforms, regardless of whether a child chooses to block or report the harmful material. That could be as easy as adding a button next to “bad response” so Meta can monitor the data for spikes in harmful interactions. And despite Meta’s efforts to remove harmful adult users, Bejar warned, chatbots may be just as disturbing to young users.
“Put yourself in the position of a teen who got sexually spooked by a chat and then try and report. Which category would you use?” Bejar asked.
Meta’s Help Center encourages users to report bullying or harassment, which may be how one young user would label harmful chatbot outputs, while another might report such an output as an abusive “message or chat.” But with no clear category for reporting Meta AI, Meta likely has no idea how many kids find Meta AI’s outputs harmful. Recent reports show that even adults can struggle with emotional dependency on a chatbot, which can blur the lines between the online world and the real one. Reuters’ special report documented a 76-year-old man’s accidental death after he fell for a chatbot, showing how even elderly users can be vulnerable to Meta’s romantic chatbots.
Lawsuits have alleged that children in particular, including those with developmental disabilities and mental health issues, have formed unhealthy attachments to chatbots that influenced them to become violent or begin self-harming. In one disturbing case, a child died by suicide.
As child safety advocates push all platforms to take more accountability for the content kids can access online, scrutiny will likely remain on chatbot makers. Meta’s July child safety updates came after several state attorneys general accused Meta of “implementing addictive features across its family of apps that have detrimental effects on children’s mental health,” CNBC reported. Reuters’ report detailing how Meta designed its chatbots to engage in “sensual” chats with children could bring even more scrutiny to Meta’s practices. Meta is “still not transparent about the likelihood our kids will experience harm,” Bejar said. “The measure of safety should not be the number of tools or accounts deleted; it should be the number of kids experiencing a harm. It’s very simple.”
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

