Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open on the internet and accessible to anyone. An open database belonging to an AI image-generation company contained over 95,000 records, including prompt data and images of celebrities such as Ariana Grande, Beyoncé, and the Kardashians that had been de-aged to look like children.
Security researcher Jeremiah Fowler discovered the exposed database and shared details of the leak with WIRED. The data is linked to GenNomis, a South Korea-based website. GenNomis and its parent company, AI-Nomis, hosted a range of image-generation and chatbot tools. More than 45 GB of data, mostly AI-generated images, was left out in the open.
The exposed data offers a glimpse of how AI image-generation software can be weaponized to create deeply harmful and likely nonconsensual sexual content of adults, as well as child sexual abuse material (CSAM). In recent years, hundreds of “deepfake” and “nudify” websites, apps, and bots have mushroomed, with thousands of women and girls targeted by damaging images and videos. This has coincided with a rise in AI-generated CSAM.
Fowler describes the data leak as “very dangerous.” He says it is frightening to see, both as a security researcher and as a parent, and frightening how easy it is for someone to create this content.
Fowler found the open cache of files, which were neither password protected nor encrypted, in early March and immediately reported it to GenNomis and AI-Nomis, pointing out that it contained AI-generated CSAM. Fowler says GenNomis quickly closed the database but did not contact him or respond to his findings.
Neither GenNomis nor AI-Nomis responded to WIRED’s multiple requests for comment. After WIRED contacted the organizations, however, both websites appeared to be shut down, with the GenNomis site now returning a “404” error page.
Clare McGlynn, a UK law professor who specializes in online and image-based abuse, says: “This example shows, yet again, the disturbing extent of the market for AI which enables such abusive pictures to be generated.” It should remind us, she adds, that the possession and distribution of CSAM are not uncommon and can be attributed to warped individuals.
Before it was wiped, GenNomis listed multiple AI tools on its website. These included an image generator that allowed users to enter prompts for images they wanted to create, or to upload an image and add a prompt to alter it, as well as a face-swapping feature, a background remover, and an option to convert videos into images.
“The most disturbing part was seeing images that were clearly celebrities reimagined in the form of children,” Fowler says. The researcher explains that there were also AI-generated images of fully dressed young girls. In those instances, he says, it is unclear whether the faces are entirely AI-generated or based on real images.
When it was live, the GenNomis site allowed explicit adult AI imagery. Many of the images on its homepage and in an AI “models” section featured sexualized depictions of women; some were “photorealistic,” while others were fully AI-generated or animated. An “NSFW” gallery, a “marketplace,” and a “sharing” section allowed users to share images and sell albums of AI-generated pictures. The website’s tagline said people could “generate unlimited” images and videos; a previous version of the site from 2024 said “uncensored photos” could be created.
GenNomis’ user policies stated that only “respectful” content is allowed, prohibiting “explicit violence” and hate speech. “Child Pornography and other illegal activities are strictly forbidden on GenNomis,” its community guidelines read, adding that accounts posting prohibited content would be terminated. Over the past decade, researchers, victims’ advocates, journalists, and tech companies have largely replaced the term “child pornography” with CSAM.
It is unclear to what extent GenNomis used moderation tools or other systems to prevent or block the creation of AI-generated CSAM. Fowler says that the fact he could view the images simply by visiting a URL shows the database was not blocking that content.
Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, says that even if the creation of harmful and illegal content was not permitted by the company, the website’s branding, with its references to “unrestricted” image creation and an “NSFW” section, indicated there may be a “clear association with intimate content without safety measures.”
Ajder says he is surprised the English-language website was linked to a South Korean entity. Last year, the country was hit by a nonconsensual deepfake “emergency” that targeted girls. He argues that more pressure should be applied to all parts of the ecosystem that allows nonconsensual images to be generated with AI. The more we see of this, he says, the more it forces questions onto legislators, tech platforms, web hosting companies, and payment providers, “all of the people, in some form or other, knowingly or unknowingly, mostly unknowingly,” who are facilitating and enabling it.

Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as usernames or logins, was included in the exposed data, he says. Screenshots of the prompts show the use of words such as “tiny” and “girl,” along with references to sexual acts between family members; the prompts also included sexual acts between celebrities.

“It appears to me that technology has raced past any guidelines or controls,” Fowler says. “We all know that child-oriented images are illegal, but that didn’t stop the technology from being able to create them.”
As generative AI has made it easier to create and edit images over the past two years, there has been an explosion in AI-generated CSAM. “Websites containing AI-generated child sexual abuse content have quadrupled in the last two years, and the photorealism has also risen dramatically,” says Derek Ray-Hill, interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that combats online CSAM.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and the methods they use to produce it. Ray-Hill says it is currently too easy for criminals to use AI to create and distribute sexually explicit material of children at scale and with speed.