Delve Into AI, every Thursday at noon (WAT), provides nuanced insight into how the continent's AI trajectory is taking shape. This column examines how AI impacts culture, policy, and business, and vice versa. Learn about the people, questions, and projects shaping Africa's AI future.
On June 27, 2025, Nigerian content creator Asherkine posted an innocent-looking video in which he asked a young woman out after she revealed she was single. After the post went viral, an anonymous Snapchat user named 'Kenny' started spreading a fake narrative, claiming that the woman in the video was Ifeme Rebecca Yahoma, a student at the University of Nigeria, Nsukka. He created deepfakes by manipulating photos she had shared online. In one photo, Yahoma has her head slightly slanted and her lips pouted; in the black-and-white image 'Kenny' created from it, he doctored his own face, covered by an emoji, to make it look as if he is pecking Yahoma. Kenny gained tens of thousands of followers from the lie, and even ran Snapchat ads as his virality surged.
The incident is not an anomaly; it is the tip of a digital iceberg. Online users in Nigeria and other African nations are using AI tools, such as X's Grok, to manipulate, sexualise, or humiliate women. What used to be crude Photoshop work is now photorealistic deepfakery, produced by feeding sexually explicit prompts to AI systems. The results are disturbingly lifelike.
If they won't send you nudes, create them
Previously, an X user had asked Grok to strip Nigerian actress Kehinde Bankole. The post, which was widely circulated before being removed, sent shockwaves through her fanbase and alarmed X users. If Bankole, a well-known public figure, isn't protected from harassment on X, everyday women are even less so.
Gbenga, a self-identified mental health consultant who made these prompts, later posted an apology on X, saying he felt "a profound sense of remorse" for not handling the matter with the care it warranted. Grok itself has been accused of making antisemitic comments, among other gaffes. These incidents leave victims with emotional damage, reputational smears, and very little legal recourse.
Users on X are prompting the platform’s chatbot, Grok, to undress women or alter their bodies
Are there legal protections that apply?
When asked if new laws are necessary to address how a rapidly evolving technology like AI is enabling abuse in manners previously unseen, legal policy analyst Sam Eleanya says it must be approached with more nuance. “The premise that we need laws to regulate every new advancement in technology is not always sound,” he said. “The better question is: how does this new tool fit into the existing legal order?”
According to Eleanya, using tools like Grok to portray a woman in sexual or compromising ways without her consent could amount to criminal defamation under sections 373–375 of the Criminal Code Act. Section 408 could be used to charge people who demand money, intimidate, blackmail, extort, or conspire to do so. The Cybercrimes Act of 2015 covers acts such as cyberstalking, image-based abuse, and identity theft. All of these could apply to these new scenarios. Yet according to Queen-Esther Ifunanya Emma-Egbumokei, a corporate lawyer specialising in international commercial law and the creative industry, holding perpetrators accountable remains difficult in several ways.
Prosecution is often complicated by jurisdictional ambiguities: it is not always clear where a perpetrator should be tried. Users often operate anonymously or hide behind pseudonymous profiles, making it difficult to verify identities and initiate legal processes.
Emma-Egbumokei also notes that Nigeria, like many African countries, does not have AI-specific regulations that clearly define and criminalise the misuse of generative tools for harassment or defamation. In the absence of such targeted frameworks, enforcement is further weakened and victims are left in a legal grey area.
The hidden cost
Nigerian women are rethinking their use of social media platforms, fearing that users may alter their photos inappropriately. "Not posting my photos on here before they use AI [to] remove my hijab," wrote X user @wluvnana.
Although some users found humour in the post, she was expressing a serious concern about a trend she had begun to notice.
"The only thing that makes sense is to remove myself from such situations. This is absurd," she told TechCabal, adding that she used to post pictures on X whenever she wanted to, but no longer feels free to do so.
Harassment using generative AI is not limited to doctored images and videos; it can also take text-based forms, explains Chioma Agwuegbo, executive director of TechHerNG, an organisation that supports women through digital literacy.
There are platforms where you can simulate WhatsApp, Snapchat, and TikTok conversations, Agwuegbo explains. "The word generative implies the creation of content, or media."
Combined with a culture of shame surrounding sex and homosexuality, and a growing inability to assess the accuracy of online media, it doesn't take much to do lasting damage. "That's because the easiest thing to say about a girl is 'I had sexual relations with her' or 'I dumped her,'" Agwuegbo explains, and AI tools make it easier to fabricate these narratives.
The problem is not only that generative AI can be misused; it is also the lack of adequate mechanisms to detect misuse, especially in the Nigerian context.
X once had a larger Trust & Safety team dedicated to content moderation. According to Australia's eSafety Commissioner, the team's headcount has decreased by roughly 30% since the October 2022 acquisition. There is currently no regional office in Africa dedicated to local content moderation; the office opened in 2021 to serve African audiences was closed shortly after Elon Musk's takeover.
These platforms also rely on AI to handle the heavy lifting in moderation. It's cheaper and faster to use automated tools than to hire a large pool of human reviewers. But these tools can be inaccurate and lack nuance, especially when trained on limited data for local languages or specific regions.
When X users began tagging Grok under photos of Nigerian women with prompts like 'turn her around' or 'show her backside', some users tried to fight back by mass-reporting the accounts. They were repeatedly told that the accounts did not violate community guidelines.
"I kept asking: if this doesn't break your community guidelines, what does?" says Jessica Eni, a policy associate at TechSocietal who participated in the mass reporting. These platforms are unable to recognise gendered harms unique to the Nigerian context, she says; they cannot recognise local memes or the slang used by X's 'banger boys'. There is a huge gap in content moderation.
What other users can do
Although it may be difficult to regulate AI-assisted sexual harassment on social media platforms like X, there are steps that everyday social media users can take, believes Vivian Nnabue, a social media assistant.
Nnabue, who saw the posts prompting Grok to undress women on X, made a LinkedIn post with screenshots, tagged the perpetrators' accounts, and requested that they be held responsible.
She did not stop at the LinkedIn post. She says she reported the perpetrators to their employers, and for the one still in school, sent reports to his institution. No one responded.
Instead, the perpetrators reached out to her via her DMs. Gbenga, one of them, apologised for his actions, adding that he nearly lost a job when a prospective employer found Nnabue's LinkedIn post while conducting a background check. "My inclination wasn't nudity. [I] just wanted to show off my tech knowledge," Gbenga wrote to Nnabue.
Beyond flagging
Even if platforms address their blind spots, advocates like Eni worry about Nigeria's laws. The country lacks a robust, unified framework to protect people from these new forms of gendered harassment. Existing laws such as the Data Protection Act and the Cybercrimes Act are limited, and Agwuegbo and other advocates are growing concerned that these laws may be misused to suppress free expression or dissent.
"Yes, there is a Cybercrime Act. But it is vague and heavy-handed," Agwuegbo explains; because it is so ambiguous, it can be twisted however a person with power or wealth wants. The Cybercrime Act is currently being used in suits against Senator Natasha Akpoti, a female senator who spoke out against sexual harassment by the Senate President. In September 2023, the same Act was used against a woman who had left negative reviews about a can of tomato paste made by Nigerian brand Erisco Foods.
What we need is something similar to an Online Safety Act, one that holds both citizens and Big Tech accountable. First, it must define its terms: what do we mean by the internet? What is the scope of digital platforms? Agwuegbo continues, "We need clear responsibilities when it comes to things like takedowns and protecting young people."
The Nigerian government is drafting a new Online Harms Protection Bill, spearheaded by the National Information Technology Development Agency. TechSocietal and other civil society groups are pushing for the bill to combat new forms of technology-enabled abuse, especially those amplified by AI tools.
For now, many Nigerians are quietly adapting, sharing less on social networks as this era of AI-enabled harassment continues to unfold. The burden of staying safe online will keep falling on those most at risk until stronger rules and platform safeguards catch up with the technology.
Please let us know what you think of this column, and any other AI-related topics you would like us to explore. Fill out this form here!
Mark the dates! Moonshot by TechCabal will be back in Lagos, October 15-16. Join Africa’s leading founders, tech leaders, and creatives for 2 days of keynotes. Early bird tickets are now 20% off — don’t sleep! moonshot.techcabal.com

