
A Pro-Russian Disinformation Campaign is Using Free AI Tool to Fuel a “Content Explosion”

According to new research published last week, a pro-Russian disinformation campaign is using consumer artificial intelligence tools to fuel a “content explosion” aimed at exacerbating tensions around global elections, Ukraine, and immigration, among other controversial topics.

The campaign, known by many names including Matryoshka and Operation Overload (other researchers have also linked it to Storm-1679), has been operating since 2023 and has been linked to the Russian government by groups including Microsoft and the Institute for Strategic Dialogue. It spreads false narratives by impersonating mainstream media outlets in an apparent attempt to sow division among democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. It has produced hundreds of AI-manipulated videos promoting pro-Russian narratives.

The report shows how, between September 2024 and May 2025, the amount of content produced by those running the campaign increased dramatically, and how that content is being viewed by millions of people around the world.

The researchers identified 230 unique pieces of content promoted by the campaign between July 2023 and June 2024, including pictures, videos, and QR codes. Over the most recent eight-month period, however, Operation Overload churned out a total of 587 unique pieces, the majority of them created using AI tools, the researchers said.

According to the researchers, the surge in content was driven by consumer-grade AI tools that are available online for free. This easy access helped fuel the campaign’s tactic of “content amalgamation,” in which those running the operation use AI tools to produce multiple pieces of content pushing the same story. “This marks an increase in multilingual, more sophisticated propaganda tactics,” researchers from Reset Tech, a London-based nonprofit that tracks disinformation campaigns, and Check First, a Finnish software company, wrote in the report. The campaign has significantly ramped up its production of new material over the past eight months, they said, signaling a shift toward faster, more scalable methods of content creation.

The researchers were also struck by the sheer variety of content the campaign was producing. Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, tells WIRED that she was surprised by the diversity of material the campaign began using. “It’s as if they have diversified the palette to capture as many different angles of these stories,” she says. “They’re layering different types of content, one after the other.”

Atanasova said the campaign did not appear to use any custom AI tools, instead relying on AI-powered image and voice generators that are accessible to everyone. While it was difficult for the researchers to identify all of the tools the campaign operatives used, they were able to narrow down one: Flux AI.

Flux AI is a text-to-image generator developed by Black Forest Labs, a German company founded by former Stability AI employees. Using the SightEngine image-analysis tool, the researchers found that 99 percent of the fake photos shared by the Overload campaign — some of which claimed Muslim migrants were rioting and setting fires in Berlin and Paris — were generated using Flux AI.

A Black Forest Labs spokesperson wrote to WIRED that the company has built in multiple layers of safeguards to prevent illegal misuse, including provenance metadata that allows platforms to identify AI-generated content, and that it supports partners in implementing additional moderation and provenance tools. “Preventing abuse will depend on layers and collaboration between developers, social platforms, and authorities,” the spokesperson said. “We remain committed to supporting this effort.”

Atanasova told WIRED that the images she and her colleagues reviewed did not contain any metadata.

Operation Overload also uses AI to manipulate videos, making it seem as if famous figures said things they never did. The number of videos produced by the campaign jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025. The researchers said the majority of videos produced in the past eight months used AI technology to deceive viewers.

In one instance, for example, the campaign published a video on X in February featuring Isabelle Bourdon, a senior lecturer at France’s University of Montpellier, in which she appeared to encourage German citizens to take part in mass riots and vote for the far-right Alternative for Germany (AfD) party in federal elections. The footage was fake: it was taken from the university’s official YouTube channel, where Bourdon discussed a recent social sciences prize she had won. In the manipulated video, AI voice cloning made it appear as if she were talking about the German elections instead.

The AI-generated content created by Operation Overload is shared by bot accounts across more than 600 Telegram channels as well as on social media platforms such as X and Bluesky. In recent weeks, the content has also been shared on TikTok, where it was first spotted in May. Although the number of accounts was small — just 13 — the videos they posted had been viewed 3 million times before the platform demoted the accounts.

Anna Sopel, a TikTok spokesperson, told WIRED that the platform is “highly vigilant” against actors who attempt to manipulate it, and that the accounts mentioned in the report have been removed. “We detect, disrupt, and work to stay in front of covert influence on a regular basis and report our monthly progress transparently,” Sopel said.

While Bluesky suspended 65 percent of the fake accounts, X has taken minimal action despite repeated reports about the operation and growing evidence of coordination.

After Operation Overload creates a piece of fake, AI-generated content, it does something unusual: it sends emails to hundreds of media and fact-checking organizations around the globe, with examples of the fake content on various platforms, along with requests that fact-checkers investigate whether it is real.

It may seem counterintuitive for a disinformation campaign to alert those trying to combat disinformation to its own efforts. But for the pro-Russian operatives, the ultimate goal is to get their content posted online by a legitimate news outlet, even if it is stamped with the word “FAKE.”

The researchers say that up to 170,000 such emails have been sent to more than 240 recipients since September 2024. The emails typically contained multiple links to the AI-generated material, though the email text itself was not generated using AI, the researchers said.

Pro-Russian disinformation groups have been using AI tools to boost their output for years. Last year, a group called CopyCop, likely connected to the Russian government, used large language models (LLMs) to create fake websites designed to look like legitimate media outlets. These attempts don’t usually attract much traffic on their own, but the accompanying social media promotion can draw attention, and in some cases the fake information can end up at the top of Google’s search results.

According to a recent report by the American Sunlight Project, Russian disinformation networks produce at least 3 million AI-generated articles every year, and that content is poisoning AI-powered chatbots such as OpenAI’s ChatGPT and Google’s Gemini. Researchers have repeatedly demonstrated how disinformation operatives are embracing AI tools, and as it becomes more difficult for people to distinguish real from AI-generated content, experts predict a continued surge in AI content fueling disinformation campaigns.

“They already have a recipe that works,” Atanasova says. “They know exactly what they are doing.”
