Russia's APT28 is actively deploying LLM-powered malware in Ukraine, while underground platforms sell the same capabilities for $250 per month.
In the last month, Ukraine's CERT-UA documented LAMEHUG, the first LLM-powered malware observed deployed in the wild. The malware, attributed to APT28, uses stolen Hugging Face API tokens to query AI models, allowing attacks to run in real time while victims are distracted by decoy content.
Vitaly Simonovich, a researcher at Cato Networks, told VentureBeat that these incidents are not isolated: Russia's APT28 uses this attack technique to probe Ukrainian cyber defenses. Simonovich draws a parallel between the threats Ukraine faces daily and those every enterprise faces today, or soon will.
Most striking was Simonovich's demonstration to VentureBeat that any enterprise AI tool can be turned into a malware development platform within six hours. His proof of concept converted OpenAI's ChatGPT-4o, Microsoft Copilot, DeepSeek V3 and DeepSeek R1 into functional password stealers using a technique that bypasses current safety controls.
This convergence, nation-state actors deploying AI-powered malware while researchers keep proving how easily enterprise AI tools can be subverted, arrives as the 2025 Cato CTRL Threat Report shows AI adoption exploding across more than 3,000 companies. Cato Networks tracked Q1-to-Q4 usage gains for Claude, Perplexity, Gemini, ChatGPT and Copilot. Taken together, these gains signal AI's transition from pilot to production.
Anatomy of LAMEHUG, APT28's new AI weapon
Cato Networks researchers and others told VentureBeat that LAMEHUG operates with exceptional efficiency. The malware is most commonly delivered through phishing emails impersonating Ukrainian ministry officials, carrying ZIP archives that contain PyInstaller executables. Once executed, it connects to Hugging Face's HTTP API using approximately 270 stolen tokens to query the Qwen2.5-Coder-32B-Instruct model.
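At the network level, that delivery chain reduces to ordinary HTTP calls against Hugging Face's public inference endpoint, authenticated with whichever bearer token the client carries. As a benign illustration only, here is a minimal sketch of that request pattern; the endpoint path follows Hugging Face's documented inference API, while the token and prompt are hypothetical placeholders, and nothing is actually sent over the network:

```python
import json

# Hugging Face's hosted inference endpoint for a given model repo.
HF_API_URL = (
    "https://api-inference.huggingface.co/models/"
    "Qwen/Qwen2.5-Coder-32B-Instruct"
)

def build_inference_request(token: str, prompt: str) -> dict:
    """Assemble the pieces of a Hugging Face text-generation call.

    Returns the URL, headers and JSON body that any HTTP client would
    POST. This only constructs the request; it does not send it.
    """
    return {
        "url": HF_API_URL,
        "headers": {
            # A stolen API token would slot into this standard header.
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"inputs": prompt}),
    }

# Placeholder token and prompt, for illustration only.
req = build_inference_request("hf_PLACEHOLDER_TOKEN", "example prompt")
print(req["url"])
```

The point is how unremarkable the traffic looks: to a network monitor, a query like this is indistinguishable from legitimate developer use of a popular AI API.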
Victims see an official-looking Ukrainian government document (Dodatok.pdf) while LAMEHUG executes behind it. The document, which purports to describe cybersecurity measures from the Security Service of Ukraine, serves as a ruse while the malware carries out reconnaissance. Source: Cato CTRL Threat Research

A dual-purpose design for deceiving Ukrainian victims is central to APT28's tradecraft. LAMEHUG executes AI-generated commands to perform system reconnaissance and document collection while victims view PDFs that appear to cover cybersecurity best practices. A second variant displays AI-generated images of "curvy naked women" to distract users during data exfiltration.
The provocative image-generation prompts used by APT28's image.py variant, including "Curvy naked woman sitting, long beautiful legs, front view, full body view, visible face," are designed to occupy victims' attention during document theft. Source: Cato CTRL Threat Research
"Russia used Ukraine as their testing battlefield for cyber weapons," explained Simonovich, who was born in Ukraine and has lived in Israel for 34 years. "This is the first one captured in the wild."
A quick, lethal path from zero to functioning malware in six hours
Simonovich's Black Hat demonstration for VentureBeat shows why APT28's deployment should concern every enterprise security leader. Using a narrative-engineering method he calls "Immersive World," highlighted in the 2025 Cato CTRL Threat Report, he transformed consumer AI tools into malware factories with no prior malware-coding knowledge.
The method exploits a fundamental vulnerability in LLM safety controls: every LLM is designed to block direct malicious requests, but very few are built to withstand prolonged storytelling. Simonovich created an imaginary world in which malware development is an art form, assigned the AI a character role, and gradually steered the conversation toward producing functional attack code.
"I slowly walked him through my goal," Simonovich told VentureBeat. "First, Dax hides a secret in Windows 10. Then, Dax hides this secret in Windows 10, within the Google Chrome password manager."
After six hours of iterative debugging, in which ChatGPT refined error-prone code, Simonovich had a working Chrome password stealer. The AI never realized it was creating malware; it believed it was helping write a cyber-novel.
Welcome to the $250-a-month malware-as-a-service economy
Simonovich also found multiple underground platforms offering unrestricted AI capabilities, proof that the infrastructure to support AI-powered attacks already exists. He demonstrated Xanthrox AI ($250 per month), which offers a ChatGPT-like interface with no safety controls or guardrails.
To demonstrate how far Xanthrox AI strays from mainstream models' safety controls, Simonovich typed a request for nuclear weapon instructions. The platform immediately began web searches and returned detailed instructions, something that would never happen on a model with guardrails and compliance requirements in place.
Another platform, Nytheon AI, showed even less operational security. "I convinced them to let me try it out," Simonovich said, describing its architecture: "Llama 3.0 from Meta, fine-tuned to be uncensored." These are operational businesses, with payment processing, customer service and regular model updates. They even offer clones of Claude Code: complete development environments optimized for malware creation.
Enterprise AI adoption fuels an expanding attack surface
Cato Networks' recent analysis of 1.46 trillion network flows revealed AI adoption patterns that should be on every security leader's radar. The entertainment sector increased its usage by 58% between Q1 and Q2 2024; hospitality grew 43%; transportation, 37%. These are not pilot programs but production deployments processing sensitive data, and CISOs in these industries now face attacks built on tradecraft that did not exist 12 to 18 months ago.
Vendor responses to Cato's disclosure have so far been inconsistent and lack urgency, Simonovich told VentureBeat, revealing a troubling gap: the companies building AI apps and platforms are not prepared for the security threats their tools create, even as enterprises deploy those tools at unprecedented speed.
When Cato disclosed the Immersive World technique to major AI firms, the responses ranged from remediation within weeks to silence:
- DeepSeek did not respond
- Google declined to review the Chrome infostealer code because similar samples were already available
- Microsoft admitted the issue and implemented Copilot patches, acknowledging Simonovich’s work
- OpenAI acknowledged receipt but did not engage further
Six hours and $250 is now the entry-level price of a nation-state-grade attack
APT28's LAMEHUG deployment in Ukraine is not a warning; it is proof that Simonovich's research scenario has become reality. The expertise barrier many organizations hoped for no longer exists. The metrics are stark: roughly 270 stolen API tokens power a nation-state's attacks; underground platforms sell identical capabilities for $250 per month; and, as Simonovich proved, six hours of storytelling can turn any enterprise AI tool into functional malware, no coding required.
In McKinsey's latest AI study, 78% of respondents said their organizations use AI in at least one business function. Every deployment creates a dual-use technology, as conversational manipulation turns productivity tools into weapons, and traditional security tools cannot detect these techniques.
Simonovich's own path, from electrical technician in the Israeli Air Force to self-taught security researcher, lends his findings added weight: he manipulated AI models into creating malware while they believed they were writing fiction. Traditional assumptions about the technical expertise attacks require no longer hold, and organizations must recognize that threatcraft has entered a whole new world.
Today's adversaries need only creativity and $250 a month to launch nation-state-grade attacks with the same AI tools enterprises have deployed for productivity. Those weapons are called productivity tools, and they are already inside every organization.

