d Claude AI and other system could be vulnerable to worrying Command Prompt Injection Attacks - aiobserver.co
Home AI Companies News Anthropic Claude AI and other system could be vulnerable to worrying Command Prompt...

Claude AI and other system could be vulnerable to worrying Command Prompt Injection Attacks

0
Claude AI and other system could be vulnerable to worrying Command Prompt Injection Attacks
(Image credit: Mark Pickavance)
GenAI can also be tricked into writing, compiling, and running malware.

Anthropic released Claude Computer Use in mid-October 2024. It is an Artificial Intelligence model that allows Claude to control a computer. Researchers have already found a method to abuse it.

Cybersecurity research Johann Rehnberger described how he was recently able to abuse Computer Use to get it to download and run malicious software, as well as to communicate with the C2 infrastructure.

Although it sounds terrible, there are a couple of things worth mentioning: Claude Computer Use, which is still in beta stage, has a disclaimer that Computer Use may not behave as intended.

“Countless ways” To abuse AI

Rehnberger refers to his exploit as ZombAIs and says that he was able get the tool to download Sliver. Sliver is a legitimate open-source command-and-control framework developed by BishopFox to be used for red teams and penetration testing. However, it is often misappropriated by cybercriminals and used as malware.

Threat agents use Sliver in a similar manner to other C2 Frameworks such as Cobalt Strike to establish persistent access, execute commands, manage attacks, and more. Rehnberger also stressed this is not the only method to abuse generative AI and compromise endpoints through prompt injection.

He said that there are countless other ways to abuse generative AI tools, and compromise endpoints via prompt injection. “Claude can also write the malware from zero and compile it,” he added. “Yes, the malware can write C code and compile it. It can also run it.”

Sign up for the TechRadar Pro Newsletter to get the latest news, opinions, features, and guidance that your business needs to be successful!

There are many other options.

The article states, The Hacker News ( ) added DeepSeek AI Chatbot was also vulnerable to a rapid injection attack, which could allow threat actors take over victim computers. Moreover, Large Language Models can output ANSI code which can be used in an attack dubbed Terminal DiLLMa to hijack system terminals by prompt injection.

The best endpoint protection tools are available right now.

Sead, a freelance journalist with years of experience, is based in Sarajevo. He writes about IT, cybersecurity, and 5G (ransomware and data breaches). He has written for many media outlets in his career spanning over a decade. This includes Al Jazeera Balkans. He has also taught several modules on writing content for Represent Communications.

Most Popular (

)
www.aiobserver.co

NO COMMENTS

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Exit mobile version