It’s just a matter of time before LLMs start supply-chain attacks


Interview Now that criminals have realized it is much easier and cheaper to steal credentials and jailbreak existing LLMs than to build their own, the threat of a large-scale supply-chain attack using generative AI has become more real.

We’re not talking about a fully AI-generated attack chain, from initial access through to the shutdown of business operations – criminals aren’t that technologically advanced yet. But LLMs have become very adept at assisting with social engineering campaigns.

Crystal Morin, a former intelligence analyst for the US Air Force and now a cybersecurity strategist at Sysdig, predicts that 2025 will bring highly successful supply chain attacks that originate with spear phishes generated by LLMs.

Morin told The Register that LLM-assisted spear phishing is already here. “We’re in a footrace right now. It’s machine against machine.”

In 2024, Sysdig and other researchers documented a rise in criminals using stolen cloud credentials to gain access to LLMs. In May, the container security firm documented attackers targeting Anthropic’s Claude LLM.

Although they could have used this access to extract LLM data, their primary goal appeared to be selling access to other criminals – leaving the cloud account owner to pay the price, at up to $46,000 per day in LLM consumption costs.

The researchers found that the script used in the attack could check credentials for ten different AI services, including AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, and OpenRouter.

We’re in a footrace right now. It’s machine versus machine

Later in the year, Sysdig detected attackers attempting to use stolen credentials to enable LLMs in victims’ cloud accounts.

The threat research team calls any such attempt to illegally access a model “LLMjacking,” and in September it reported that these attacks were “on the rise, with a 10x increase in LLM requests during the month of July and 2x the amount of unique IP addresses engaging in these attacks over the first half of 2024.”

This can cost victims a lot of money. Sysdig reports that the bill can exceed $100,000 a day when an organization uses newer models such as Claude 3 Opus.

On top of that, victims must pay for the people and technology needed to stop these attacks, and there’s a risk that enterprise LLMs could be weaponized, resulting in still more costs.

Is 2025 the year of LLM phishing?

“The greatest concern is with spear phishing and social engineering,” Morin said. “There’s endless ways to get access to an LLM, and they can use this GenAI to craft unique, tailored messages to the individuals that they’re targeting based on who your employer is, your shopping preferences, the bank that you use, the region that you live in, restaurants and things like that in the area.”

This helps attackers overcome language barriers, and makes messages sent via email or social media messaging apps appear even more convincing because they are specifically tailored to the individual victim, Morin continued. “So that will enable their success quite a bit. That’s how a lot of successful breaches happen. It’s just the person-on-person initial access.”

She cited the Change Healthcare ransomware attack as an example of how damaging a 2024-style breach can be.

In that case, a ransomware crew locked up Change Healthcare’s systems, disrupting thousands of pharmacies, hospitals, and other institutions across the US and accessing private data belonging to around 100 million individuals. It took the healthcare payments giant nine months after the attack to restore its clearinghouse services.

This will be a small, simple part of the attack chain that could have a massive impact

“Going back to spear phishing: imagine an employee of Change Healthcare receiving an email and clicking on a link,” Morin stated. “Now the attacker has access to their credentials, or access to that environment, and the attacker can get in and move laterally.”

If and when we see this type of GenAI assistance, “it will be a very small, simple portion of the attack chain with potentially massive impact,” she added.

While established companies and startups alike are releasing security software that uses AI to detect and stop email phishing, there are simple steps anyone can take to avoid falling victim to phishing of any type. “Just be careful what you click,” Morin warned.

Before you click, stop and think

Pay attention to who sent the email. “It doesn’t matter how good the body of the email might be. Did you look at the email address and it’s some crazy string of characters or some weird address like name@gmail but it says it’s coming from Verizon? That doesn’t make sense,” she added.

LLMs can also help criminals create lookalike domains – slightly altered alphanumeric variants of well-known, legitimate company names – and craft prompts that make the sender appear more credible. Morin also believes AI will make voice-call phishing harder to spot.
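The lookalike-domain trick Morin describes can also be checked for mechanically. The sketch below is a minimal, hypothetical illustration (not any vendor's product) of flagging sender domains that nearly match a known brand domain without being it; the allowlist and similarity threshold are illustrative assumptions, and it uses only Python's standard library.

```python
# Hypothetical sketch: flag sender domains that look suspiciously close to
# (but are not exactly) a known, legitimate brand domain.
# The allowlist and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_DOMAINS = {"verizon.com", "paypal.com", "chase.com"}  # example allowlist


def lookalike_score(domain: str, legit: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, domain.lower(), legit.lower()).ratio()


def is_suspicious(sender: str, threshold: float = 0.8) -> bool:
    """True if the sender's domain nearly matches a known brand without being it."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_DOMAINS:
        return False  # exact match to a legitimate domain
    return any(lookalike_score(domain, legit) >= threshold
               for legit in KNOWN_DOMAINS)


print(is_suspicious("support@verizon.com"))   # exact legitimate domain
print(is_suspicious("billing@ver1zon.com"))   # one-character swap
print(is_suspicious("friend@example.org"))    # unrelated domain
```

Real mail filters combine checks like this with SPF/DKIM/DMARC validation and homoglyph normalization; a plain edit-distance ratio is only a first-pass heuristic.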

“But they use voice cloning now, too,” Morin continued. “And most of the time when people answer the phone, especially if you’re driving or something, you’re not actively listening, or you’re multitasking, and you might not catch that this is a voice clone – especially if it sounds like someone that’s familiar, or what they’re saying is believable, and they really do sound like they’re from your bank.”

We got a preview of this during the run-up to the 2024 US presidential election, when AI-generated robocalls impersonating President Biden encouraged voters not to vote in New Hampshire’s primary election.

Since then, the FTC has offered a $25,000 prize for the best way to combat AI voice cloning, and the FCC has declared AI-generated robocalls illegal.

Morin doesn’t expect this to deter criminals. “If there’s a will, there’s a way,” she said. “If it costs money, then they’ll figure out a way to get it for free.” ®
