
Generative AI makes fraud easy


RSAC Spam predates the web, and generative artificial intelligence has given it a fluency boost, churning out slick, localized scams and letting crooks target regions and dialects they used to ignore.

Poor spelling and syntax were traditionally telltale signs of spam such as phishing attempts, but generative AI has eliminated them by taking humans out of the loop. "I've joked about this a few times, but if the grammar and spelling is perfect, it probably is a scam, because even humans make mistakes most of the time," Chester Wisniewski, global field CISO for British security biz Sophos, told The Register at this week's RSA Conference.

AI is also expanding the geographical scope of spam. When humans were the main creators of such content, crooks stuck to common languages to reach the largest audience for the least amount of work. Wisniewski said AI makes it easy to create emails in multiple languages.

The example he gave was from his native Canada. Residents of Quebec, a majority French-speaking province, could often spot scam messages because they were written in standard French rather than Quebecois. AI systems can now produce convincing Quebecois, making it easier to lure victims.

Portuguese-language spam follows a similar trend. Because Brazil's population is roughly 20 times greater than Portugal's, scammers have traditionally written their campaigns in Brazilian Portuguese. Residents of Portugal are now finding it increasingly difficult to distinguish phishing attempts written in their local language style, Kevin Brown, chief operating officer at security consultancy NCC Group, told The Register.

“What is all the phishing training that we’ve done over the years? The obvious things, the poor grammar, the urgency, the obvious. Overnight AI has said, ‘You know what, I’m going to write something that is written in good language, with good punctuation, and it will be written in a local language.'”

AI chatbots are also highly effective at convincing victims that they are being wooed, at least at the beginning.

Wisniewski said AI chatbots can handle the initial phases of scams, expressing interest and appearing empathetic. A human operator then takes over and begins extracting funds from the mark, either by asking for financial assistance or by luring them into Ponzi schemes.

Do not believe anything you hear

Wisniewski said audio versions of AI avatars are already being used to trick victims inside companies. Scammers could, for example, call everyone on a support team using an AI-generated voice that mimics someone in the IT department and ask for a password, until they find a victim.

"You can do real-time audio deepfakes for pennies," he said.

Wisniewski, however, expressed skepticism about real-time video deepfakes. He pointed to a widely reported case from last February in which a Hong Kong worker was allegedly tricked into transferring $25 million to scammers by a video call featuring a deepfaked CFO. He said it was more likely that someone pressed the wrong key and blamed the trend rather than admit incompetence.

"If we follow the same trajectory as the audio deepfakes, we're two years away from criminals being able to afford it," Wisniewski said.

The big AI companies, with their billion-dollar budgets, have yet to solve the challenge of creating convincingly animated real-time video avatars, so it isn't realistic to think criminals can build such a model yet, Wisniewski said. But it's just a matter of time. Brown disagreed, saying NCC Group's pentesters have already had some success with video fakery. "We are able to do that, but it will become industrialized in due course," he said.

Brown and Wisniewski both agreed there will be an urgent need for personal verification of communications beyond existing systems. ®

www.aiobserver.co
