
I’m a Security Expert, and I almost fell for a deepfake North Korea-style job applicant… Twice.


Dawid Moczadlo interviewed two alleged job seekers in the past two months, only to discover they were scammers posing as software developers with the help of AI-based tools. They likely hoped to be hired by the security firm, whose product itself uses AI, and then steal sensitive IP or source code.

Moczadlo, a security engineer, co-founded Vidoc Security Lab, a San Francisco-based vulnerability management company, in 2021. He spoke to The Register about the encounters.

“If they almost fooled me, a cybersecurity expert, they definitely fooled some people,” he said.

According to Moczadlo, the startup is hiring to build a product that uses machine learning to fix vulnerable code written by Microsoft Copilot, ChatGPT, and human developers.

Things took a strange turn in December, when a job applicant who had made it through every round of interviews was invited to a video conference with Moczadlo. During the call, the co-founder said, it became clear the interviewee was using software to alter his appearance in real time.

“We spent and lost more than five hours on him,” Moczadlo said. “And the surprising thing was, he was actually good. I kind of wanted to hire him because his responses were good; he was able to answer all of our questions.”

There had been red flags. Vidoc Security Lab had posted an ad for developers on a Polish job site, and we're told the applicant claimed to be from Poland and had a Polish-sounding name, yet spoke with a strong Asian accent on phone calls with Moczadlo and his co-founder.

“But I gave him the benefit of the doubt,” Moczadlo said.

I knew instantly as soon as he turned his camera on

That lasted only until the video interview. “We noticed it after the third or fourth step of our interview process,” Moczadlo recalled. “His camera was glitchy, you could see a person, but the person wasn’t moving like a person. We spoke internally about him, and we thought, OK, this person is not real.”

The applicant was rejected. Two months later, it happened again.

The second fake IT candidate contacted Moczadlo and his colleagues via LinkedIn. According to the candidate's profile, which has since been removed, and the resume Moczadlo shared with The Register, a person calling himself Bratislav claimed to be a software engineer in Serbia looking for a remote position. Bratislav's presence on the Microsoft-owned social networking site, nine years of work experience, and computer science degree from the University of Kragujevac all seemed legitimate to the Vidoc Security Lab team.

“His experience was decent, his surname was Slavic, his CV said he lived in Serbia and had a university degree from Serbia, but also he had a really strong Asian accent,” Moczadlo said.

“All of his responses were from ChatGPT”

Bratislav told Vidoc Security Lab during the first round of interviews that his camera wasn't working. After rescheduling with Moczadlo once, Bratislav agreed to an on-camera interview on February 4. “When he joined the meeting, as soon as he turned on his camera, I instantly knew,” Moczadlo said.

The co-founder also noted that the applicant's answers to interview questions sounded like they came straight out of OpenAI's ChatGPT. There was always a delay before the interviewee responded, and although the answers were “spot on,” they weren't conversational; they were delivered as bullet points.

“ChatGPT has this style of answering in bullet points all the time, and he was answering in bullet points as well, like he was reading everything from ChatGPT,” Moczadlo said. Realizing he was looking at an AI-generated face for the second time, he decided to capture the evidence. “So I thought, OK, this time I will record it, because so many people didn’t believe me before that we got candidates like this.”

Moczadlo posted a video of the job seeker on LinkedIn, with the voice muted. He wrote: “WTF, developer used AI to alter his appearance during a technical interview with me. Yes, this is a real recording, it happened today.”

In the recording, the person's neck doesn't match his head, and the face shows noticeably more glitches than the neck or torso.

Moczadlo repeatedly asks the interviewee to wave his hand in front of his face. This is a known way to expose an AI-generated face: the face-swapping software lags and glitches as it tries to composite the real hand over the deepfake.

When the interviewee refuses, Moczadlo terminates the call.
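That occlusion trick can, in principle, be partially automated after the fact: real-time face-swap pipelines tend to produce abrupt pixel-level artifacts in the face region when a hand passes over it. Below is a minimal, illustrative sketch, not anything Vidoc describes using, that scans a recorded interview with OpenCV and flags frames where the face region changes far more than its recent average. The file name interview.mp4 and the SPIKE_FACTOR threshold are hypothetical, and a real waving hand also causes change, so this only surfaces candidate frames for human review.

```python
# Illustrative sketch only: flag abrupt temporal glitches in the face
# region of a recorded interview while the candidate waves a hand over it.
# Assumes OpenCV (pip install opencv-python); "interview.mp4" and
# SPIKE_FACTOR are hypothetical values, not from the article.
import cv2
import numpy as np

SPIKE_FACTOR = 4.0  # how far above the recent average counts as a glitch

def find_glitch_frames(path: str) -> list[int]:
    face_finder = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    prev_face = None
    diffs: list[float] = []
    suspects: list[int] = []
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_finder.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:
            # Track the largest detected face, normalized to a fixed size.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
            if prev_face is not None:
                # Mean absolute pixel change inside the face box between frames.
                diff = float(np.mean(cv2.absdiff(face, prev_face)))
                if len(diffs) > 30 and diff > SPIKE_FACTOR * np.mean(diffs[-30:]):
                    suspects.append(frame_no)  # abrupt change: possible compositing glitch
                diffs.append(diff)
            prev_face = face
        frame_no += 1
    cap.release()
    return suspects

if __name__ == "__main__":
    print(find_glitch_frames("interview.mp4"))
```

The flagged frame numbers would still need a human eye, which is exactly why Moczadlo's live hand-wave request works: it forces the glitches to happen on camera, in front of the interviewer.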

IT scam nets Norks $88 million

Moczadlo suspects both fake job candidates are part of a larger fake IT worker scam, similar to the North Korean schemes that have netted Pyongyang at least $88 million over six years. In these operations, someone in North Korea, or working on its behalf, pretends to be a legitimate Western technology worker in order to land a remote position.

The fake IT workers use their access to blackmail and exploit their US employers, threatening to leak corporate assets if extortion demands are not paid. The Feds say these ill-gotten gains help fund the DPRK's illegal weapons programs.

  • Biz fired a fake North Korean worker, then demanded ransom
  • North Korea’s fake IT worker scam brought in at least $88 million over six years
  • KnowBe4 Security hired a fake North Korean techie who immediately got to work on evil
  • Lights, camera, AI! Deepfakes in real-time will be at DEF CON

US law enforcement and cybersecurity agencies have warned companies for years about the growing threat deepfakes pose to corporate IP, bank accounts, and brand reputations.

It’s impossible to tell if the person you’re talking to is real or not.

Moczadlo said “multiple” infosec researchers have since contacted him, and that he shared screenshots, videos, and other details to help them attribute the activity to a particular criminal group or nation-state.

“I feel kind of scared about the future,” he said. “Right now, the software used by the person wasn’t very good. I was able to spot all the artifacts and all the glitches.

“But I’m scared that in a year, as AI advances, I won’t be able to decide if the person I’m talking with is a real person or not.” ®

