
Microsoft researchers confident about AI security agent despite it allowing 74% of malware to slip through

Microsoft has developed an AI agent that can detect malware on its own.

Project Ire reverse engineers software “without any clues about its origin or purpose” to determine whether it is malicious or benign, using large language models alongside binary analysis and reverse engineering tools, Redmond said in a blog post on Tuesday.

If Project Ire performs at scale and as promised, it will relieve security analysts of the tedious work of manually classifying every sample as good or bad, work that can take hours and lead to alert fatigue and burnout. That tedium also means fewer eyes and brains are focused on the most sophisticated and fast-moving attacks that require immediate detection.

That’s a big if for now.

In real-world testing on about 4,000 “hard-target” files, samples that automated systems could not classify and that would otherwise be manually reviewed by reverse engineers, nearly nine out of ten files (89%) that Project Ire flagged as malicious actually were.

But the agent detected only about a quarter of the malware in the test.

“While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment,” Microsoft’s security engineers wrote.

Microsoft says the prototype will be integrated into its Defender security suite, which spans antivirus, endpoint security, email security, and cloud security, as a binary analyzer for threat detection and classification.

“Our goal is to scale the system’s speed and accuracy so that it can correctly classify files from any source, even on first encounter,” the company said.

AI-based malware analysis is not new. For nearly a decade, vendors such as Cylance have used machine learning to analyze malware files.

Gartner VP Neil MacDonald responded to questions about Project Ire via email.

MacDonald noted that the “relatively high percentage of false positives and false negatives documented in the paper show the limitations of this approach.”

“That’s why in this case, Microsoft highlighted its use in the SOC as part of an incident detection and response process rather than inline as a preventative control,” he added.

That does not mean security companies should not invest in AI, however. “AI, in the hands of the defenders, will be necessary to offset the threat of AI in the hands of the attackers,” MacDonald said.

All aboard

Microsoft’s announcement comes as all the major security companies double down on AI and AI agents, both by integrating them into enterprise tools and by helping companies protect data and people from the myriad threats that AI systems and agents present.

Google is developing its own army of AI agents, including one that analyzes malicious code to determine its threat level. The Chocolate Factory announced the malware analysis agent during its annual Cloud Next event, saying at the time that it would be available as a preview to select Google customers this year.

Palo Alto Networks, meanwhile, announced a deal late last month to buy Israeli company CyberArk. The smaller firm’s identity-security tech, which verifies not only human identities but also machines and artificial intelligences, will be folded into the larger security platform. CyberArk reports that machine identities outnumber human ones by 40 to 1, a ratio expected to grow as more companies adopt AI agents.
