Home News Infosec experts divided on AI’s potential to assist red teams

Infosec experts divided on AI’s potential to assist red teams

0
Infosec experts divided on AI’s potential to assist red teams

Infosec experts disagree on AI’s ability to assist red teams (19459000)

CANALYS Forums APAC Infosec experts are divided over whether generative AI is really helpful for red teams raiders who test enterprise system. Infosec professionals simulate attacks in order to identify vulnerabilities. It’s a common tactic that’s been adapted to test generative AI applications. Developers bombard them with a large number of prompts, hoping some will produce problematic results.

The red teams use AI in addition to testing it. IBM’s Red Team told The Register that it used AI in May to analyze data at a major tech company’s IT estate and found a vulnerability in an HR portal. Big Blue’s Red Team said that AI had shortened the time it took to find and target this flaw. Panel prognostications.

The recent Canalys APAC Forum held in Indonesia convened an expert panel to discuss the use of AI for red teaming. They also discussed what it meant to become dependent on AI, and, more importantly, whether it was legal. IBM APAC Ecosystem’s CTO Purushothama Shnoy said that using AI in red teams could be beneficial

He predicted AI would speed up threat hunting by scouring data feeds, apps, and other sources for performance data as part of large automated workflows. Shenoy, however, told us that he is concerned that AI adopters will make the classic error of not considering the risks when they build these systems and other AI applications. Mert Mustafa is the APAC sales partner ecosystem GM at security shop eSentire. Kuo Yoong is the head of cloud for distributor Synnex Australia operations. She warned that generative AI doesn’t always explain how it generates its output. This makes it difficult for a red-team to explain or defend its actions to governance professionals or a court.

“AI can’t go on the stand and explain how it went through those activities to find threats,” Yoong explained.

  • Chinese cloud services target small and medium businesses in APAC to find growth.
  • AI Agent promotes itself as sysadmin and trashes boot sequence.
  • Global CEOs do not expect to see ‘normality’ return before 2022, and are investing in security.
  • Voice enabled AI agents can automate anything, including your phone scams.

Since criminals don’t care about legal concerns, Panelists at Canalys’s event suggested that AI would “transform” improve cyber security.

“We’re going to have to use more and more of it,” claimed Mustafa.

Nishant Jalan from Galaxy Office Automation, director of cybersecurity and network, suggested that there should be limits on the use of generative AI for cyber security in order to prevent overconsumption. He also argued for policies and regulations to govern it.

Perhaps opinions are premature

The Register asked other experts for their opinion and they questioned whether generative artificial intelligence is mature enough to be used by red teams. Email from Matthew Ball, Canalys analyst, to the Reg . The firm plans to do more research on this topic in the coming year. Kevin Reed, CISO of cyber security firm Acronis, told us that he believes AI is not yet ready to join red team but may be suitable for their cousins – penetration testers. Reed explained “Penetration tests focus on finding vulnerabilities in a system or network, testing technical controls and are usually pretty direct, while red teaming is more about testing organizational controls and staying undetected,” . Reed explained that some pentest efforts, which are already underway, have been successful at running commands during specific stages of an attack. However, they struggle with full automation.

“I think current LLMs don’t have enough memory to handle all the context needed,” concluded he. But is it legal?

In terms of legality, Bryan Tan, a partner at Reed Smith, a tech-centric law firm, believes that the right question to ask is: Who is responsible for the generative AI performing the pentest? His guess is that the operator who provides the pentest service is liable.

“This also means the operator (whether the company or its employee) will be the one hauled up to answer questions,” He added. The operator must be sure of what the AI is doing, or at least explain it so that there is transparency.

In regards to AI regulations, he called them “currently at a philosophical level.” . He also pointed out a number countries currently regulate pen tests, meaning that these laws may one day be changed to include AI. (r)

Read More

NO COMMENTS

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Exit mobile version

Notice: ob_end_flush(): Failed to send buffer of zlib output compression (0) in /home2/mflzrxmy/public_html/website_18d00083/wp-includes/functions.php on line 5464