According to new research, cybersecurity leaders aren’t ready to fully embrace AI just yet. The CrowdStrike State of AI in Cybersecurity Survey found mixed feelings about GenAI’s privacy and safety controls, with respondents noting that the technology is still in its infancy and carries significant security risks in its current form. Notably, 80% of respondents said they would prefer GenAI delivered via cybersecurity platforms.
Security leaders are most concerned about sensitive data being exposed to the Large Language Models (LLMs) behind these tools, and about adversarial attacks against GenAI tools themselves. They also worry about GenAI’s tendency to hallucinate, the lack of guardrails, and inadequate public policy regulation.
For security experts, by security experts
Security leaders are taking steps to ensure that GenAI is adopted responsibly: 87% of respondents are either developing or have already implemented new security policies to govern AI adoption.
Are the risks greater than the rewards? Not really. While 39% of cybersecurity professionals believe the benefits outweigh the risks, another 40% think they are comparable, and 21% say the dangers outweigh the rewards.
It’s not surprising that security professionals think GenAI should be built specifically for cybersecurity: 76% of respondents preferred purpose-built solutions over generic, one-size-fits-all tools.
This preference is reflected in the top factors IT workers consider when making a purchase: the ability to improve attack detection and response, boost operational efficiency, and reduce the impact of the IT skills shortage all rank as top priorities.
As generative AI evolves, so does the cyber landscape. GenAI is increasingly being used in automated security solutions and threat detection, and most organizations remain optimistic about AI’s future.