
The modern ROI imperative: AI deployment, security and governance


Ahead of the TechEx North America event on June 4-5, we’ve been lucky enough to speak to Kieran Norton, Deloitte’s US Cyber AI & Automation leader, who will be one of the speakers at the conference on June 4. With 25+ years in the sector, Kieran speaks authoritatively on all matters cybersecurity, and his most recent roles include advising Deloitte clients on the cybersecurity issues that arise when AI is used in business applications.

The majority of organisations have at least the bare minimum of cybersecurity in place, and thankfully, in most cases, operate a decently comprehensive raft of measures covering communications, data storage, and perimeter defences.

However, in the last couple of years, AI has changed the picture, both in terms of how companies can leverage the technology internally, and in how AI is used in cybersecurity – in advanced detection, and in the new ways the tech is used by bad actors.

On the one hand, AI can be used in network anomaly detection and the smart spotting of phishing messages, among other uses. On the other, AI means that the enterprise has to be proactive to ensure the technology is used responsibly, balancing the innovation AI offers with privacy, data sovereignty, and risk.

AI, smart automation, data governance and security together still occupy a relatively new, niche area. But given the growing presence of AI in the enterprise, those niches are set to become mainstream concerns: problems, solutions, and advice that every organisation will need to address, sooner rather than later.

Governance and risk

Integrating AI into business processes isn’t solely about the technology and methods for its deployment. Internal processes will need to change to make best use of AI, and to better protect the business that’s using AI daily. Kieran draws a parallel to earlier changes made necessary by new technologies: “I would correlate [AI] with cloud adoption where it was a fairly significant shift. People understood the advantages of it and were moving in that direction, although sometimes it took them more time than others to get there.”

Those changes mean casting the net wide: updating governance frameworks, establishing secure architectures, even leveraging a new generation of specialists to ensure AI and the data associated with it are used safely and responsibly. Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where, and by whom, AI is used, and more. As Kieran puts it: “You probably weren’t doing a lot of testing for hallucination, bias, toxicity, data poisoning, model vulnerabilities, etc. That now has to be part of your process.”

These are big subjects, and for the fuller picture, we’d encourage readers to attend the two talks Kieran is due to give at TechEx North America. He’ll be exploring both sides of the AI coin – the issues around deploying AI in the business, and the methods companies can implement to deter and detect the new breed of AI-powered malware and attack vectors.

The right use-cases

Kieran advocates that companies start with smaller, lower-risk AI implementations. While some of the first sightings of AI ‘in the wild’ have been chatbots, he was quick to differentiate between chatbots, which can intelligently answer questions from customers, and agents, which can take action by triggering interactions with the apps and services the business operates. “So there’s a delineation […] chatbots have been one of the primary starting places […] As we get into agents and agentic, that changes the picture. It also changes the complexity and risk profile.”

Customer-facing agentic AI instances are indubitably higher risk, as a misstep can have significant effects on a brand. “That’s a higher risk scenario. Particularly if the agent is executing financial transactions or making determinations based on healthcare coverage […] that’s not the first use case you want to try.”

“If you plug 5, 6, 10, 50, a hundred agents together, you’re getting into a network of agency […] the interactions become quite complex and present different issues,” he said.

In some ways, the issues around automation and system-to-system interfaces have been around for close on a decade. Data silos and RPA (robotic process automation) challenges are the hurdles enterprises have been trying to jump for several years. “You still have to know where your data is, know what data you have, have access to it […] The fundamentals are still true.”

In the AI era, fundamental questions about infrastructure, data visibility, security, and sovereignty are arguably more relevant than ever. Discussions about AI tend to circle back to the same issues, which reinforces Kieran’s point that any conversation about AI in the enterprise has to be wide-reaching, taking in many of the operational and infrastructural underpinnings of the organisation.

Kieran therefore emphasises practicality: a grounded assessment of need and ability that should be examined carefully before AI gains a foothold. “If you understand the use case […] you should have a pretty good idea of the ROI […] and therefore whether or not it’s worth the pain and suffering to go through building it.”

At Deloitte, AI is being put to use where there is a clear use case with a measurable return: the initial triaging of SOC (security operations centre) tickets, where the AI acts as a Level 1 incident analysis engine. “We know how many tickets get generated a day […] if we can take 60 to 80% of the time out of the triage process, then that has a significant impact.” Given the technology’s nascence, demarcating a specific area of operations where AI can be used acts as both prototype and proof of effectiveness. The AI is not customer-facing, and highly qualified experts are on hand to check and oversee its deliberations.
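To make the ROI arithmetic concrete, here is a minimal, hypothetical sketch of the kind of calculation Kieran describes. The ticket volume, triage time, and analyst cost below are illustrative assumptions, not Deloitte figures; only the 60-80% time reduction comes from the quote above.

```python
# Back-of-the-envelope ROI sketch. All figures are hypothetical assumptions for
# illustration; only the 60-80% triage-time reduction comes from the article.
TICKETS_PER_DAY = 2_000        # assumed daily alert volume for a mid-sized SOC
MINUTES_PER_TRIAGE = 15        # assumed manual Level 1 triage time per ticket
ANALYST_COST_PER_HOUR = 60.0   # assumed fully loaded analyst cost, in USD

for reduction in (0.60, 0.80):  # the range quoted above
    hours_saved = TICKETS_PER_DAY * MINUTES_PER_TRIAGE * reduction / 60
    print(f"{reduction:.0%} reduction: {hours_saved:,.0f} analyst-hours/day, "
          f"roughly ${hours_saved * ANALYST_COST_PER_HOUR:,.0f}/day")
```

Plugging in real ticket volumes and triage times gives a quick first read on whether a given use case clears the “worth the pain and suffering” bar before any build begins.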

Conclusion

Kieran’s message for business professionals investigating AI uses for their organisations was not to build an AI risk assessment and management programme from scratch. Instead, companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value.

“You shouldn’t create another programme just for AI security on top of what you’re already doing […] you should be modernising your programme to address the nuances associated with AI workloads.” Success in AI starts with clear, realistic goals built on solid foundations.

You can read more about TechEx North America and sign up to attend. Visit the Deloitte team at booth #153 and drop in on its sessions on June 4: ‘Securing the AI Stack’ on the AI & Big Data stage from 9:20am-9:50am, and ‘Leveraging AI in Cybersecurity for business transformation’ on the Cybersecurity stage, 10:20am-10:50am.

Learn more about Deloitte’s solutions and service offerings for AI in  and  or email the team at .

(Image source: “Symposium Cisco Ecole Polytechnique 9-10 April 2018 Artificial Intelligence & Cybersecurity” by Ecole polytechnique / Paris / France is licensed under CC BY-SA 2.0.)
