Security Think Tank examines how CISOs should plan to ensure the safe running of AI and Gen AI initiatives, and that employees do not make bad decisions or leak data inadvertently.
By
- Elliott Wilkes, ACDS
Published: 03 Jun 2025
Genes tools are becoming embedded in enterprise operations. They bring with them transformative promises, but also significant risk.
The challenge for CISOs is to facilitate innovation while securing the data, maintaining compliance across border, and preparing themselves for the unpredictable nature large language models and AI agent.
The stakes for CISOs are high. A compromised or poorly governed AI could expose sensitive data, break global data laws, and make critical decisions using false or manipulated inputs.
To reduce these risks, CISOs need to rethink their cyber-security strategies and policies in three key areas: data usage, data sovereignty, AI safety.
Data Use: Understand the terms before sharing vital data
The biggest risk in AI adoption are not malicious actors, but ignorance. Too many organisations integrate third party AI tools without understanding how their data is used, stored, and shared. Most AI platforms are taught using vast amounts of public data that is scraped off the internet. This is done without much regard for the origin.
Although the major players in the industry like Microsoft and Google have begun to embed more ethical safeguards into their terms of services, much of the fine text remains opaque and is subject to change.
This means that CISOs will have to rewrite their data-sharing policies and procurement checks. AI tools should be treated like third-party vendors who have high-risk access. Before deployment, security teams should audit AI platform terms and conditions, assess how enterprise data may be retained or re-used, and ensure opt outs are in place when possible.
By investing in AI governance experts or external consultants who understand these nuanced agreements, organisations can be protected from accidentally sharing proprietary information. Data used in AI must be treated as a valuable export that is carefully tracked, regulated, and considered.
Data Sovereignty: Guardrails for borderless technology
The blurring of geographic boundaries in AI integration can be a hidden danger. What is legal in one country might not be legal in another.
This creates a minefield for multinationals of potential regulatory violations, especially under acts such DORA (19459051) and the forthcoming UK Cyber Security and Resistance Bill and frameworks such as the EU’s GDPR and the UK Data Protection Act.
CISOs need to adapt their security strategies in order to ensure AI platforms are aligned with regional data sovereignty needs. This means reviewing where AI systems reside, how data flows across jurisdictions, and if appropriate data transfer mechanisms, such as standard contractual provisions or binding corporate rules, are in place.
When AI tools lack adequate localisation and compliance capabilities, security teams should consider geofencing or data masking or even local AI deployments.
Updates to policy should require that data localisation preferences are enforced for sensitive datasets or those regulated by law, and AI procurement should include questions about cross-border handling of data. In the end, ensuring that data is within the bounds for compliance is both a legal and a security issue.
Safety – Designing resilience into AI deployments.
The last pillar of AI Security is to protect systems from the growing threat manipulation. It could be through model hallucinations or insider misuse.
Although still a new threat category, prompt infusion has become one the most discussed vectors of GenAI security. By carefully crafting input strings, attackers are able to override expected behavior or extract confidential data from a model. In extreme cases, AI models have hallucinated harmful or bizarre outputs. One system, it is reported, refused to be shut down.
CISOs must take two steps to address this issue. First, internal controls, red-teaming, and traditional penetration tests should be adapted for stress-testing AI systems. Techniques such as chaos engineering can simulate edge cases and reveal flaws prior to their being exploited.
Secondly, there must be a shift in the way vendors are selected. Security policies should favor AI providers that demonstrate rigorous testing, robust security mechanisms, and clear ethics frameworks. These vendors may be expensive, but the cost of relying on an untested AI tool could be much higher.
To strengthen accountability, CISOs must also advocate for contracts which place responsibility on AI vendors in the event of operational failures or unsafe outcomes. A well-written contract should address liability, incident responses procedures, and escalation paths in the event of a failure or breach.
Gatekeeper to enabler
With AI becoming a core component of business infrastructure, CISOs will need to evolve from gatekeepers to enablers. To unlock the full potential of GenAI, it will be necessary to update policies around data usage, strengthen controls over data sovereignty and build a layered security net for AI deployments. This is without compromising compliance, trust or integrity.
The only way to combat the rapid changes brought about by AI is through proactive, strategic adaptation, rooted in collaboration, knowledge, and a relentless focus on responsibility.
Elliott Wilkes, CTO of Advanced Cyber Defence SystemsWilkes is a seasoned digital leader and product manager with over a decade’s experience working for both the American government and the British government. Wilkes was most recently a cyber security advisor to the Civil Service.