Seven years after GDPR reshaped the data protection landscape, AI is forcing a new look at governance. The GDPR principles remain critical in guiding ethical AI, risk management and trusted innovation.
Published: July 30th 2025
James Hodge
It has been a little more than seven years since the GDPR came into effect – a pivotal moment that changed how organisations approach data privacy. We’ve seen UK companies of all sizes refine and audit their data strategies and consider the impact of each bit and byte that they collect.
As the post-GDPR world began to feel more familiar, AI entered the picture. AI is forcing businesses to rethink how they manage their data, much like GDPR did seven years ago. The stakes have arguably increased – not only because businesses handle more data, but also because AI is moving at a breakneck pace.
GDPR practice
Since GDPR’s implementation in 2018, the shift from reactive to proactive compliance has been underway, and proactive data governance is now commonplace. Data protection is no longer a legal formality but a strategic imperative – a topic discussed in boardrooms, not just legal departments. The high-profile fines levied against tech giants have reinforced that data privacy and compliance are not optional.
While we should acknowledge and celebrate progress, it is also important to be honest with ourselves about the gaps that remain. The GDPR is still too often treated as an isolated exercise or hurdle to overcome, rather than as a continuous and embedded business process. This short-sighted approach exposes organisations not only to compliance risks, but also causes them to miss out on the real opportunity – regulation as an enabler.
When GDPR is understood and applied correctly it offers more than just a legal framework. It provides a clear and structured way to manage your data responsibly, improve operational hygiene and build trust with partners and customers. Strong data governance doesn’t hinder innovation, it makes it sustainable.
AI – Innovation, but with new risks
Enter AI. Businesses are recognising the immense benefits and potential of AI. According to Cisco’s 2024 AI Readiness Index, 95% of businesses have an AI strategy or are developing one, and 50% of companies allocate 10-30% of their IT budget to AI.
But 65% of IT teams claim they don’t understand the privacy implications of using AI, and only 11% trust AI enough to handle mission-critical workloads, according to Splunk’s 2025 State of Security Report. This tells us that AI is moving fast, but governance may still be catching up.
New risks are emerging, and the transparency issue feels especially urgent. Think of black-box models in areas such as fraud detection or credit scoring – if a model is not transparent, it is impossible to explain to a regulator how a decision was made, or to justify it to customers.
Compliance questions you should ask
With organisations embedding AI deeper into their operations, it’s time for them to ask the hard questions: what data are they feeding AI, who can access AI outputs, and what processes are in place to respond to a breach and meet GDPR reporting deadlines?
Despite this urgency, many organisations still lack a formal AI policy. This exposes them to serious privacy and compliance risks, and it is part of why data loss prevention is a priority for so many businesses.
What’s the good news? We don’t need to start over. GDPR already provides a framework for evaluating AI tools: think data minimisation and privacy by design. This means collecting only the data you need, using it for specific, legitimate ends, and embedding security into every stage of an AI system’s lifecycle. Applied to AI, these principles form a solid foundation – they aren’t only legal safeguards, they’re also the building blocks of ethical AI.
The GDPR laid the foundation
When we reflect on the seven-year journey of GDPR, one thing is obvious: its relevance hasn’t diminished. The arrival of AI has made its principles even more important. Yesterday’s regulatory frameworks are still useful, but only if applied with the same ambition and agility that new technologies demand.
The task ahead is not just about following rules, but about demonstrating leadership. The future of data governance will not be determined by regulators alone – it will be defined by businesses bold enough to align integrity with innovation, and fast enough to turn compliance into a competitive advantage.
The GDPR laid the foundation. In the age of AI, we have the opportunity to build something even stronger on top of it.
James Hodge is GVP and chief strategy advisor at Splunk.

