We preview RSA 2025 with Vigitrust CEO Mathieu Gorge, who is looking forward to learning a lot about compliance and regulation as CIOs grapple with artificial intelligence and global upheavals.
We speak to Mathieu Gorge, CEO of Vigitrust, about the ongoing impact of artificial intelligence (AI) on data, storage and compliance for CIOs. Gorge discusses data, its volume, and the difficulty of keeping track of inputs, outputs and AI processing. He also discusses the need to stay up to date with laws and regulations.
Gorge examines the impact of the new US administration and the changing approach of the European Union to AI data.
Which topics do you think will have the biggest impact on data storage, backups, and compliance at this year’s RSA?
Every year, I look forward to attending RSA to learn about the latest technologies and to keep up with what's going on in terms of compliance and storage. This year, it seems we will see a lot around AI: not just AI-enabled technology, but also the security of AI itself.
There is a lot of discussion about quantum and post-quantum, so it will be interesting to see how things develop.
From a storage perspective, we're seeing changes due to the new administration in the US and what's happening in the EU with the AI Act, which impacts data classification and storage. It will be interesting to see how all of this comes together at RSA.
We're going to have some very interesting discussions, and I expect new vendors to emerge.
If you dig deeper into the areas you've discussed, what do you see as the key areas where AI has progressed in the last year in terms of how it impacts the compliance of organisations?
I think AI was the buzzword last year. Everyone needed to study AI to better understand how it could improve their processes, how they use data and so on.
One year later, we can see that many organisations have implemented ChatGPT, and some have invested in their own AI platforms to be able to control them better.
We're seeing AI adoption grow – remember that AI has been around for years, but adoption is really picking up at the moment.
We're seeing people in the market asking: "What type of data can AI be used for? How does this impact my data classification, data protection and data governance?"
A number of security associations are also starting their own AI governance working groups. At Vigitrust, we have an AI governance working group as part of the Vigitrust Global Advisory Board, where we are trying to map all the regulations that govern AI. These can be driven by technology vendors, associations or even governments.
I'm curious to see how much AI governance will be covered at RSA. You need to know the type of data you manage if you want to do AI governance, so we're going to go back to data classification and data protection.
Another issue with AI is that it creates a lot of data; we've got an explosion of data. Where will we store it? How will we store it? How secure will the storage be? Will I be able to demonstrate compliance with applicable frameworks and regulations? It will be interesting to see what RSA has to say on this front.
What are your thoughts on the impact of the new administration’s policies in the US?
From the start, the new administration in the US has stated that it will invest in AI and that AI is a huge opportunity for the US. We know that governance frameworks already in place will be used to deploy all of this.
We see organisations like NIST developing more in-depth AI frameworks, and the Cloud Security Alliance is also moving towards AI governance frameworks. We've seen cities develop their own AI frameworks, for smart cities and so on; the city of Boston comes to mind.
If you have a government that pushes organisations to use AI, then it will want some governance around it. It will be interesting to see where they go. Will they respond with an equivalent of the EU AI Act? It's likely, if you consider that the CCPA [the California Consumer Privacy Act] was passed a few years after GDPR [the General Data Protection Regulation] came into force in Europe, and now we have state regulations in place; I believe 11 states in America have something similar.
So it's likely that this will happen. I don't think it will happen overnight, but the current administration is likely to make more announcements in 2025.
What is the latest on the EU when it comes to compliance, AI and so on?
It's funny that in the EU, AI is seen as both a threat and an opportunity, much more so than in the US. This could be because the risk appetite in Europe is a bit lower.
Every member state is looking at its own AI regulations in addition to the EU framework. We're also looking at how AI integrates with GDPR, because when you deploy AI solutions, you completely change the governance of your data.
The data is now managed primarily by a computer system rather than by people, so the concept of the controller, and who is in charge of the data, is called into question.
It's interesting to watch the different governments look at the question: "Can we deploy AI in a manner that doesn't put us out of compliance with GDPR?"
Let me go back to the two key aspects of classifying the data and storing the data.
You know that with AI, there's the question of bias in data. Is the data being treated correctly? Does the data you enter – which is then processed by AI and comes out the other side – put you in or out of compliance with other frameworks such as GDPR, or even the EU AI Act? Where should you store this data? What protection should it have? How do you manage data lifecycles within the AI framework? How do you protect the LLM [large language model]? How do you protect your algorithms?
Lastly, AI is a resource-intensive technology. This also has an effect on the climate because the more AI you use, the more processing power and capacity you will need. This has an impact on green IT, and so on.
I would encourage people to consider the type of data they want to use to create AI, to do a risk assessment, and then to think about where they are going to store that data. Who will store it for them, and how? How secure will it be? How will this impact their compliance with AI regulations, as well as GDPR and other privacy guidelines?