Podcast: We speak with Vigitrust CEO Mathieu Gorge about staying on top of compliance and artificial intelligence. The danger is data that can change during AI processing, and also proliferate.
In this podcast, we speak to Mathieu Gorge, CEO of Vigitrust, about the compliance risks that data poses during artificial intelligence (AI) processing, and training in particular. As datasets are analysed, more data is generated, and it can be difficult to ensure compliance as that data grows.
Gorge discusses the need to understand what is being fed into AI and what comes out. He also talks about where it goes, how it’s stored and who has access.
Gorge also discusses the security and compliance frameworks that can be used, and the need to integrate AI compliance into the organisation's security culture.
What is the latest in AI and compliance with regard to storage and backup that a CIO should be aware of?
AI adoption is growing everywhere, and we’ve even seen the EU implement some AI regulations.
Some frameworks have also adapted to AI. NIST, for example, has an AI risk management framework, and some security associations have pushed for their own standards. I can think of the Cloud Security Alliance, but also of working groups from ISSA and Isaca. All of them provide guidance.
What we need to bear in mind is that we will most likely see more AI-related regulation. Some of it will be national, some federal and some international. It's worth comparing the evolution of AI and AI governance standards with the way cyber security standards evolved.
In the beginning, around 25 years ago, there were about 100 standards covering network, IT and data security. Today, we have distilled those down to about five or six key standards, such as HIPAA and PCI DSS. My hope is that AI standards follow a similar path, but faster, so we can focus on managing AI deployments based on data classification, privacy and storage.
What is AI governance? AI governance in the US, the EU and other countries is a matter of saying: "We have a new way of processing data." We need to know where that data comes from and what form it takes.
Questions include:
Does AI processing produce a different type of data file, or some other data form?
Does that put us out of compliance with the law, or does it facilitate compliance?
Do we have safeguards in place to control who has access to the data?
Do we have safeguards in place to protect the data we store, and how long must we keep it?
Depending on where we are based, how long do we have to retain that data, and where should we store it?
The problem with AI is that we are creating far more data than ever before, and that data must be stored somewhere. It needs to be stored in a way that keeps you in compliance, so you need to monitor your AI ecosystem and control the data that comes in, the data that goes out, and who has access to it.
Given the complexity of AI operations, how should the CIO go about ensuring compliance?
The CIO has to understand and manage the information that enters the AI system, the information that leaves it, and who can access it and how. I would recommend that every CIO work with their CSO, or their security team, and look at global AI policy and regulation.
I would also recommend looking at the IAPP (International Association of Privacy Professionals). Its website offers an AI law tracker that can help you understand the different frameworks, their requirements, and how they relate to data classification, data storage, deployment and compliance.
In the same way that we train our staff on email, social networking and other things, the CIO should push, at board level, the concept of integrating AI not only into the business culture, but also into the organisation's information security and data management culture.
If you are pushing AI deployments and AI solutions, you should push a data management and information security culture along with them. If you don't, you will be out of compliance.
Look at your ecosystem and how you plan to use AI across multiple systems. You can also use an AI policy tracker, then apply what you learn to your own policies so that AI governance quickly becomes part of your organisation's DNA.
AI will continue to be deployed, and there will be more AI-based business solutions. Will it benefit your data? Will it make data management more complex? If you manage the process using good AI governance frameworks, and distil them down to what is important to your organisation, then you have a good strategy for AI implementation and AI compliance.
By: Antony Adshead