
AI and compliance: Staying on the right side of the law and regulations


Without careful preparation, AI projects run the risk of stumbling into a legal or regulatory minefield. We examine the risks of hallucinations and basic errors, as well as the upcoming regulation.

Published: 29 May 2025

The legal and regulatory frameworks for artificial intelligence (AI) currently lag behind the technology’s adoption.

The rise of generative AI (GenAI) has brought artificial intelligence to the forefront of modernisation plans for organisations, but most development has occurred in a regulatory vacuum.

Regulators have been rushing to catch up. According to analyst firm Gartner, more than 1,000 proposed AI regulations were introduced around the world between the first quarter of 2020 and the first quarter of 2024.

Gartner vice president analyst Nader Henein warns that the regulatory environment in which AI projects will be implemented “will be a complete mess”. Chief information officers (CIOs), therefore, need to act immediately to ensure compliance.

Mistakes by AI suppliers have led to a number of problems, including bias, privacy and security breaches. AI can also produce answers that are not based in reality.

Hallucinations are one of the most high-profile examples of AI problems: the AI application, usually GenAI or a large language model (LLM), produces a result that is not based in fact.

Some even suggest that the latest GenAI models hallucinate more: OpenAI’s own research found its o3 and o4-mini models were more susceptible to hallucinations.
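
In practice, teams often mitigate hallucinations by grounding model output in retrieved source text and flagging unsupported claims for review. The sketch below illustrates the idea in Python; the word-overlap heuristic, the 0.5 threshold and the function names are assumptions for illustration, not a production technique.

# Minimal sketch of a grounding check for GenAI output.
# Assumption: answer sentences that share few content words with the
# retrieved sources are flagged for human review. The threshold and
# the word-overlap heuristic are illustrative, not a standard.

import re

def content_words(text: str) -> set[str]:
    """Lower-case words of four or more letters, a crude proxy for content."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return answer sentences poorly supported by the source texts."""
    source_vocab = set().union(*(content_words(s) for s in sources))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer):
        words = content_words(sentence)
        if words and len(words & source_vocab) / len(words) < threshold:
            flagged.append(sentence)
    return flagged

# Example: the second sentence introduces a claim absent from the sources.
sources = ["The EU AI Act entered into force in August 2024."]
answer = "The EU AI Act entered into force in August 2024. It bans all chatbots."
print(flag_unsupported(answer, sources))  # ['It bans all chatbots.']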

GenAI can make mistakes and show bias

Basic mistakes, errors of fact and bias are all possible, depending on the data used to train the system and the way its algorithms work. Bias can produce results that are offensive or discriminatory. This is a concern for all AI users, but particularly in areas such as healthcare, law enforcement and recruitment.
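
In recruitment, for example, one simple statistical check borrows the “four-fifths rule” from US employment guidance: compare selection rates across groups and flag any group selected at less than 80% of the best-performing group’s rate. A minimal sketch, where the field names and sample data are illustrative assumptions:

# Minimal sketch of a disparate-impact check on an AI screening tool's
# decisions, based on the four-fifths rule: flag any group whose
# selection rate falls below 80% of the highest group's rate.

from collections import defaultdict

def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{'group': str, 'selected': bool}, ...]"""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += d["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions: list[dict], ratio: float = 0.8) -> list[str]:
    """Return the groups selected at under `ratio` of the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < ratio * best]

# Illustrative data: group A is selected 60% of the time, group B 30%.
decisions = (
    [{"group": "A", "selected": True}] * 60 + [{"group": "A", "selected": False}] * 40
    + [{"group": "B", "selected": True}] * 30 + [{"group": "B", "selected": False}] * 70
)
print(disparate_impact(decisions))  # ['B']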

Governments and industry regulators are increasingly trying to control AI, or at least to ensure AI applications operate within existing privacy laws, employment laws and other regulations. Some go further, such as the European Union (EU) with its AI Act, and more regulation outside the EU is inevitable.

“At the moment, there is very little regulation in the UK,” says Gartner’s Henein. “Both the ICO [Information Commissioner’s Office] and Chris Bryant, minister of state at the Department for Science, Innovation and Technology, have stated that AI regulations are expected within the next 12 to 18 months.

“We do not expect a UK act to be a carbon copy of the EU AI Act. However, we do anticipate some degree of alignment, in particular regarding high-risk AI systems and potential prohibited uses of AI.”

AI laws and governance

AI is governed by many laws and regulations, some of which overlap. There is also data privacy and security legislation, as well as guidelines and frameworks that set standards for AI use in areas where legal sanctions may not apply.

According to Efrain Ruh of Digitate, the EU AI Act and other AI regulatory frameworks are based on assessing the risks that these new technologies may pose to people.

However, the wide range of applications in this area and the rapid pace of innovation make it difficult for regulators to define specific controls around AI technology.
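
The act’s risk-based structure can still be sketched in outline: systems fall into one of four tiers, and obligations scale with the tier. The simplified illustration below uses example mappings that are assumptions for illustration, not legal classifications:

# Simplified sketch of the EU AI Act's four-tier, risk-based structure.
# The tier names reflect the act's approach; the example use-case
# mappings are illustrative assumptions, not legal advice.

OBLIGATIONS = {
    "unacceptable": "banned outright",
    "high": "conformity assessment, logging, human oversight",
    "limited": "transparency duties (e.g. disclose that it is AI)",
    "minimal": "no specific obligations",
}

EXAMPLE_TIERS = {
    "social scoring by public authorities": "unacceptable",
    "CV screening for recruitment": "high",
    "customer-service chatbot": "limited",
    "spam filter": "minimal",
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier} -> {OBLIGATIONS[tier]}")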

The plethora of rules also makes it difficult for organisations to comply. According to research by AIPRM, a firm that helps smaller businesses make the most of GenAI, the US has 82 AI policies and strategies, the EU has 63, and the UK has 61.

Among these, the stand-out law is the EU’s Artificial Intelligence Act, the first “horizontal” law governing AI regardless of where or how it is used. But the US’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence also sets standards for AI security, privacy and safety.

In addition, international organisations such as the OECD, the UN and the Council of Europe have developed AI frameworks. But the task facing international bodies and national lawmakers is far from easy.

According to White & Case, an international law firm that tracks AI developments, “governments and regulatory bodies around the world have had to act quickly to try to ensure that their regulatory frameworks do not become obsolete…

“But they are all scrambling to stay abreast of technological developments, and already there are signs that emerging efforts to regulate AI will struggle to keep pace,” it says.

This, in turn, has led to different approaches to AI regulation and compliance. The EU has adopted the AI Act as a regulation, meaning it applies directly in law in member states.

The UK government has so far opted to instruct regulators to apply guiding principles to how AI is used across their areas of responsibility. The US has chosen a mix of executive orders, federal and state laws, and vertical industry regulation.

This is all made more difficult still by the absence of a single, internationally accepted definition of AI. That makes regulation and compliance by organisations that want to use AI harder. Regulators and firms have had time to learn how to work with regulations such as the General Data Protection Regulation (GDPR), but we are not yet at that stage with AI.

“As with other regions, there is a fairly low level of maturity when it comes to AI governance,” says Gartner’s Henein. “Unlike GDPR, which followed four decades of organic development in privacy norms, AI regulatory governance is new.”

Compliance with the AI Act, he adds, is made more complicated because it applies to AI features of technology, not just to whole products. CIOs and compliance officers now need to account for AI capabilities in, say, software-as-a-service applications they have been using for years.

Moving to compliance

Fortunately, there are steps organisations can take to ensure compliance.

The first is to ensure CIOs know where AI is being used across the organisation. Then they can review existing regulations, such as GDPR, and ensure that AI projects comply with them.
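
That discovery step often takes the form of an AI-use register. A minimal sketch of what such a register might record, where all field names and entries are illustrative assumptions:

# Minimal sketch of an AI-use register: knowing where AI is used
# before checking each use against the applicable rules.
# All field names and entries are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AIUse:
    system: str
    owner: str                    # accountable team or person
    purpose: str
    personal_data: bool           # triggers a GDPR review if True
    regulations: list[str] = field(default_factory=list)

register = [
    AIUse("CV screener", "HR", "rank job applicants", True, ["EU AI Act"]),
    AIUse("Sales forecaster", "Finance", "predict quarterly revenue", False),
]

# First pass: every use of personal data gets a GDPR review task.
for use in register:
    if use.personal_data and "GDPR" not in use.regulations:
        use.regulations.append("GDPR")
    print(f"{use.system} ({use.owner}): review against {use.regulations or 'none'}")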

But they also need to monitor new and developing legislation. The AI Act, for example, mandates transparency for AI and human oversight, notes Ralf Lindenlaub, chief solutions officer at Sify Technologies.
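
In code, human oversight often amounts to a routing gate: automated output is applied directly only when the model is confident and the decision is low-impact, and everything else is queued for a person. A minimal sketch, with the threshold and names as illustrative assumptions rather than anything the act prescribes:

# Minimal sketch of a human-oversight gate: act on automated output
# only when confidence is high and the decision is low-impact;
# queue everything else for a human reviewer.

from typing import NamedTuple

class Decision(NamedTuple):
    outcome: str       # e.g. "approve" / "reject"
    confidence: float  # the model's own score, 0.0 to 1.0
    high_impact: bool  # e.g. affects a person's job or credit

def route(decision: Decision, threshold: float = 0.9) -> str:
    """Return the next step for a model decision."""
    if decision.high_impact or decision.confidence < threshold:
        return "queue_for_human_review"
    return "auto_apply"

print(route(Decision("reject", 0.97, high_impact=True)))    # queue_for_human_review
print(route(Decision("approve", 0.95, high_impact=False)))  # auto_apply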

Boards, though, are also increasingly aware of the need for “responsible AI”, with 84% of executives rating it as a priority, according to Willie Lee, a senior worldwide AI specialist at Amazon Web Services.

He recommends that all AI projects be approached with transparency and accompanied by a thorough risk assessment to identify potential harms. “These are the core ideals of the regulations being written,” says Lee.

Digitate’s Ruh says: “AI-based solutions need to be built up-front with the correct set of guardrails in place. Failure to do so might result in unexpected events with tremendous negative impact on the company’s image and revenue.”
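
Guardrails of the kind Ruh describes typically run as checks before a model’s output reaches a user or downstream system. A minimal output-side sketch, where the redaction pattern and blocked-topic list are illustrative assumptions, not a production-grade filter:

# Minimal sketch of an output guardrail: redact likely personal data
# and block responses touching disallowed topics before they leave
# the system. The patterns and topic list are illustrative only.

import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
BLOCKED_TOPICS = ("medical diagnosis", "legal advice")

def apply_guardrails(text: str) -> str:
    """Block disallowed topics, otherwise redact email addresses."""
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "This request needs human review and cannot be answered automatically."
    return EMAIL.sub("[redacted email]", text)

print(apply_guardrails("Contact jane.doe@example.com for details."))
# -> Contact [redacted email] for details.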

Read more on AI and storage

  • Podcast: RSA 2025 to grapple with AI compliance, US and EU regulation

    By: Antony Adshead

  • Preparing for AI regulation: The EU AI Act

    By: Cliff Saran

  • Navigating the practicalities of AI regulation and legislation

    By: Mary Branscombe

  • Is your business ready for the EU AI Act?

    By: Kashyap Kompella
