Threats and potential benefits: Weighing the enterprise risk of adopting AI

Throwing artificial intelligence (AI) tools at the wall to see what sticks will probably produce mixed results. To maximise the opportunities, it is important to identify and minimise potential risks early.

After all, even well-resourced companies are still struggling with their AI approach, confirms Dael Williamson, EMEA chief technology officer at data and AI software provider Databricks.

For instance, copying and pasting data from one proprietary system to another, and then another, comes with an inherent “tax” on data integrity: you need checks and balances at every step. Williamson says this is a problem for all companies, because they all have silos.

Inference suffers if your data is flawed or incorrect, and you will not get the return on investment (ROI). There is also the risk of selecting the wrong data, or a language model poorly tailored to your needs.

You have to train models, but Williamson says inference is what you are really aiming for. “AI is incredibly useful. It’s also tricky.”

Securing AI presents risks, and not only from AI-enabled threats such as more advanced social engineering, prompt injection or slopsquatting. Richard Cassidy, EMEA chief information security officer at cloud data management firm Rubrik, says that failing to focus on how to achieve AI goals can lead to a variety of security concerns.

For example, AI can be a “noise maker” that distracts from real incidents and increases waste and cost. Moreover, carefully designed security controls may not carry over into the AI workflow. On top of this, he says, digital skills may be lacking and workflows are often insufficiently digitised.

“If the underlying processes are flawed, AI will not fix them. It will only amplify the chaos”
Richard Cassidy, Rubrik

Assessment and prioritisation of risk

Few people ask what AI adoption looks like in practice, he says. “CISOs can create data lakes of epic proportions with multifactor authentication, user attribution and secure access. Then AI comes in and maps a numerical model into the workflow: it embeds models, then vector databases, gets outputs via retrieval-augmented generation (RAG) workflows, and so on. And the security controls are gone.”
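
Cassidy’s point can be sketched in a few lines. Below is a deliberately minimal, hypothetical RAG retrieval step (the embedding function is a stand-in bag-of-words model, and the file names are invented): once documents are copied into a vector store, the store answers any query the pipeline accepts, and the multifactor authentication and user attribution on the source systems are nowhere in the path unless they are rebuilt around it.

```python
# Toy sketch, not a real RAG stack: shows how source-system access
# controls vanish once documents sit in a shared vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Documents land in the store with no trace of who was allowed to read them.
vector_store = [
    ("hr/salaries.txt", embed("executive salary and bonus figures")),
    ("eng/design.txt", embed("api design notes for the billing service")),
]

def retrieve(query: str) -> str:
    # Returns the closest document for ANY caller -- the controls on the
    # original HR system do not apply here unless re-implemented.
    q = embed(query)
    return max(vector_store, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("what are the executive bonus figures?"))
```

In a production pipeline, the fix is to carry access metadata into the store and filter retrieval results per caller, which is exactly the kind of control Cassidy warns gets dropped.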

His observations chime with figures from the Office for National Statistics (ONS), which suggest the most common barriers to AI adoption are difficulty identifying business use cases or activities (39%) and costs (21%). Some 16% of companies cited a shortage of AI skills and expertise.

“If the underlying processes are flawed, AI will not fix them,” says Cassidy. “It will only amplify the chaos.”

“As always, start by focusing on the problem, not the hype,” he adds. “Don’t adopt AI because you feel you should. Identify specific business challenges, such as customer service bottlenecks or slow cycles, and build from there.”

Then assess each step: was it successful? Did it add or reduce value? Learn from this, Cassidy says, and build a roadmap based on evidence, not enthusiasm.

Additional mitigation strategies

Tony Lock, distinguished analyst at IT market watcher Freeform Dynamics, says you don’t want to jump straight into AI, nor do you want to plug all your sensitive or regulated data into an off-the-shelf model to train it.

Once you enter data into a language model, you cannot take it back out; Lock says it is simply subsumed into the pattern. “That’s the reason RAG exists: you clean up your data and retrieve from it, instead of feeding everything into an LLM.”

What if your chosen model is pulled from the marketplace? Open source alternatives, parallel development and API gateways are all ways organisations can protect themselves. Lock says we cannot know how such risks will unfold if, for example, OpenAI loses an ongoing lawsuit over its right to use the intellectual property of others.
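
The “API gateway” idea Lock mentions can be sketched simply: route every model call through one internal interface so a withdrawn or litigation-hit provider can be swapped without touching the calling code. The classes below are illustrative stand-ins, not real provider SDKs.

```python
# Hypothetical sketch of an internal model gateway: callers depend on
# the Gateway, never on a specific provider's API.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Stand-in for a third-party hosted model client."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

class LocalOpenSourceModel:
    """Stand-in for a self-hosted open source fallback."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class Gateway:
    def __init__(self, primary: TextModel, fallback: TextModel):
        self.primary, self.fallback = primary, fallback

    def complete(self, prompt: str) -> str:
        try:
            return self.primary.complete(prompt)
        except RuntimeError:  # e.g. provider withdrawn or API shut down
            return self.fallback.complete(prompt)

gw = Gateway(HostedModel(), LocalOpenSourceModel())
print(gw.complete("Summarise this contract"))
```

The design choice is the indirection itself: if a court ruling or market exit removes the primary model, only the gateway’s wiring changes, not every workflow built on top of it.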

“You can’t take data back out of a language model once you’ve put it in. It’s just subsumed into the pattern. RAG was created to help you clean up your data instead of putting it into an LLM”
Tony Lock, Freeform Dynamics

Lock says: “If a judge tells you to remove all that information, or rules that you are not allowed to use it for training, you will likely have to start over with the language model, using properly secured data you have acquired.”

There could be penalties. How will AI suppliers respond? Will they pass the costs on to their customers? Could customers themselves be penalised? These are questions that may require specific legal advice.

It is a good idea to consider the possibility of multiple AI-related lawsuits before you make a bet on a specific model.

National regulation complicates the environment further. The UK government, for example, currently favours a yet-to-be-developed “opt-out” process for intellectual property (IP) owners.

“But in the European Union, for example, this will not work, because everything is opt-in,” notes Lock. To opt in, users must be informed exactly how their IP will be used.

“Maybe US courts won’t enforce action,” he says, but all of those companies could still be held liable if they have European, UK or Japanese subsidiaries.

It can also pay to wait. There can only be one first mover, and later entrants can benefit from obstacles already cleared by early adopters.

Top recommendation

Databricks’ Williamson recommends enterprises get their data house in order first, even if it delays adoption. “Data processing is difficult, even for large companies with a lot of money and an in-house team,” he says.

Usually, data is not AI-ready. This means all structured and unstructured data must be inventoried, audited and mapped. A cleaner, deduplicated and accurate data foundation could require consolidating silos before adding AI on top, he points out.
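
One small, concrete step in the data clean-up Williamson describes is deduplicating records that exist in several silos under slightly different spellings. The sketch below is hypothetical (invented records, a simple name-plus-email fingerprint) and shows the basic pattern: normalise each record, hash it, and flag collisions as likely duplicates.

```python
# Illustrative dedup pass across two invented "silos" of customer records.
import hashlib

crm_records = [
    {"name": "ACME Ltd ", "email": "sales@acme.example"},
    {"name": "Widget Co", "email": "hello@widget.example"},
]
billing_records = [
    {"name": "acme ltd", "email": "SALES@ACME.EXAMPLE"},  # same customer, different casing
]

def fingerprint(record: dict) -> str:
    # Normalise before hashing so trivial variations still match.
    key = "|".join(record[f].strip().lower() for f in ("name", "email"))
    return hashlib.sha256(key.encode()).hexdigest()

seen: dict[str, dict] = {}
duplicates = []
for rec in crm_records + billing_records:
    fp = fingerprint(rec)
    if fp in seen:
        duplicates.append(rec)
    else:
        seen[fp] = rec

print(f"{len(seen)} unique records, {len(duplicates)} duplicate(s)")
```

Real consolidation needs fuzzier matching than exact fingerprints, but even this crude pass surfaces the silo overlap that has to be resolved before the data is fed to a model.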

The positive news is that fixing data in a broader sense will buy enterprises time to think about their approach and generate benefits for the entire business, including cost savings and storage efficiencies. Rubrik’s Cassidy says that opportunities typically revolve around “smart delegation” and the democratisation of data-driven intelligence throughout the business. “AI offers SMEs a real levelling-up capability.”

Implementation plans and timelines

Robbie Jerrom, senior principal technologist at Red Hat, says enterprises should spend as much time as necessary figuring out what they want to do with AI.

“First, understand your needs, then narrow down the use case,” Jerrom says. “Don’t try to boil the ocean.”

Organisations can calculate the tokens needed for a specific AI enablement, although this is not always simple.

“Understand your need and then narrow down the use case. Don’t try and boil the ocean”
Robbie Jerrom, Red Hat

“Writing a small piece of Python code could use up to 45,000 tokens. If you convert that to cost, it’s probably a couple of pennies. But how much would it cost if 10 developers did it all day? Every time an AI agent talks to something, it uses tokens,” he says. Sandboxing reduces risk, especially for more autonomous systems such as agents. Consider whether the system can be trained on the company’s static policies, for instance.
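
Jerrom’s back-of-envelope sum is easy to reproduce. The sketch below takes his 45,000-token figure and an assumed per-token price (real provider pricing varies widely, so the rate and the tasks-per-day figure here are illustrative only) to show how pennies per task compound across a team.

```python
# Back-of-envelope token cost model for the example above.
# PRICE_PER_1K_TOKENS and tasks-per-day are assumptions for illustration.
TOKENS_PER_TASK = 45_000       # from the example: one small coding task
PRICE_PER_1K_TOKENS = 0.0005   # assumed price in pounds per 1,000 tokens

def task_cost(tokens: int = TOKENS_PER_TASK) -> float:
    """Cost of a single task in pounds."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

def daily_team_cost(developers: int, tasks_per_dev: int) -> float:
    """Cost of a whole team's tasks for one day."""
    return developers * tasks_per_dev * task_cost()

per_task = task_cost()              # 0.0225 -> roughly a couple of pennies
team_day = daily_team_cost(10, 40)  # 10 devs, 40 assumed tasks each
print(f"per task: £{per_task:.4f}, team per day: £{team_day:.2f}")
```

At these assumed rates a single task really is a couple of pennies, but ten developers running tasks all day multiplies that into a recurring line item, which is the point of doing the calculation before committing.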

You could ask a model to review a contract, compare it with previous contracts, and point out any differences, confusions or irregularities. As well as flagging irregularities, the model may highlight something else to consider: changes over time, for example, could indicate a previously unrecognised challenge in the customer relationship.

AI helps you discipline your thinking and apply methods. Double-check the results and re-evaluate: can you tweak the model to better match the need, or should you try something else?

Jerrom says you will start to see the value in some of the more boring use cases. Generative AI (GenAI) makes mistakes, but so do humans.

Worker education and training are equally important: most people will need help learning how best to use AI services.

Jerrom warns: “This could get you into a lot more hot water. AI is already everywhere.”

Next steps for enterprise AI adoption

Sue Daley, director of technology and innovation at TechUK, says AI has “huge potential” in business. Regardless of size, shape or sector, it is important to understand how AI can improve efficiency and effectiveness. “What are you trying to achieve?”

Is AI the right tool at all? Some benefits may call for agentic AI, while others require a small language model or a very specific approach.

Small language models may be better suited to a specific issue or business need, such as in supply chain or logistics. Context will be crucial, Daley says.

Play in a sandbox to learn about AI. Examine compliance and security policies, practices, and the ethics surrounding responsible innovation. Consider upskilling needs. Build cross-functional teams and get people’s perspectives.

Start with education and awareness. Consider your organisation at all levels, Daley says, including the board, middle management and individual employees. Find ways to involve people in the journey: it is a change management process that affects many people’s careers.

Enterprises may think of GenAI tools as simply another chatbot, but most chatbots do not satisfy customers. To reap the benefits of AI, it is important to think about how your offering will evolve in future. The outputs are only as good as the data inputs, she says.

Freeform Dynamics’ Lock adds: “Understand how AI works so that your people say it helps them, rather than it being something else to ‘get around’. Remember that some people may be doing things you didn’t think of, or things they shouldn’t. User effectiveness and satisfaction are crucial.”

Lastly, don’t overlook the different classes of AI; some businesses may already have experience with them.
