The case for embedding audit trails in AI systems before scaling up



Editor’s note: Emilia is leading an editorial roundtable discussion on this topic this month at VB Transform. Register today

Orchestration frameworks for AI services serve multiple functions for enterprises. They not only define how applications or agents work together, but also let administrators manage workflows and agents and audit their systems.

As businesses begin to scale AI services and put them into production, a robust, manageable, auditable and traceable pipeline helps ensure that their agents perform as expected. Without these controls, organizations may not know the state of their AI systems, and may only find out when something goes wrong or they fail to meet regulations.

Kevin Kiley, president of enterprise orchestration company Airia, told VentureBeat that frameworks should include auditability and traceability.

“It’s important to have this observability and to be able to go back to the log and show what information was provided and when,” Kiley said. “You need to know whether the information was shared by a bad actor, or by an employee who didn’t realize they were doing so, or whether it was a hallucination. You need to keep a record.”

Robustness and audit trails are best built into AI systems from the very beginning. Understanding the risks associated with a new AI agent or application, and ensuring it continues to perform to standard before deployment, helps ease concerns about putting AI in production.
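As a concrete illustration of what “keeping a record” can look like, here is a minimal audit-trail sketch in Python. The schema, field names and log path are illustrative assumptions, not a standard or any vendor’s actual format:

```python
import json
import time
import uuid

def audit_record(user, prompt, response, model, source="user"):
    """Build one structured audit-trail entry for an agent interaction.
    Field names are illustrative, not a standard schema."""
    return {
        "id": str(uuid.uuid4()),       # unique entry id for later lookup
        "timestamp": time.time(),      # when the exchange happened
        "user": user,                  # who supplied the information
        "source": source,              # e.g. "user", "agent", "external"
        "model": model,                # which model or agent responded
        "prompt": prompt,
        "response": response,
    }

def append_to_log(record, path="audit.jsonl"):
    """Append-only JSON Lines file, so past entries are never rewritten."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only log like this is what lets a team go back and show which information was provided, by whom, and when.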

Organizations, however, did not initially design systems with auditability and traceability in mind. Many AI pilot programs started out as experiments without an orchestration layer or an audit trail.

The biggest question facing enterprises is how to manage agents and applications while keeping their pipelines robust, so that when something goes wrong they can identify the problem and monitor AI performance.

Selecting the right method

Experts say that before building any AI applications, organizations should take stock of their data. A company with a baseline of data to compare performance against can determine which data is acceptable for AI systems to use and which data was used to fine-tune a model.

“When you run some AI systems, the question becomes: what kind of data lets me validate that my system is actually running properly, or see when it’s not?” Yrieix Garnier, vice president of products at Datadog, told VentureBeat in an interview. “That’s hard to do, to understand that I have the right system to validate AI solutions.”

These datasets, the models and applications that use specific models or agents, the authorized users, and the baseline runtime numbers can all be loaded into either the orchestration platform or the observability platform.
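The baseline comparison Garnier describes can be sketched simply: record expected runtime numbers, then flag any metric that drifts beyond a tolerance. The function, metric names and 20% tolerance below are hypothetical choices, not a Datadog API:

```python
def within_baseline(metrics, baseline, tolerance=0.2):
    """Return metrics that drift more than `tolerance` (default 20%)
    from the recorded baseline, as {name: (expected, observed)}."""
    drifted = {}
    for name, value in metrics.items():
        expected = baseline.get(name)
        if expected is None:
            continue  # no baseline recorded for this metric
        if abs(value - expected) > tolerance * abs(expected):
            drifted[name] = (expected, value)
    return drifted
```

In practice the baseline would come from pre-deployment runs, and a non-empty result would page an operator or block a rollout.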

Orchestration teams must consider transparency and openness, just as they do when selecting foundation models to build on. While some closed-source systems offer many advantages, open-source platforms can provide benefits that some companies value, like increased visibility into decision-making systems.

Open-source platforms like MLFlow, LangChain and LangGraph give agents and models flexible, granular instructions, while Datadog offers a platform that lets enterprises monitor their AI pipelines through a single, integrated tool; hyperscalers such as AWS provide similar end-to-end services.

Another option for enterprises is to integrate a system that maps agent and application responses to compliance tools or responsible AI policies. AWS, Microsoft and Google all offer services that track AI systems and how closely they adhere to the guardrails and policies set by users.
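A homegrown version of that policy mapping might look like the following sketch, which simply scans responses against a blocked-term list. The managed guardrail services from AWS, Microsoft and Google are far more sophisticated; every name below is an illustrative assumption:

```python
# Illustrative policy list; a real deployment would load this
# from a governed, versioned policy store.
BLOCKED_TERMS = ["ssn", "credit card"]

def check_guardrails(response, blocked=BLOCKED_TERMS):
    """Return the policy terms found in an agent response,
    so violations can be logged or the response withheld."""
    text = response.lower()
    return [term for term in blocked if term in text]
```

Each non-empty result would be written to the same audit trail discussed earlier, tying policy enforcement back to traceability.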

Kiley said enterprises should insist on transparency when building these reliable pipelines; in his view, not being able to see how AI systems operate is a liability.

“No matter what the industry or use case, there will be situations where flexibility is needed, and a closed system won’t work,” he said. “There are some providers who have great tools, but they’re in a black box. I don’t understand how it makes these decisions. I don’t have any ability to interject or intercept at points that I would like to.”

Join the conversation on VB Transform

I would love to invite you to join me at VB Transform 2025, June 24-25 in San Francisco, where I will be leading an editorial discussion called “Best Practices to Build Orchestration Frameworks for Agentic AI.” Register today

