
IBM says enterprise customers are using 'everything' in AI. The challenge is matching the LLM with the right use case.



Over the last 100 years, IBM has seen many tech trends rise and fall. What tends to win out is technology that gives customers choices.

At VB Transform 2025, Armand Ruiz, IBM's VP of AI Platform, detailed how Big Blue thinks about generative AI and how its enterprise customers are actually deploying the technology. Ruiz stressed that, at this stage, what matters is not choosing a single large language model (LLM) or technology. Enterprise customers are increasingly rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.

IBM has its own family of open-source AI models, Granite, but it does not position that technology as the best or only option for every workload. This enterprise behavior is what drives IBM to position itself as a control center for AI workloads rather than as a competitor to foundation models.

"When I'm in front of customers, they use everything they can," Ruiz explained. "For coding, they love Anthropic, and for other use cases, like reasoning, they prefer o3. Then for LLM customization, with their data and fine-tuning, they prefer either our Granite series, Mistral's small models, or Llama… it is just matching the LLM with the right use case. We also help them to make recommendations."
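The matching Ruiz describes can be sketched as a simple recommendation lookup. This is a hypothetical illustration of the idea, not IBM's actual recommendation logic, and the model identifiers are illustrative labels rather than real API model names.

```python
# Hypothetical use-case-to-model matching, mirroring the preferences
# Ruiz describes. Model identifiers are illustrative labels only.
PREFERRED_MODELS = {
    "coding": ["anthropic/claude"],
    "reasoning": ["openai/o3"],
    "customization": ["ibm/granite", "mistral/small", "meta/llama"],
}

def recommend(use_case: str) -> list[str]:
    """Return the candidate models for a use case, best match first."""
    models = PREFERRED_MODELS.get(use_case)
    if models is None:
        raise ValueError(f"no recommendation for use case: {use_case!r}")
    return models
```

The point is that the mapping, not any single model, is the asset: swapping a preferred model for a use case is a one-line change.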

The Multi-LLM Gateway Strategy

IBM has responded to this market reality with a newly released model gateway that gives enterprises a single API for switching between LLMs while maintaining observability, governance and consistency across all deployments.

Customers can run open-source models on their own inference stacks for sensitive use cases while simultaneously accessing public APIs such as AWS Bedrock and Google Cloud's Gemini. Ruiz explained that the gateway gives customers one API for moving from one LLM to another, with observability and governance applied throughout.
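The gateway pattern described here can be sketched in a few lines: one `call()` entry point over pluggable backends, with every request passing through a shared audit log. This is a minimal, hypothetical sketch of the pattern; the class and method names are invented and do not reflect IBM's actual gateway API.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModelGateway:
    """Toy multi-LLM gateway: one call() API over pluggable backends,
    plus a shared audit log for observability. Names are illustrative,
    not IBM's actual gateway API."""
    backends: dict[str, Callable[[str], str]] = field(default_factory=dict)
    audit_log: list[dict] = field(default_factory=list)

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self.backends[name] = backend

    def call(self, model: str, prompt: str) -> str:
        # One choke point for every request, so logging and policy
        # checks apply regardless of which model serves the call.
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        start = time.monotonic()
        reply = self.backends[model](prompt)
        self.audit_log.append({"model": model,
                               "latency_s": time.monotonic() - start})
        return reply

# Stub backends stand in for real inference endpoints; switching models
# is a one-argument change at the call site.
gw = ModelGateway()
gw.register("granite", lambda p: f"[granite] {p}")
gw.register("claude", lambda p: f"[claude] {p}")
reply = gw.call("claude", "review this diff")
```

The design choice worth noting is that governance lives in the gateway, not in each backend, which is what makes observability "consistent across all deployments" in the first place.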

This approach directly contradicts the common vendor strategy of locking customers into proprietary ecosystems. IBM is not the only vendor taking a multi-vendor approach to model selection: in recent months, multiple model-routing tools have emerged to help direct workloads to the right model.

Agent orchestration protocols are becoming critical infrastructure

IBM is tackling this emerging challenge through open protocols.

IBM developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competitor to Google's Agent2Agent protocol (A2A), which Google contributed to the Linux Foundation just this week.

Ruiz said both protocols are designed to reduce custom development and make it easier for agents to communicate with one another, and he believes the different approaches will eventually converge. For now, the differences between A2A and ACP are mostly technical.

Agent orchestration protocols are standardized ways for AI systems to interact across platforms and vendors.

At enterprise scale, the significance becomes clear. Some IBM customers already have more than 100 agents in their pilot programs. Without standardized communication protocols, each agent-to-agent connection requires custom development, which creates an unsustainable integration burden.
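What a standardized protocol buys at that scale can be illustrated with a shared message envelope: if every agent emits and parses one schema, adding an agent means implementing the schema once rather than writing glue for every other agent it talks to. The schema below is a hypothetical illustration, not the actual ACP or A2A wire format.

```python
import json
import uuid
from datetime import datetime, timezone

def make_envelope(sender: str, recipient: str,
                  task: str, payload: dict) -> str:
    """Build one agent-to-agent message in a shared, hypothetical schema
    (not the real ACP or A2A format). Any compliant agent can parse any
    other agent's messages without pairwise custom integration."""
    return json.dumps({
        "id": str(uuid.uuid4()),                       # unique message id
        "ts": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
        "sender": sender,
        "recipient": recipient,
        "task": task,          # machine-readable task name
        "payload": payload,    # task-specific arguments
    })

msg = make_envelope("intake-agent", "billing-agent",
                    "invoice.lookup", {"invoice_id": "INV-001"})
```

With 100 agents, pairwise custom integrations grow on the order of 100 × 99 directed connections; a common envelope collapses that to 100 implementations of one schema.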

AI is about transforming how work gets done

Ruiz believes that AI must be more than chatbots to have a positive impact on enterprises today.

Ruiz said that if you're just using chatbots, or only trying to save money with AI, then you're not doing AI. "I believe AI is about completely transforming workflows and the way that work is done."

AI transformation is measured by how deeply the technology is integrated into existing business processes. IBM's internal HR example illustrates the change: instead of employees asking chatbots for HR information, specialized agents now handle routine questions about compensation, hiring and promotions, automatically routing requests to the appropriate systems and escalating to humans only when necessary.

"I used to spend a great deal of time talking with my HR partners about a variety of things," Ruiz explained, but he now handles most of that work through an HR agent. "Depending on the questions, whether it's about compensation, handling separations, hiring someone, or doing promotion, all of these things will connect to different HR internal systems, and those will act as separate agents." Instead of employees learning how to interact with AI tools, the AI learns how to execute entire business processes from start to finish.
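The routing Ruiz describes reduces to a classify-then-dispatch step with a human fallback. This is a deliberately minimal, hypothetical sketch; the topic names and agent names are invented, not IBM's internal systems.

```python
# Hypothetical sketch of the HR routing Ruiz describes: a front-door agent
# classifies a request by topic, hands it to a specialized sub-agent, and
# escalates to a human only when nothing matches. All names are invented.
ROUTES = {
    "compensation": "compensation-agent",
    "separation": "separations-agent",
    "hiring": "hiring-agent",
    "promotion": "promotions-agent",
}

def route_hr_request(topic: str) -> str:
    """Return the sub-agent responsible for a topic, or escalate."""
    return ROUTES.get(topic, "escalate-to-human")
```

The escalation default is the key design choice: humans stay in the loop, but only for the cases no agent owns.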

Enterprises need to move beyond API integrations and prompt engineering toward deep process instrumentation that allows AI agents to execute multi-step workflows independently.

IBM’s real-world AI deployment data suggests a few critical shifts in enterprise AI strategy.

Stop thinking chatbot-first: organizations should identify complete workflows to transform rather than adding conversational features to existing systems. The goal is not to improve human-computer interaction but to eliminate human steps.

Architect for multi-model flexibility: instead of committing to a single AI provider, enterprises need an integration platform that can switch between models based on use-case requirements while maintaining governance standards.

Invest in communication standards: organizations should prioritize AI solutions that support emerging protocols such as MCP, ACP and A2A, rather than proprietary integration methods that create vendor lock-in.

"There is a lot to build, and I keep saying that everyone needs to learn AI," Ruiz said. "Business leaders should be AI-first leaders who understand the concepts."
