MCP and the Innovation Paradox: Why open standards can save AI from itself.




The next wave of AI innovation isn’t driven by bigger models. The real disruption is quieter than you think: standardization.

Launched by Anthropic in November 2024, the Model Context Protocol (MCP) standardizes how AI applications interact with the world beyond their training data. Much as HTTP and REST standardize how web applications connect to services, MCP standardizes how AI applications connect to tools.
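Under the hood, MCP messages are JSON-RPC 2.0: a client first asks a server which tools it exposes (`tools/list`), then invokes one by name (`tools/call`). Here is a simplified sketch of that exchange; the method names come from the MCP spec, but the `get_ticket` tool and its schema are invented for illustration:

```python
import json

# Client-side request asking an MCP server which tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Simplified server-side reply: each tool carries a name, a description,
# and a JSON Schema for its arguments. The LLM reads this metadata to
# decide which tool to call, which is why metadata quality matters.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_ticket",
                "description": "Fetch a Jira ticket by key.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"key": {"type": "string"}},
                    "required": ["key"],
                },
            }
        ]
    },
}

# Invoking a tool is another JSON-RPC call, routed by tool name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_ticket", "arguments": {"key": "PROJ-42"}},
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same shape, a client written once can talk to any of them, which is the whole point of a standard.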

It’s likely you’ve already read a dozen articles explaining what MCP is. What most people miss is the boring (and powerful) part: MCP is a standard. Standards are more than a way to organize technology; they create growth flywheels. Adopt them early and you ride the wave. Ignore them and you fall behind. This article explains why MCP matters now, what challenges it introduces, and how it’s already reshaping the ecosystem.

How MCP takes us from chaos to context

Meet Lily, a product manager at a cloud infrastructure firm. She juggles half a dozen tools: Jira, Figma, GitHub, Slack, Gmail and Confluence. Like many others, she’s drowning.

By 2024, Lily had realized that large language models were excellent at synthesizing information. She saw an opportunity: if she could feed her team’s tools into a model, she could automate updates, draft communications and answer questions on demand. But each model had its own way of connecting to services, and each integration pulled her deeper into a single vendor’s platform. When she needed to pull transcripts from Gong, that lock-in made switching to a different LLM difficult.

Then Anthropic launched MCP, an open protocol to standardize how context flows to LLMs. MCP quickly gained support from OpenAI, AWS, Azure and Microsoft Copilot Studio, with Google soon to follow. Official SDKs are available for Python, TypeScript, Java, C#, Rust, Kotlin and Swift, and community SDKs for Go and other languages followed. Adoption was rapid.

Today, Lily runs everything through Claude, connected to her work apps via MCP servers. Status reports draft themselves. Leadership updates are one prompt away. As new models emerge, she can swap them in without losing any of her integrations. When she writes code in Cursor, she uses an OpenAI model with the same MCP servers she uses in Claude, so her IDE already understands the product she’s building. MCP made it easy.

The power and implications of a standard

Lily’s story reveals a simple truth: nobody likes fragmented tools. No user wants to be locked into a vendor. No company wants to rewrite its integrations every time it changes models. Everyone wants the freedom to use the best tools. MCP delivers.

With standards come implications.

First, SaaS providers without strong public APIs are vulnerable to obsolescence. MCP tools depend on those APIs, and customers will demand support for their AI applications. With a de facto standard emerging, there are no excuses.

Second, AI application development is about to accelerate dramatically. Developers no longer have to write custom integration code to test simple AI apps; instead, they can connect MCP servers to readily available MCP clients, such as Claude Desktop and Cursor.

Third, switching costs are falling. Because integrations are decoupled from specific models, organizations can migrate between Claude, OpenAI and Gemini without rebuilding infrastructure. Future LLM providers can benefit from the existing ecosystem around MCP, letting them focus on improving price performance.
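To see why development accelerates, consider the server side. The sketch below is a deliberately SDK-free, hypothetical stand-in for an MCP server: a registry that exposes tool metadata for discovery and dispatches calls by name. (The official Python SDK handles transport, schemas and protocol details for you; every name here is invented for illustration.)

```python
from typing import Any, Callable

# Hypothetical in-process registry standing in for an MCP server.
TOOLS: dict[str, dict[str, Any]] = {}

def tool(name: str, description: str) -> Callable:
    """Register a function as a callable tool with discovery metadata."""
    def decorator(fn: Callable) -> Callable:
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return decorator

@tool("draft_status", "Draft a status update from a list of completed items.")
def draft_status(items: list[str]) -> str:
    bullets = "\n".join(f"- {i}" for i in items)
    return f"Status update:\n{bullets}"

def list_tools() -> list[dict[str, str]]:
    """What a client sees when it asks the server for its tools."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(name: str, **kwargs: Any) -> Any:
    """Dispatch a tools/call-style invocation by name."""
    return TOOLS[name]["handler"](**kwargs)

print(list_tools())
print(call_tool("draft_status", items=["Shipped v2 API", "Closed PROJ-42"]))
```

Once a server like this speaks the protocol, any MCP client can discover and use it with no custom glue code, which is where the development speedup comes from.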

Navigating challenges with MCP

Every standard introduces new friction or leaves existing friction unsolved, and MCP is no exception. Trust is critical: dozens of MCP registries have appeared, offering thousands of community-maintained servers. If you don’t control a server, or trust the party that does, you risk leaking secrets to an unknown third party. If you’re a SaaS company, provide official servers. If you’re a developer, seek them out.

Quality is variable: APIs evolve constantly, and a poorly maintained MCP server can easily fall out of sync. LLMs rely on high-quality metadata to decide which tools to use, and no authoritative MCP registry exists yet, which reinforces the need for official servers from trusted parties. If you’re a SaaS company, maintain your servers as your APIs evolve. If you’re a developer, prefer official servers.

Big MCP servers increase costs and lower utility: bundling too many tools into a single server drives up costs through token consumption, and LLMs are easily confused when offered too many choices. It’s the worst of both worlds. Smaller, task-focused servers will be important. Keep this in mind as you build and distribute servers.

Identity and authorization challenges persist: these problems existed before MCP, and MCP does not solve them. Imagine Lily gives Claude the ability to send emails, with a well-intentioned instruction such as “Quickly send Chris a status update.” Instead of emailing her boss Chris, the LLM emails every Chris in her contact list. Humans must stay in the loop for high-judgment actions.
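One pragmatic way to keep humans in the loop is a confirmation gate: tools that cause irreversible side effects require explicit approval before they run, while read-only tools pass through. This pattern is not part of the MCP spec; the sketch below, with invented names throughout, just illustrates the idea:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical wrapper: side-effecting tools require explicit human
# approval before they run; read-only tools would pass straight through.
@dataclass
class GatedTool:
    name: str
    run: Callable[..., str]
    requires_approval: bool = False

def execute(tool: GatedTool, approve: Callable[[str], bool], **kwargs) -> str:
    """Run a tool, pausing for human sign-off on high-judgment actions."""
    if tool.requires_approval:
        summary = f"{tool.name}({kwargs})"
        if not approve(summary):
            return f"BLOCKED: {summary} was not approved"
    return tool.run(**kwargs)

send_email = GatedTool(
    name="send_email",
    run=lambda to, body: f"sent to {to}",
    requires_approval=True,  # emailing the wrong Chris is unrecoverable
)

# Simulated reviewer: rejects anything addressed to a bare first name
# instead of a full email address.
reviewer = lambda summary: "chris@" in summary

print(execute(send_email, reviewer, to="chris@example.com", body="Status report"))
print(execute(send_email, reviewer, to="Chris", body="Status report"))
```

In practice the `approve` callback would surface the pending action in a UI for a person to accept or reject, rather than applying an automated rule.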

Looking ahead

MCP is not hype; it is a fundamental infrastructure shift for AI applications.

Like every widely adopted standard before it, MCP creates a self-reinforcing flywheel: every new server, every new integration and every new application compounds the momentum.

New tools, platforms and registries have already emerged to simplify building, testing, deploying and discovering MCP servers. As the ecosystem evolves, AI applications will offer simple interfaces to plug into new capabilities. Teams that embrace the protocol will ship products faster with better integration stories; companies offering public APIs and official MCP servers can be part of those stories. Late adopters will have to fight their way back to relevance.

Noah Schwartz is head of product at Postman.
