Since the 1990s, companies have been able to choose between closed proprietary and open-source technologies.
Initially, the choice was primarily about operating systems, with Linux offering an open-source alternative to Microsoft Windows. Today, open-source languages such as Python and JavaScript dominate the developer world, while open-source technologies like Kubernetes are standard in the cloud.
Enterprises now face the same choice between open and closed AI models, with multiple options on each side. On the proprietary, closed front are some of the most widely used models in the world, such as those from OpenAI and Anthropic. Open-source options include Meta's Llama, IBM Granite, Alibaba's Qwen and DeepSeek.
Enterprise AI decision makers must weigh this choice between open and closed models in 2025, and they need to understand the financial and customization implications of both options.
Understanding the difference between closed and open licenses
The decades-old rivalry between open and closed licenses is not without hyperbole. But what does it actually mean for enterprise users?
Closed-source proprietary technologies, such as OpenAI's GPT-4o, do not make their model code, training data or model weights available to the public. The model cannot be easily fine-tuned, and it is generally only available at a price for real enterprise use (yes, ChatGPT offers a free version, but that won't cut it for real enterprise workloads).
Open technologies, such as Meta Llama or IBM Granite, have openly available code. Enterprises can use these models freely, including fine-tuning and customizing them.
Rohan Gupta, a principal at Deloitte, told VentureBeat that the debate between open and closed source is not unique to AI, and it is unlikely to be resolved any time soon.
Gupta explained that closed-source providers offer various wrappers that allow for ease of use, simpler scaling, more seamless upgrades and downgrades, and a constant stream of enhancements. They also offer significant developer support, including documentation and hands-on guidance, which often results in tighter integration between infrastructure and applications. These services come at a cost, however, and enterprises pay a premium for them.
"Open-source models can provide greater flexibility, control and customization options, and are supported by an enthusiastic, vibrant developer ecosystem," Gupta said. "These models are increasingly available via fully managed APIs from cloud vendors, broadening their distribution."
Making the decision between an open and closed AI model for enterprise
Many enterprise users may ask: Which is better, an open or closed model? The answer is not always one or the other.
"Open vs. closed is increasingly a fluid design space, where models are selected, or even automatically orchestrated, based on tradeoffs of accuracy, latency, cost, interpretability and security at different points in a workflow," David Guarrera, Generative AI Leader at EY Americas, told VentureBeat.

Vendors selling proprietary models often limit fine-tuning or charge premium rates, and may hide the process behind black boxes. API-based tools simplify integration, but they abstract away much of the control, making it harder to build highly specific, interpretable systems. Open-source models, on the other hand, allow for fine-tuning, guardrail design and optimization for specific use cases. This matters all the more in an agentic world, where models are no longer monolithic tools but interchangeable components in dynamic workflows. When deploying task-specific solutions or tightly regulated agents, the ability to finely tailor model behavior, at low cost and with full transparency, becomes a major advantage.
"In practice, we envision an agentic future in which model selection is abstracted away," Guarrera said.
A user could, for example, draft an email with one AI tool, summarize legal documents with another, search enterprise files with a fine-tuned open-source model, and interact with AI locally through an on-device LLM, all without ever knowing which model was doing what.
Guarrera said, “The real question is: What mix of models will best suit your workflow’s requirements?”
Consider the total cost of ownership.
The basic premise is that open models are free to use, while enterprises pay for closed models.
When it comes to total cost of ownership, the reality is nuanced.
Praveen Akkiraju, managing director at Insight Partners, told VentureBeat that TCO is a multi-layered concept. Key considerations include infrastructure hosting costs and engineering effort: is the open-source model hosted by the enterprise or by a cloud provider? How much engineering is required to operationalize a model safely, including fine-tuning, guardrailing and security testing?
Akkiraju pointed out that fine-tuning an open-weights model can be a complex task. Closed frontier labs invest enormous engineering effort to ensure performance across multiple tasks; unless enterprises deploy similar engineering expertise, they will have to balance many factors when fine-tuning open-source models. This has cost implications for organizations choosing a model deployment strategy. For example, enterprises could fine-tune multiple model versions for various tasks, or use one API for several tasks.
Ryan Gross, head of data and applications at cloud-native services provider Caylent, told VentureBeat that, in his view, licensing terms matter little except in edge-case scenarios. The largest restrictions often pertain to model availability when data-residency requirements are in effect; in that case, deploying an open model on infrastructure like Amazon SageMaker may be the only way to get a state-of-the-art model that is still compliant. As for TCO, Gross said the tradeoff lies between per-token costs and hosting and support costs, and that there is a clear breakeven point where the economics switch from closed to open models.
According to Gross, closed models, with hosting and scaling handled by the provider, generally have a lower TCO at smaller scale. Hosting distilled open models becomes more cost-effective for large enterprises or SaaS companies with high demand on their LLMs but that do not require frontier performance, as well as for AI-centric product companies.
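The breakeven Gross describes can be sketched as a simple comparison of a usage-proportional API bill against a roughly flat self-hosting bill. This is an illustrative model only; the dollar figures and cost structure below are assumptions, not numbers quoted by Gross.

```python
# Illustrative sketch of the token-cost vs. hosting-cost breakeven.
# All prices are hypothetical assumptions for demonstration purposes.

def monthly_api_cost(tokens_per_month: float, price_per_million_tokens: float) -> float:
    """Closed model via API: cost scales linearly with token volume."""
    return tokens_per_month / 1_000_000 * price_per_million_tokens

def monthly_hosting_cost(fixed_infra: float, eng_support: float) -> float:
    """Self-hosted open model: roughly flat infrastructure plus support cost."""
    return fixed_infra + eng_support

def breakeven_tokens(price_per_million_tokens: float,
                     fixed_infra: float, eng_support: float) -> float:
    """Monthly token volume above which self-hosting becomes cheaper."""
    return (fixed_infra + eng_support) / price_per_million_tokens * 1_000_000

# Example: $10 per 1M tokens via a closed API, vs. $8,000/month of GPU
# infrastructure plus $12,000/month of engineering support for an open model.
volume = breakeven_tokens(10.0, 8_000, 12_000)
print(f"Breakeven at {volume / 1e9:.1f}B tokens/month")
```

Below the computed volume, the closed API is cheaper; above it, the flat self-hosting cost wins, which is why the economics favor open models only at high, sustained LLM demand.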
How one enterprise software developer compared open and closed models
Josh Bosquez, CTO at Second Front Systems, is among the many leaders whose companies have had to evaluate open vs. closed models. "We use both open and closed AI models, depending on the specific use case, security requirements and strategic objectives," Bosquez told VentureBeat.
Bosquez explained that open models let his firm integrate cutting-edge capabilities without the time or cost of training models from scratch, and allow it to iterate quickly and benefit from community-driven advances for internal experimentation and rapid prototyping. "Closed models are our choice, however, when data sovereignty, enterprise support and security guarantees are essential, particularly for customer-facing applications or deployments involving sensitive or regulated environments," he said. These models typically come from trusted vendors offering strong performance, compliance support and self-hosting options.
Bosquez said the model selection process is cross-functional and risk-informed, evaluating not only technical fit but also data-handling policies, integration requirements and long-term scalability.
He said TCO varies greatly between the two approaches, and that neither is inherently cheaper; it depends on the scope of deployment and organizational maturity. "We don't just look at dollars spent. We also consider delivery speed, compliance risk and the ability to scale securely," Bosquez said.
What this means for enterprise AI strategies
The open vs. closed debate isn't about picking sides. It's about building a portfolio strategy optimized for the different use cases within your organization.
There are a few immediate steps to take. First, audit your current AI workloads and map them against the decision framework the experts describe, considering accuracy needs, latency requirements, cost constraints and compliance obligations for each use case. Second, honestly assess your organization's engineering capacity for model fine-tuning, hosting and maintenance, as this directly impacts your total cost of ownership.
Third, start experimenting with orchestration platforms that can automatically route tasks to the most appropriate model, whether open or closed. This prepares your organization for the agentic future predicted by industry leaders like EY's Guarrera, in which model selection is invisible to the end user.
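The routing step above can be sketched as a small decision function that matches each task's constraints to a model family. Everything here is hypothetical: the model labels, thresholds and task attributes are illustrative assumptions, not a reference to any specific orchestration product.

```python
# Hypothetical sketch of an orchestration layer that routes tasks to open
# or closed models based on their requirements. Model names and thresholds
# are illustrative, not recommendations.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    sensitive_data: bool   # compliance / data-residency constraint
    needs_frontier: bool   # requires top-tier accuracy
    latency_budget_ms: int

def route(task: Task) -> str:
    """Pick a model family for a task; a real router would also weigh cost."""
    if task.sensitive_data:
        return "self-hosted-open-model"      # keep data in-house
    if task.needs_frontier:
        return "closed-frontier-api"         # pay a premium for top accuracy
    if task.latency_budget_ms < 200:
        return "small-open-model-on-device"  # low latency, runs locally
    return "managed-open-model-api"          # inexpensive default

tasks = [
    Task("summarize contracts", sensitive_data=True, needs_frontier=False, latency_budget_ms=2000),
    Task("complex legal analysis", sensitive_data=False, needs_frontier=True, latency_budget_ms=5000),
    Task("autocomplete", sensitive_data=False, needs_frontier=False, latency_budget_ms=50),
]
for t in tasks:
    print(t.name, "->", route(t))
```

The point of the sketch is that "open vs. closed" becomes a per-task property of the workflow, not a one-time organizational commitment, which is exactly the portfolio approach the experts describe.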

