Deciding which AI models to use is both a technical and a strategic decision, and open, hybrid and closed models all come with trade-offs.
Speaking at this year's VB Transform conference, model architecture experts from General Motors, Zoom and IBM discussed how their companies and customers approach AI model selection.
Barak Turovsky said there is a lot of noise every time a new model is released and the leaderboard shifts. Turovsky, who helped launch the first large language model (LLM) long before leaderboards became a mainstream topic, recalled how open-sourcing AI weights and training datasets enabled major breakthroughs, including the ones that allowed OpenAI and other companies to launch. "It's a funny story: Open-source helped create something that became closed and is now back to being open." Turovsky added that enterprises sometimes prefer a mixed strategy, using an open model internally and a closed one for production and customer-facing work, or vice versa.
IBM’s AI Strategy
Armand Ruiz, IBM's vice president of AI platform, said IBM started its platform with its own LLMs but realized that wouldn't be sufficient, especially as more powerful models arrived on the market. The company expanded to offer integrations with platforms like Hugging Face so customers could choose any open-source model. (IBM recently launched a new model portal that gives enterprises an interface for switching between LLMs.)
Enterprises are increasingly buying models from multiple vendors. In an Andreessen Horowitz survey of 100 CIOs, 37% of respondents said they use five or more models, up from 29% a year earlier.
Choice matters, Ruiz said, but too many options can cause confusion. IBM doesn't worry about which LLM a customer uses during the proof-of-concept or pilot phase; the main goal is ensuring customers can make the right decision. Later, IBM helps them decide whether to customize a model to their requirements or distill an existing one.
"First we simplify all the analysis paralysis and focus on the use case," Ruiz said. "Then, we determine what is the best route for production."
Zoom’s AI Companion
According to Zoom CTO Xuedong Huang, customers can choose between two configurations: one federates Zoom's LLM with larger foundation models, while the other lets customers who are wary of juggling too many different models rely on Zoom's model alone. (The company recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion enterprise workflows.)
Huang said the company built its own small language model (SLM) without using customer data. At 2 billion parameters, the SLM is very small, yet it can still outperform other industry-specific models. SLMs work best when paired with a larger model for complex tasks.
"This is the real power of a hybrid approach," Huang said. "Our philosophy is simple. Our company is very much like Mickey Mouse dancing with the elephant. The small model is designed to perform a specific task. We are not saying that a small model will suffice…The Mickey Mouse will work with the elephant as a team."

