Qualcomm unveils AI data center chips to crack the inference market

The competition in AI chip manufacturing has intensified with Qualcomm stepping boldly into the AI data center processor arena. Known primarily for powering billions of smartphones globally, Qualcomm is now challenging industry giants like Nvidia, which has dominated this sector with staggering revenue growth driven by its AI-focused hardware.

On October 28, 2025, Qualcomm unveiled its new rack-scale AI inference systems, signaling a strategic pivot toward AI infrastructure. The market responded enthusiastically, with Qualcomm’s stock surging nearly 11%, reflecting investor optimism that even a modest foothold in the rapidly expanding AI data center market could significantly reshape the company’s future.

Qualcomm’s Dual-Chip Strategy: Diversifying AI Data Center Solutions

Qualcomm’s approach is notably multifaceted, introducing two distinct AI data center chip designs aimed at different segments and deployment timelines. This dual-pronged strategy reflects a nuanced understanding of the diverse demands within AI workloads.

AI200: Practical Entry into AI Inference

Scheduled for release in 2026, the AI200 is Qualcomm’s pragmatic entry point. This rack-scale system boasts an impressive 768 GB of LPDDR memory per card, catering to the substantial memory requirements of contemporary large language models and multimodal AI applications. Qualcomm’s emphasis on cost-effective memory solutions aims to reduce total cost of ownership (TCO) while maintaining competitive performance, appealing to enterprises seeking efficient AI deployment.

AI250: Pushing the Boundaries with Near-Memory Computing

Looking further ahead to 2027, the AI250 represents Qualcomm’s more ambitious vision. Featuring a near-memory computing architecture, this chip promises to deliver over ten times the effective memory bandwidth of conventional designs. Since memory bandwidth often limits AI inference speed, this innovation could dramatically enhance responsiveness in applications like real-time conversational AI, provided Qualcomm can realize its technical goals.
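Why bandwidth matters here can be shown with rough arithmetic: during autoregressive decoding, each generated token requires reading the model's weights from memory, so memory bandwidth caps tokens per second. The sketch below illustrates that ceiling; all numbers are hypothetical for illustration, not Qualcomm specifications.

```python
# Rough upper bound on decode throughput for a bandwidth-bound LLM.
# All figures below are illustrative assumptions, not Qualcomm specs.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Each decoded token reads all weights once, so throughput is
    capped at bandwidth / model size (ignoring KV-cache and overhead)."""
    return bandwidth_gb_s / model_size_gb

# A 70B-parameter model stored in 8-bit precision occupies roughly 70 GB.
model_gb = 70.0

# Conventional design vs. a hypothetical 10x effective-bandwidth design.
conventional = max_tokens_per_second(bandwidth_gb_s=800.0, model_size_gb=model_gb)
near_memory = max_tokens_per_second(bandwidth_gb_s=8000.0, model_size_gb=model_gb)

print(f"conventional ceiling: ~{conventional:.0f} tokens/s")   # ~11 tokens/s
print(f"10x bandwidth ceiling: ~{near_memory:.0f} tokens/s")   # ~114 tokens/s
```

The point of the exercise: a tenfold bandwidth gain translates almost directly into a tenfold higher throughput ceiling for this class of workload, which is why Qualcomm is betting on the architecture rather than raw compute.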

Durga Malladi, Qualcomm’s SVP and GM of technology planning, edge solutions, and data center, emphasized, “With AI200 and AI250, we are setting new standards for rack-scale AI inference, enabling customers to deploy AI with unprecedented cost efficiency, flexibility, and security.”

Beyond Performance: The Economics of AI Infrastructure

In the fiercely competitive AI hardware market, raw computational power is only part of the equation. Data center operators prioritize operational expenses such as energy consumption, cooling, and hardware depreciation. Qualcomm’s AI systems consume 160 kW per rack and utilize direct liquid cooling to manage thermal loads effectively. The architecture supports PCIe for internal scaling and Ethernet for inter-rack connectivity, offering versatile deployment options from small-scale AI services to large-scale AI model hosting.
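The operating-cost argument is easy to quantify. At the stated 160 kW per rack, electricity alone is a substantial line item; a quick estimate follows, where the 160 kW figure comes from the announcement but the electricity price is an assumed, illustrative rate.

```python
# Annual electricity consumption and cost for one 160 kW rack.
# The 160 kW figure is Qualcomm's stated rack power; the $/kWh
# price is an illustrative assumption (real rates vary by region).

rack_power_kw = 160.0
hours_per_year = 24 * 365        # 8,760 hours of continuous operation
price_per_kwh = 0.08             # assumed industrial rate, USD

annual_kwh = rack_power_kw * hours_per_year      # 1,401,600 kWh
annual_cost = annual_kwh * price_per_kwh

print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year per rack")
```

Even at a modest assumed rate, a single rack draws over a million kilowatt-hours a year, which is why per-inference energy efficiency, not just peak performance, drives data center purchasing decisions.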

Security is integral to Qualcomm’s design, incorporating confidential computing features to safeguard proprietary AI models and sensitive data, an increasingly critical requirement for enterprise customers.

Strategic Partnership with Saudi Arabia’s Humain: A Major Market Validation

Qualcomm’s collaboration with Humain, a Saudi state-backed AI firm, underscores its serious commitment to AI infrastructure. Humain has pledged to deploy 200 megawatts of Qualcomm’s AI chips, a deal estimated by industry analysts to generate approximately $2 billion in revenue. While this figure is smaller than AMD’s $10 billion agreement with the same partner, it represents a significant endorsement of Qualcomm’s nascent AI data center ambitions.

CEO Cristiano Amon highlighted the partnership’s broader impact: “Together with Humain, we are laying the foundation for transformative AI innovation that will empower enterprises, governments, and communities both regionally and globally.” This alliance positions Qualcomm not just as a hardware supplier but as a strategic technology partner in emerging AI markets.

Enhancing Developer Experience with Robust Software Ecosystem

Qualcomm is complementing its hardware advances with a comprehensive software stack designed to streamline AI deployment. The Qualcomm AI software suite supports popular machine learning frameworks and enables “one-click deployment” of models from repositories like Hugging Face, reducing integration complexity.

Tools such as the Qualcomm AI Inference Suite and Efficient Transformers Library aim to accelerate enterprise adoption by minimizing the friction traditionally associated with deploying AI models at scale.

Facing Industry Titans: Qualcomm’s Long-Term Play in AI Chips

Qualcomm enters a market dominated by Nvidia, whose valuation has soared beyond $4.5 trillion, reflecting its entrenched position and extensive ecosystem. AMD, too, has solidified its presence, with its stock more than doubling in 2025 amid successful AI infrastructure initiatives.

Despite arriving later to the AI data center chip market, Qualcomm’s focus on inference optimization, energy efficiency, and cost-effectiveness offers a compelling alternative to the Nvidia-AMD duopoly. Analysts like Timothy Arcuri of UBS suggest that the rapid expansion of the AI market will create opportunities for multiple players, including Qualcomm, to thrive.

As the AI200 launch approaches in 2026, enterprises seeking diverse AI infrastructure options should watch Qualcomm’s evolving portfolio closely. Its emphasis on lowering TCO and enhancing security could make it a viable contender for organizations aiming to diversify their AI hardware suppliers.
