At SC25, Dell Technologies and NVIDIA unveiled significant enhancements to their collaborative AI platform, designed to simplify how enterprises deploy diverse AI workloads, from legacy models to cutting-edge agent-based systems.
Addressing the Complexities of Scaling AI in Enterprises
As organisations accelerate their AI adoption, they frequently encounter challenges such as managing heterogeneous hardware and software environments, safeguarding sensitive data, and ensuring scalable infrastructure. Recent studies reveal that a majority of businesses prefer partnering with trusted vendors when integrating new technologies, and that they particularly value AI solutions that run close to their data sources for security and efficiency.
The Dell AI Factory with NVIDIA: A Unified Solution
Built on the principle of seamless integration, the Dell AI Factory with NVIDIA merges Dell’s comprehensive infrastructure stack with NVIDIA’s advanced AI software suite, all supported by Dell’s expert professional services. This synergy aims to streamline the journey from conceptual AI projects to tangible business outcomes while minimizing technical hurdles.
Accelerated AI Deployment via Integrated Systems
Dell is bolstering its storage and AI capabilities to facilitate automated setup, boost system performance, and enable consistent real-time AI processing. The ObjectScale and PowerScale storage platforms, foundational to the Dell AI Data Platform, now integrate with NVIDIA's NIXL library from the Dynamo framework. This integration supports scalable key-value cache storage and sharing, achieving a one-second Time to First Token with a 131,000-token context window while optimizing GPU memory usage and reducing operational costs.
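The performance claim above rests on reusing a stored key-value (KV) cache: if the attention state for a prompt prefix is already available in external storage, the GPU can skip re-computing that prefix during prefill, which is what drives Time to First Token down for long contexts. The following is a minimal conceptual sketch of that idea in Python; the class and function names are illustrative stand-ins, not the actual NIXL or Dynamo API.

```python
# Conceptual sketch of KV-cache reuse and its effect on Time to First Token.
# All names here are hypothetical illustrations, not the NIXL/Dynamo API.

class KVCacheStore:
    """Toy external KV-cache store keyed by an exact prompt prefix."""

    def __init__(self):
        self._cache = {}

    def get(self, prefix):
        return self._cache.get(prefix)

    def put(self, prefix, kv_state):
        self._cache[prefix] = kv_state


def prefill_cost(num_tokens):
    # Stand-in for prefill work: one unit per token, for illustration only.
    return num_tokens


def time_to_first_token(prompt_tokens, store):
    """Tokens covered by a cached prefix skip prefill entirely."""
    # Look for the longest cached prefix of this prompt.
    for cut in range(len(prompt_tokens), 0, -1):
        prefix = tuple(prompt_tokens[:cut])
        if store.get(prefix) is not None:
            # Only the uncached tail of the prompt needs prefill.
            return prefill_cost(len(prompt_tokens) - cut)
    # Cache miss: prefill the whole prompt, then store its state for reuse.
    store.put(tuple(prompt_tokens), "kv-state")
    return prefill_cost(len(prompt_tokens))
```

Under this toy cost model, a repeated 1,000-token prompt costs 1,000 units cold but 0 units warm, and extending a cached prompt by two tokens costs only 2 units; the real systems add transport and memory-tiering concerns this sketch ignores.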
Additionally, the Dell AI Factory now supports the PowerEdge XE7740 and XE7745 servers equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA Hopper GPUs. These high-performance systems empower organisations to efficiently run expansive multimodal models, agent-driven workloads, training processes, and enterprise-level inference tasks.
To simplify deployment, Dell has introduced the Dell Automation Platform, which delivers pre-tuned, validated configurations through secure, repeatable setups. This platform provides teams with a clear roadmap for constructing AI workflows. Complementing this, software tools like the AI code assistant powered by Tabnine and the agentic AI platform developed with Cohere North automate workload transitions into production, enhancing scalability and operational control.
Expanding Hardware Options and Pilot Programs
Beyond data center infrastructure, Dell’s AI PC ecosystem now includes devices featuring NVIDIA RTX Blackwell and Ada GPUs, broadening hardware choices across Dell’s laptop and desktop lines. Dell Professional Services offers interactive pilot programs that leverage customers’ proprietary data to validate AI concepts before committing to large-scale investments. These pilots emphasize measurable outcomes, enabling businesses to assess AI’s impact with greater confidence.
Next-Generation Infrastructure for Enhanced AI and HPC Workloads
Dell is refreshing its infrastructure lineup to better accommodate complex AI and high-performance computing (HPC) demands, focusing on scalability, performance, and simplified management. Launching next month, the PowerEdge XE8712 supports up to 144 NVIDIA Blackwell GPUs within a standard rack, making rack-scale AI and HPC more accessible. This system benefits from unified monitoring and automation tools such as iDRAC, OpenManage Enterprise, and the Integrated Rack Controller.
Furthermore, Dell’s Enterprise SONiC Distribution now supports NVIDIA Spectrum-X platforms alongside NVIDIA Cumulus Linux, enabling organisations to build open, standards-compliant AI networks interoperable across multiple vendors. The latest SmartFabric Manager update extends compatibility with Dell’s Enterprise SONiC on NVIDIA Spectrum-X, reducing deployment times and minimizing configuration errors through guided automation.
Broader AI Ecosystem and Flexible Deployment Options
With AI budgets and strategies evolving, organisations seek adaptable tools and platforms. Red Hat OpenShift for the Dell AI Factory with NVIDIA has expanded validation to additional Dell PowerEdge servers, including the R760xa and XE9680 models equipped with NVIDIA H100 and H200 Tensor Core GPUs. This integration combines Red Hat’s governance and control capabilities with Dell’s secure infrastructure, offering a robust foundation for scaling AI initiatives.
Dell leadership emphasizes that these updates are designed to help enterprises transition from pilot projects to full-scale AI deployments without the need to overhaul existing infrastructure. Jeff Clarke, Dell Technologies’ vice chairman and COO, highlighted that Dell has “done the integration work so customers don’t have to,” facilitating smoother and more confident AI scaling.
From NVIDIA’s perspective, this evolution marks a pivotal shift from AI experimentation to transformative enterprise adoption. Justin Boitano, VP of Enterprise AI Products at NVIDIA, described this phase as “redefining how businesses operate,” with Dell and NVIDIA’s unified platform combining infrastructure, automation, and data management tools to enable scalable AI deployment and measurable business impact.
Industry analysts echo this sentiment. Ashish Nadkarni, Group VP at IDC, noted the growing demand for AI-ready systems that balance power with ease of management. He praised the Dell and NVIDIA partnership as “a significant advancement in delivering enterprise-grade AI solutions.”