Meta and Oracle Enhance AI Data Centers with NVIDIA’s Advanced Spectrum-X Networking
Meta and Oracle are revolutionizing their AI infrastructure by integrating NVIDIA’s cutting-edge Spectrum-X Ethernet switches. This technology is engineered to meet the escalating demands of expansive AI systems, enabling more efficient training and faster deployment across vast compute clusters. Both companies are leveraging Spectrum-X within an open networking framework to optimize AI workloads at scale.
Transforming Data Centers into AI Powerhouses
Jensen Huang, CEO of NVIDIA, describes the rise of trillion-parameter AI models as a catalyst turning data centers into “giga-scale AI factories.” He emphasizes that Spectrum-X functions as the critical “nervous system” that interconnects millions of GPUs, facilitating the training of some of the largest AI models ever developed.
Oracle’s Vision: Building Large-Scale AI Factories with Vera Rubin
Oracle is set to deploy Spectrum-X Ethernet switches alongside its Vera Rubin architecture to construct expansive AI factories. Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure, highlights that this integration will enable the company to link millions of GPUs more effectively, accelerating AI model training and deployment for its customers.
Meta’s Expansion of AI Infrastructure via FBOSS Integration
Meta is scaling its AI capabilities by embedding Spectrum-X Ethernet switches into its proprietary Facebook Open Switching System (FBOSS). Gaya Nagarajan, Meta’s Vice President of Networking Engineering, stresses the necessity for an open and efficient next-generation network to support increasingly large AI models and deliver seamless services to billions worldwide.
Designing Adaptable and Energy-Efficient AI Systems
Modular Architecture for Enhanced Flexibility
Joe DeLaere, head of NVIDIA’s Accelerated Computing Solution Portfolio for Data Centers, underscores the importance of flexibility amid growing data center complexity. NVIDIA’s MGX system offers a modular, building-block approach, allowing partners to mix and match CPUs, GPUs, storage, and networking components tailored to specific needs.
This design promotes interoperability across hardware generations, enabling faster deployment cycles and future-proofing infrastructure investments.
Addressing Power Efficiency Challenges
As AI models scale, power consumption becomes a critical concern. NVIDIA is tackling this “from chip to grid” by collaborating with power and cooling specialists to enhance energy efficiency. Innovations include transitioning to 800-volt DC power delivery, which minimizes heat loss and boosts efficiency, and implementing power-smoothing technologies that can reduce peak power demands by up to 30%, enabling greater compute density within existing data center footprints.
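As a rough illustration of how peak shaving works, the sketch below is a toy model with made-up numbers, not NVIDIA's implementation. It clips a bursty per-rack power trace at 70% of its raw peak, on the assumption that local energy storage covers the clipped spikes:

```python
# Toy illustration (not NVIDIA's implementation): power smoothing caps the
# synchronized power spikes of AI training so a data center can provision
# for a lower peak. All numbers below are invented for illustration.

def smooth(trace, cap):
    """Clip draw above `cap`; the deficit is assumed to be covered by
    local energy storage charged during the troughs."""
    return [min(p, cap) for p in trace]

# Bursty per-rack power trace in kW: spikes during synchronized
# communication phases, troughs during compute phases.
trace = [80, 120, 85, 130, 90, 125, 88, 128]

peak = max(trace)        # 130 kW provisioned without smoothing
cap = 0.7 * peak         # a ~30% lower peak, matching the claim above
smoothed = smooth(trace, cap)

print(f"raw peak: {peak} kW, smoothed peak: {max(smoothed):.0f} kW")
# -> raw peak: 130 kW, smoothed peak: 91 kW
```

The point is that a data center is provisioned for its worst-case draw, so flattening transient spikes frees headroom for more racks in the same footprint.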
Scaling AI Infrastructure: Up, Out, and Across
MGX System Enables Versatile Scaling
Gilad Shainer, NVIDIA’s Senior Vice President of Networking, explains that MGX racks integrate both compute and switching elements, supporting NVLink for vertical scaling and Spectrum-X Ethernet for horizontal expansion. This architecture allows multiple AI data centers to interconnect seamlessly, forming unified systems capable of handling massive distributed AI training workloads.
Open Networking for Diverse Environments
Meta’s adoption of Spectrum-X highlights the growing trend toward open networking. While FBOSS serves as Meta’s network operating system, Spectrum-X is compatible with other platforms such as Cumulus, SONiC, and Cisco’s NOS. This versatility empowers hyperscalers and enterprises to standardize infrastructure using solutions best suited to their operational needs.
Broadening the AI Ecosystem with Spectrum-X
NVIDIA envisions Spectrum-X as a catalyst for making AI infrastructure more efficient and accessible across various scales. The platform is specifically optimized for AI workloads, delivering up to 95% effective bandwidth, significantly outperforming traditional Ethernet, which typically achieves around 60% due to flow collisions.
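The bandwidth figures can be sanity-checked with simple arithmetic. The sketch below assumes an illustrative 400 Gb/s link and a 1 TB gradient exchange; both are arbitrary example values, not figures from NVIDIA:

```python
# Back-of-envelope check: how long a fixed data exchange takes at 95%
# vs ~60% utilization of the same link. Link rate and payload size are
# arbitrary example values.

LINK_GBPS = 400            # nominal line rate, gigabits per second
PAYLOAD_GBITS = 8_000      # 1 TB of gradients = 8,000 gigabits (example)

def transfer_seconds(efficiency):
    return PAYLOAD_GBITS / (LINK_GBPS * efficiency)

t_spectrum_x = transfer_seconds(0.95)   # ~21.1 s
t_traditional = transfer_seconds(0.60)  # ~33.3 s

print(f"{t_traditional / t_spectrum_x:.2f}x faster at 95% efficiency")
# -> 1.58x faster at 95% efficiency
```

Because distributed training synchronizes at every step, that roughly 1.6x difference in exchange time compounds across the whole job.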
Collaborations with industry leaders like Cisco, xAI, Meta, and Oracle Cloud Infrastructure are expanding Spectrum-X’s reach, enabling its deployment from hyperscale data centers to enterprise environments.
Looking Ahead: Vera Rubin Architecture and Future Innovations
DeLaere reveals that NVIDIA’s Vera Rubin architecture is slated for commercial release in the latter half of 2026, with the Rubin CPX product expected by year-end. These advancements will integrate seamlessly with Spectrum-X networking and MGX systems to support next-generation AI factories.
He clarifies that while Spectrum-X and XGS share core hardware, they employ distinct algorithms tailored for different distances: Spectrum-X for intra-data center communication and XGS for inter-data center links. This minimizes latency and enables multiple sites to operate as a cohesive AI supercomputer.
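A quick calculation shows why distance-aware algorithms matter: propagation delay through fiber grows linearly with distance, so inter-data center links face fundamentally different latency budgets than links inside a building. The sketch below uses the standard approximation that light travels through optical fiber at roughly two-thirds the speed of light in vacuum, about 200,000 km/s:

```python
# Fiber propagation delay alone dominates inter-data-center round trips.
# Approximation: light in fiber covers ~200,000 km/s (about 5 us per km).

def fiber_rtt_ms(distance_km, km_per_s=200_000):
    """Round-trip propagation time over fiber, in milliseconds."""
    return 2 * distance_km / km_per_s * 1000

print(f"intra-DC (0.5 km): {fiber_rtt_ms(0.5):.3f} ms")   # -> 0.005 ms
print(f"metro (80 km):     {fiber_rtt_ms(80):.1f} ms")    # -> 0.8 ms
print(f"regional (800 km): {fiber_rtt_ms(800):.0f} ms")   # -> 8 ms
```

A congestion-control loop tuned for microsecond round trips inside a data center behaves very differently when feedback takes milliseconds to arrive, which is the motivation for separate intra- and inter-site algorithms.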
Collaborative Efforts Across the Power Supply Chain
To facilitate the transition to 800-volt DC power, NVIDIA is partnering with key players across the power ecosystem. Collaborations include working with Onsemi and Infineon on power components, Delta, Flex, and Lite-On at the rack level, and Schneider Electric and Siemens on data center design. A comprehensive technical white paper detailing this integrated approach will be unveiled at the upcoming OCP Summit.
DeLaere describes this as a “holistic design from silicon to power delivery,” ensuring seamless operation in high-density AI environments like those operated by Meta and Oracle.
Performance Benefits Tailored for Hyperscalers
Spectrum-X Ethernet is engineered specifically for distributed AI workloads, featuring adaptive routing and telemetry-driven congestion control to eliminate network bottlenecks and maintain consistent performance. These capabilities enable faster AI training and inference while supporting concurrent workloads without interference.
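To illustrate the idea behind adaptive routing, the toy comparison below (not NVIDIA's actual algorithm) contrasts load-aware path selection with classic static hashing across four equal-cost paths. Static hashing pins each flow to one path regardless of load, so a few large flows can collide on the same link; load-aware selection steers traffic to the least-loaded path:

```python
# Toy sketch (not NVIDIA's algorithm): adaptive, load-aware path
# selection vs. static hashing over equal-cost paths.

import random

def adaptive(n_items, n_paths):
    """Send each unit of traffic down the least-loaded path."""
    loads = [0] * n_paths
    for _ in range(n_items):
        loads[min(range(n_paths), key=loads.__getitem__)] += 1
    return loads

def static_hash(n_items, n_paths, seed=42):
    """Classic ECMP stand-in: each flow is pinned to a pseudo-random path."""
    rng = random.Random(seed)
    loads = [0] * n_paths
    for _ in range(n_items):
        loads[rng.randrange(n_paths)] += 1
    return loads

print("adaptive:", adaptive(1000, 4))     # perfectly balanced: [250]*4
print("static:  ", static_hash(1000, 4))  # uneven; hottest path bottlenecks
```

Real switches make this decision from live telemetry such as egress queue depth rather than a simple counter, but the payoff is the same: the slowest path, which gates collective operations, stays close to the average.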
Shainer emphasizes that Spectrum-X is the only Ethernet technology proven to scale to extreme levels, maximizing GPU investment returns. For hyperscalers such as Meta, this scalability is crucial to managing surging AI training demands efficiently.
Synergizing Hardware and Software for Optimal AI Performance
Beyond hardware innovation, NVIDIA prioritizes software optimization through co-design strategies that align hardware and software development. Investments in FP4 kernels, frameworks like Dynamo and TensorRT-LLM, and advanced algorithms such as speculative decoding enhance throughput and AI model efficiency.
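Speculative decoding can be illustrated with a toy sketch; the "models" below are trivial stand-ins, not real LLMs. A cheap draft model proposes several tokens at once, and the expensive target model verifies them in a single pass, keeping the longest agreeing prefix so that one costly step can emit multiple tokens:

```python
# Toy sketch of speculative decoding (illustrative only). Both "models"
# are trivial stand-ins: the draft guesses consecutive integers, and the
# target disagrees only on multiples of 5.

def draft_model(prefix, k=4):
    # Cheap guesser: proposes the next k tokens in one shot.
    return [prefix[-1] + i + 1 for i in range(k)]

def target_model(prefix, proposed):
    # Expensive model: verifies all proposed tokens in one pass, stopping
    # at the first disagreement and substituting its own token there.
    verified = []
    for tok in proposed:
        correct = tok if tok % 5 != 0 else tok + 100
        verified.append(correct)
        if correct != tok:
            break
    return verified

def generate(prompt, n_tokens):
    seq = list(prompt)
    while len(seq) < len(prompt) + n_tokens:
        accepted = target_model(seq, draft_model(seq))
        seq.extend(accepted)   # one target pass may yield several tokens
    return seq[:len(prompt) + n_tokens]

print(generate([1], 8))
# -> [1, 2, 3, 4, 105, 106, 107, 108, 109]
```

When the draft agrees often, the target model runs far fewer passes than the number of tokens generated, which is where the throughput gain comes from.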
These continuous improvements ensure platforms like Blackwell deliver sustained performance gains, meeting the rigorous demands of hyperscalers reliant on consistent AI capabilities.
Networking Solutions for the Trillion-Parameter AI Era
The Spectrum-X platform, encompassing Ethernet switches and SuperNICs, represents NVIDIA’s inaugural Ethernet system purpose-built for AI workloads. It efficiently connects millions of GPUs while maintaining predictable, high-performance networking across AI data centers.
With congestion control technologies achieving up to 95% data throughput, Spectrum-X significantly surpasses traditional Ethernet performance. Its XGS technology extends connectivity across long distances, linking multiple data centers into unified “AI super factories.”
By integrating NVIDIA’s comprehensive stack, including GPUs, CPUs, NVLink, and software, Spectrum-X delivers the consistent, scalable performance essential for training trillion-parameter models and powering the next generation of generative AI applications.