A big AI build is ‘stalled’ and won’t be happening this year due to a lack of funds and GPUs

Big Tech’s plans to spend hundreds of billions of dollars on infrastructure in 2025 are often viewed as demonstrating near-endless demand for artificial intelligence. However, networking vendor Arista has just revealed that a large AI project has “stalled” due to a lack of funding and hardware.

Around this time last year, Arista announced it had won five ‘large’ deals to build networks to support AI infrastructure. Four of the deals involved Ethernet products, and one used InfiniBand. CEO Jayshree Ullal said in May that the four Ethernet contracts were moving from pilots to production, adding: “We expect production in the range of 10,000 to 100,000 GPUs in 2025.” Of the fifth deal, the chief executive said the customer “may get back on their feet.” She continued: “In 2025, they’re awaiting new GPUs, and they’ve got some challenges on power, cooling, etc. We’re a little bit stalled, which may be why we’re being careful about predicting how they’ll do. They may step in nicely in the second half of 2025.”

That customer did not get the GPUs they needed.

During Arista’s earnings conference call with Wall Street on Tuesday this week, held to discuss its fourth quarter of 2024, Ullal revealed that the troubled client, which she described as “not a cloud titan,” is “a little bit stuck.”

“I think they are waiting for new GPUs, and funding,” she said. “I hope they come back next year. But we won’t be talking about them this year.”

It could be that the customer couldn’t get to the front of the line to buy (say) Nvidia accelerators, so its backers withheld their money until GPUs become available.

Whatever happened, this is an unusual example of an AI bust in the midst of the current boom. We’ll be watching for signs of a wider malaise.

  • Arista provides 800G switches for the data-hungry gods of cloud computing
  • When we can’t build bigger AI datacenters any more, what happens?
  • No-Nvidias club meets to find open GPU interconnect
  • Arista switches with AMD Xilinx-based FPGAs are designed for high-frequency traders

Those remarks came as Arista delivered its fourth-quarter 2024 results, which The Next Platform covered in depth here. The headline figures: quarterly revenue up 6.6 percent to $1.93 billion, a 25.3 percent year-on-year increase from the fourth quarter of 2023; annual revenue up 19.5 percent to $7 billion; and full-year net profit up 37 percent to $2.85 billion.

Microsoft accounts for 20 percent of the business and Meta for 14.6 percent, but we’re told they are just two of more than 10,000 customers that have cumulatively installed 100 million ports over Arista’s history. Many customers are moving to 400 and 800 gigabit Ethernet, a welcome shift to newer technology. John McCool, senior veep and chief platform officer, said the company has already taken steps to reduce its dependence on China, so tariffs shouldn’t pose a problem. Ullal expressed satisfaction with recent AI-related revenue and said networkers such as Arista would soon have more opportunities.

“If you look at the way we have traditionally approached GPUs and networks, we’ve largely viewed them as two different building blocks,” she said. “The vendor provides the GPU and we provide the scale-out network.”

“When you look at Stargate and projects like these, I think you’ll see more vertical rack integration, where the processor, scale-up, scale-out, and all of the software to provide a unified point of control and transparency start to come together more and more,” she continued.

According to the CEO, this won’t happen this year. She said that in ’26 or ’27 you will see a whole new class of AI accelerators, completely different from the current pluggable, Lego-like versions. That is good news for Arista, because the new accelerators will generate more traffic, which in turn will require 1.6T Ethernet.

Perhaps even with that “stalled” customer. ®
