AI-chip startup Cerebras Systems says it has cleared a major hurdle ahead of its planned initial public offering (IPO), claiming to have resolved the Committee on Foreign Investment in the United States' (CFIUS) concerns about its funding sources.
The watchdog panel was most concerned about Cerebras' dependence on the United Arab Emirates' (UAE's) AI crown jewel, G42. According to the upstart's IPO filings from September, G42 accounted for over 87 percent of its revenue in the first half of 2024.

Exports of AI systems and accelerators to Middle Eastern countries have been tightly controlled since mid-2023, and US chip designers must obtain special licenses to sell their wares in the region. To get around those restrictions, G42 funded roughly $900 million worth of AI supercomputers in the US based on Cerebras' wafer-scale CS-2 and CS-3 systems, with the intention of accessing them remotely. G42 also planned to purchase more than 22 million shares of Cerebras in the IPO.

CFIUS, apparently, was not pleased. As we reported previously, G42 had come under scrutiny from US intelligence agencies, which reportedly feared the UAE-based company could be providing China with advanced technologies and access to genetic data on millions of people. G42 has since cut ties with some suppliers, including Huawei, to appease US export czars and clear roadblocks for its collaborations with Cerebras and Microsoft.

Cerebras claims it addressed CFIUS' concerns by amending its agreement with G42 to limit the UAE-based tech company to non-voting shares, making a further review unnecessary, Bloomberg reported on Monday. Cerebras confirmed in an email to El Reg that it had resolved the CFIUS objections, but made no further comment. The US Treasury Department declined to comment.
Over on LinkedIn, Andrew Feldman, CEO of California-headquartered Cerebras, gushed: “I am pleased to announce that working closely with our strategic partner G42, Cerebras Systems has reached a positive resolution with CFIUS and is moving forward.”
The top exec also thanked the US administration and the rulers of the UAE for their support of US-based artificial intelligence companies. The news comes less than a fortnight after Reuters reported that Cerebras had delayed its stock-market debut while it waited for the Trump administration to appoint the key officials needed to complete the CFIUS review. The terms of Cerebras' planned IPO are not yet known, but the company is expected to raise up to $1 billion at a valuation of between $7 billion and $8 billion, according to Bloomberg.
- Nvidia won the AI training race, but inference is still anyone’s game
- Cerebras to light up datacenters in North America and France packed with AI accelerators
- We heard you like HBM – Nvidia’s Blackwell Ultra GPUs will have 288 GB of it
- LLM providers on the cusp of an ‘extinction’ phase as capex realities bite
While G42 remains one of Cerebras Systems' largest customers, the chip startup clearly aims to diversify its customer base with a high-performance inference-as-a-service platform backed by a massive infrastructure build-out across the US, Canada, and France, announced earlier this month. The startup has pledged to deploy over 1,000 of its wafer-scale accelerators across six datacenters by the end of 2025.
While that might not sound like all that many accelerators next to the hundred-thousand-plus GPU installations xAI and others are building, it's worth noting that Cerebras' dinner-plate-sized dies are each supposed to achieve up to 125 petaFLOPS of FP16 compute, about 62x that of the Nvidia H100s used to build the Colossus supercomputer in Memphis, Tennessee.
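For a rough sense of where that multiple comes from, here is a back-of-the-envelope sketch comparing Cerebras' claimed 125 petaFLOPS per chip against the H100's published FP16 Tensor Core rating with sparsity (roughly 1.98 petaFLOPS). The H100 figure is taken from Nvidia's spec sheet rather than from Cerebras' own comparison, so treat the choice of baseline as an assumption.

```python
# Back-of-the-envelope check on the "about 62x" claim.
# Assumption: the baseline is the H100 SXM's FP16 Tensor Core rating with sparsity (~1.979 PFLOPS).
cerebras_chip_fp16_pflops = 125.0   # Cerebras' claimed per-chip FP16 throughput
h100_fp16_sparse_pflops = 1.979     # Nvidia's published H100 SXM figure

ratio = cerebras_chip_fp16_pflops / h100_fp16_sparse_pflops
print(f"Cerebras chip vs H100: ~{ratio:.0f}x per chip")  # ~63x, in the same ballpark as "about 62x"
```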
Cerebras will operate the majority of its new sites in partnership with G42, which means the UAE outfit gets first dibs on capacity. It will, however, retain full control of its Oklahoma City, Oklahoma, and Montreal, Canada, locations. Along with model training, Cerebras is pushing high-throughput serving of reasoning models, like DeepSeek's R1 distillation of Llama 3.3 70B, as a way to differentiate itself. The chip designer claims its systems can serve models of this size at up to 1,508 tokens a second, far faster than conventional GPU providers. ®