Cerebras Systems, the AI chipmaker whose first IPO attempt stalled in late 2024 amid a US national security review of its relationship with Abu Dhabi's G42, filed a fresh S-1 with the SEC on 17 April targeting a Nasdaq listing. The company was most recently valued at approximately $23 billion in its February Series H round.
Morgan Stanley, Citigroup, Barclays and UBS are underwriting the offering, which is targeting a mid-May listing window.
The filing shows $510 million in revenue for 2025, up 76% from $290.3 million in 2024, and GAAP net income of $87.9 million after a net loss of $484.8 million the prior year. But the swing to reported profitability is driven almost entirely by a $363 million one-off accounting gain from the restructuring of a G42 forward contract liability, not from operating performance. Strip that out and the company posted a non-GAAP net loss of $75.7 million, widening from $21.8 million in 2024.
The regulatory overhang that killed the first IPO attempt has been resolved. CFIUS granted clearance in March 2025 after Cerebras restructured G42's equity stake to non-voting shares, effectively removing the Abu Dhabi group from corporate governance.
The clearance came just two weeks after Sheikh Tahnoon bin Zayed Al Nahyan, G42's chairman, the UAE's national security advisor and brother to the president, met US Treasury Secretary Scott Bessent in Washington to discuss expanding UAE access to advanced American semiconductors.
G42 no longer appears on Cerebras's investor list in the new S-1.
But the customer concentration problem that spooked investors in 2024 has not been solved. It has migrated. G42 accounted for 87% of Cerebras's revenue in the first half of 2024, a figure that dropped to 24% for full-year 2025.
The gap was filled not by US enterprise customers or hyperscalers but by MBZUAI, the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi, which accounted for 62% of 2025 revenue and 78% of outstanding accounts receivable at year-end.
Together, the two Abu Dhabi entities still accounted for 86% of Cerebras's total revenue in 2025. Revenue from US-billed customers actually declined 34% year-on-year, from $282.7 million to $187.6 million.
However, the S-1 does lay out a path to diversification. Cerebras signed a multi-year agreement with OpenAI in December 2025 to deploy 750 megawatts of inference compute capacity, a deal valued at more than $20 billion that represents the bulk of the company's projected revenue over the coming years. The arrangement includes a $1 billion working capital loan from OpenAI to Cerebras and warrants allowing OpenAI to purchase Cerebras stock at favourable prices.
A binding term sheet with AWS, signed in March 2026, sets out pricing, exclusivity and minimum capacity commitments, though final agreements have not been completed. If both partnerships reach full scale, Cerebras's revenue base would shift dramatically from Gulf sovereign AI infrastructure toward US hyperscaler demand.
Axios reported the company is targeting a $35 billion valuation at listing, and CEO Andrew Feldman and CTO Sean Lie are in line for additional equity awards should the company reach average valuations of $75 billion, $150 billion and $250 billion within nine years.
Cerebras has raised approximately $2.8 billion across its history. The Gulf connection dates to November 2021, when the company raised $250 million in a Series F led by Alpha Wave Ventures, alongside Abu Dhabi Growth Fund and G42, at a $4 billion valuation.
G42 later committed to purchase $335 million worth of Cerebras stock, the transaction that triggered the CFIUS review. In October 2025, Cerebras raised $1.1 billion in a Series G at $8.1 billion led by Fidelity and Atreides Management, and in February 2026, $1 billion in a Series H at $23 billion led by Tiger Global, with participation from AMD, Benchmark, Coatue and Fidelity.
Its wafer-scale engine, the WSE-3, is fabricated on TSMC's 5nm process and, at 46,225 square millimetres, is 57 times larger than Nvidia's H100. The chip is designed to challenge Nvidia's near-total dominance of the AI chip market by offering superior memory bandwidth for inference workloads.