Is OpenAI Betting $15.9B That CoreWeave Can Outscale Microsoft and Amazon?
The multi-cloud shift, the rise of the “NeoCloud,” and the high-wire act behind the biggest AI infrastructure bet since Azure
Welcome to Global Data Center Hub. Join 1,700+ investors, operators, and innovators reading to stay ahead of the latest trends in the data center sector across developed and emerging markets.
In 2023, Sam Altman admitted it: OpenAI was running out of GPUs.
It wasn’t a minor issue; it was a bottleneck. In a world where compute is the new oil, running out is like Exxon running out of barrels.
By 2025, OpenAI had signed two major deals with CoreWeave worth $15.9B. Spanning five years, the contracts include hardware and equity, reshaping not just OpenAI’s stack but cloud power dynamics.
This wasn’t just diversification. It signaled the end of cloud centralization and the rise of custom, multi-cloud AI infrastructure.
From Crypto to Critical Infrastructure
Rewind five years and CoreWeave was running an Ethereum mining operation. No billion-dollar contracts. No NASDAQ ticker. No NVIDIA equity stake.
Today, it operates over 250,000 GPUs across 32 data centers, raised $1.5 billion in its March 2025 IPO, and holds infrastructure commitments from the world’s most compute-hungry lab.
This is the story of how a former miner became OpenAI’s biggest non-Microsoft infrastructure partner, and possibly the first real threat to the cloud oligopoly.
But the story only makes sense when you look beneath the surface:
Microsoft’s exclusivity clause with OpenAI quietly expired in January 2025
GPU availability began tightening across the board
Inference and fine-tuning demand surged beyond forecast
Model architecture advances (multimodal, reasoning, long-context) started scaling faster than hyperscaler procurement cycles
That gap was CoreWeave’s opening.
This evolution received another major boost in May: read OpenAI’s $4B Expansion with CoreWeave to see how that deal doubled down on compute leverage and equity alignment.
The Shape of the Deal
The first contract, signed in March 2025, committed OpenAI to $11.9 billion in compute spend over five years. It included a take-or-pay clause, meaning OpenAI must pay for reserved GPU capacity whether it’s used or not, and $350 million in equity at IPO pricing.
Two months later, a second agreement added up to $4 billion in additional commitments, extending the relationship through April 2029. This expansion also secured OpenAI’s early access to NVIDIA’s Blackwell architecture, a not-so-subtle hedge against hardware scarcity as GB200 chips began shipping in limited volumes.
Combined, the two contracts represent over 35 billion GPU hours purchased, roughly equivalent to a sustained 2-gigawatt compute load. That’s not a vendor agreement. That’s an infrastructure backbone.
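The back-of-the-envelope math behind that 2-gigawatt figure can be checked directly. Here is a minimal sketch, assuming roughly 2.5 kW of all-in facility power per GPU (server, networking, and cooling overhead); that per-GPU figure is an illustrative assumption, not a number disclosed in the contracts:

```python
# Sanity-check the "35 billion GPU hours ~ 2 GW sustained" claim.

GPU_HOURS = 35e9               # total GPU hours across both contracts
CONTRACT_HOURS = 5 * 365 * 24  # five-year term, in hours

# Assumed all-in facility power per GPU (server + cooling overhead).
# ~2.5 kW is a rough figure for a modern H100/GB200-class deployment;
# it is an assumption for illustration, not a number from the deal.
KW_PER_GPU = 2.5

gpus_running = GPU_HOURS / CONTRACT_HOURS       # GPUs running 24/7
sustained_gw = gpus_running * KW_PER_GPU / 1e6  # kW -> GW

print(f"{gpus_running:,.0f} GPUs running continuously")
print(f"~{sustained_gw:.1f} GW sustained load")
```

Under those assumptions the math works out to roughly 800,000 GPUs running around the clock, which lands at about 2 GW of sustained draw.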
And OpenAI didn’t just buy compute.
It bought leverage.
Betting Against the Hyperscaler Stack
What OpenAI wants, and what Microsoft alone can’t fully deliver, is speed, predictability, and optionality.
Azure remains a key partner, but OpenAI’s infrastructure strategy is now multi-cloud by design and multi-vendor by necessity. Oracle Cloud, Google Cloud, and CoreWeave are central to its $500B Stargate initiative to rebuild the AI stack from the ground up with control across power, cooling, silicon, and orchestration.
CoreWeave plays a pivotal role:
Bare-metal clusters for AI workloads
Custom software stack for GPU efficiency
Tight NVIDIA ties for early chip access
1.3 GW of owned power via Core Scientific
This isn’t yesterday’s cloud.
It’s the infrastructure AI was waiting for.
The Risk Beneath the Backlog
The OpenAI contracts are transformational but they don’t solve CoreWeave’s biggest challenge: its balance sheet.
By Q1 2025, CoreWeave held over $12B in debt, much of it vendor-financed or tied to high-yield instruments with rates above 12%. It burned $5B in Q1 alone, with 2025 capex projected at $20B–$23B. Credit agencies warn interest expenses could top $1B annually through 2027.
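Those debt-service figures are easy to sanity-check. A quick sketch using the reported numbers; the blended-rate scenario is illustrative, since CoreWeave’s actual capital structure mixes tranches at different rates:

```python
# Rough debt-service arithmetic on CoreWeave's reported balance sheet.

DEBT = 12e9  # reported debt as of Q1 2025 ("over $12B", so a floor)

# If the entire stack carried the >12% rate of the high-yield tranches:
worst_case_interest = DEBT * 0.12  # annual interest expense

# Blended rate at which interest expense tops $1B per year:
breakeven_rate = 1e9 / DEBT

print(f"At 12% across the stack: ${worst_case_interest / 1e9:.2f}B/yr")
print(f"Interest exceeds $1B/yr above a blended rate of {breakeven_rate:.1%}")
```

In other words, even a blended rate of just over 8% on $12B of debt clears the $1B-a-year interest mark the rating agencies flag, and at the 12%+ rates on the high-yield tranches the bill approaches $1.44B.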
The contracts provide revenue visibility but also increase concentration risk:
Microsoft accounted for 62% of 2024 revenue
With OpenAI, over 70% of future backlog relies on just two clients
A delay, chip shortage, or power issue in ERCOT or PJM could shift the narrative from growth to overreach
That’s why the equity stake matters.
By tying OpenAI’s upside to CoreWeave’s market value, both firms are now aligned. In today’s AI arms race, trust and strategic alignment may prove more valuable than capacity alone.
For a deeper dive into how financing structures influence risk in high‑growth AI data center ventures, see Data Center Capital Strategy: Balancing Growth, Risk, and Returns in an AI‑Driven Market.
The NeoCloud Moment
What makes CoreWeave different isn’t just that it got the deal. It’s that the deal makes it impossible to ignore.
It validates a new category of cloud provider: what some now call the “NeoCloud.”
These aren’t general-purpose platforms for SaaS workloads or enterprise IT.
They’re GPU-first, latency-aware, vertically integrated, and built to win the most compute-intensive, mission-critical workloads on Earth.
If hyperscalers were built to serve everyone, NeoClouds are built to serve the frontier:
The companies training trillion-parameter models
The labs fine-tuning agents
The inference networks running 24/7 across millions of users
And the CoreWeave–OpenAI contracts have made one thing clear:
That future doesn’t just belong to the biggest cloud.
It belongs to the fastest-moving one.
Final Thought
This isn’t the end of the hyperscaler era.
But it is the beginning of a new power dynamic.
In AI infrastructure, optionality is strategy. Capacity is leverage. And the companies who can deliver both on time, at scale, and at cost will increasingly shape the direction of the entire industry.
CoreWeave has bought itself a front-row seat.
Now it has to survive long enough to claim it.