What Do the Smartest Tech Investors Know About AI That Everyone Else Is Missing?
Coatue’s 2025 EMW keynote didn’t focus on models, apps, or ChatGPT clones. It revealed the $2.2 trillion infrastructure shift quietly reshaping who will control the AI economy.
(from left to right: Philippe Laffont, Brad Gerstner, Bill Gurley, Thomas Laffont)
Welcome to Global Data Center Hub. Join 1,400+ investors, operators, and innovators reading to stay ahead of the latest trends in the data center sector across developed and emerging markets globally.
Philippe and Thomas Laffont don’t follow narratives.
They engineer the conditions that make them inevitable.
At the 2025 East Meets West (EMW) conference in Montecito, Coatue didn’t talk about prompt engineering or model tuning.
They revealed a multi-trillion-dollar reordering of the infrastructure stack behind AI and who will control it.
Their message was clear:
The future of AI isn’t just software. It’s power, hardware, real estate, and velocity.
Here are 20 of the most critical stats and insights from Coatue’s 2025 EMW keynote, the kind of signals that smart capital is already acting on.
AI Infra: The New Center of Gravity
1. $2.2 trillion in AI infrastructure spend projected by 2028
Up from $50 billion in 2023. Coatue sees the fastest investment acceleration in enterprise history across compute, energy, and hyperscale campuses.
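To put that acceleration in perspective, here is a minimal sketch of the compound annual growth rate implied by those two endpoints. The $50 billion and $2.2 trillion figures are from the keynote as reported; the calculation itself is illustrative.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# $50B in 2023 -> $2.2T in 2028 is a five-year span.
growth = cagr(50e9, 2.2e12, 5)
print(f"Implied CAGR: {growth:.0%}")  # Implied CAGR: 113%
```

A sustained triple-digit annual growth rate is what "fastest investment acceleration in enterprise history" translates to in numbers.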
2. 37% of that projected capital is allocated to compute silicon
The majority of that is earmarked for inference-optimized chips and custom accelerators, not general-purpose GPUs.
3. AI-native data centers now average 3.4x more CapEx per megawatt than traditional facilities
Coatue notes the rising cost comes from power resiliency, high-density racks, and advanced cooling, not land or square footage.
4. Several hyperscalers are targeting 1.5 GW+ campus footprints by 2027
Coatue predicts 5–7 global megacampus hubs will become control points for inference-scale compute.
5. 100kW rack designs are now included in 70% of new AI-first builds
These power densities are driving rapid innovation in modular cooling and substation integration.
6. Over 60% of projected data center load in the AI era will come from continuous inference tasks
Training has peaked. Continuous, dynamic workloads, especially from autonomous systems, will dominate energy draw.
7. Total inference energy consumption per model is now outpacing training by 3–5x
Coatue sees this gap widening as agents shift from episodic to persistent modes of operation.
8. Up to 82% of real-time inference latency is now attributed to infra-layer friction
Not model complexity. Infra stack coordination (networking, memory movement, and edge-to-core routing) is the primary drag.
9. 15 of Coatue’s “Fantastic 40” companies operate at or below the infra layer
These firms focus on optimizing compute, grid integration, or orchestration, not LLM development.
10. AI workloads could require 6–9% of U.S. electricity generation by 2030
That is equivalent to the combined industrial demand of the top five U.S. manufacturing states today.
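A quick back-of-envelope check on what 6–9% means in absolute terms. The ~4,200 TWh figure for annual U.S. electricity generation is an approximation (roughly the 2023 level) and is my assumption, not a number from the keynote.

```python
# Assumed U.S. annual net electricity generation, in TWh (approx. 2023 level).
US_GENERATION_TWH = 4200

low = 0.06 * US_GENERATION_TWH
high = 0.09 * US_GENERATION_TWH
print(f"AI load by 2030: {low:.0f}-{high:.0f} TWh/yr")  # AI load by 2030: 252-378 TWh/yr
```

In other words, on the order of 250–380 TWh per year, which is why Coatue treats power, not chips, as the binding constraint.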
The Stack Is Being Rewritten
11. Over 50% of private AI infra investments are now integrating energy procurement into site selection
Coatue expects the data center site selection playbook to flip, starting with energy, not geography.
12. 14 of the 20 most promising infra startups in Coatue’s pipeline are vertically integrated
These firms combine silicon control, energy procurement, and custom rack designs to futureproof delivery.
13. Latency-aware edge zones are now being modeled in regional AI deployment strategy
Sub-20ms round-trip zones will act as geographic moats for AI agent performance and cost control.
14. AI infra velocity is becoming a differentiator in private markets
Firms able to build or retrofit sites within 6–9 months are commanding a 30–50% valuation premium.
15. xAI’s 200,000-GPU deployment was completed in under 7 months
A benchmark Coatue cited as “proof of how infra velocity compounds compute advantage.”
16. 59% of early-stage AI startups in Coatue’s 2025 funnel include custom hardware in their GTM
The move from API-first to silicon-aware is not a trend; it's a defense mechanism.
17. Agentic AI systems are projected to demand 23x more compute-hours than prompt-response models
Autonomous agents loop persistently. Their infra requirements are not linear; they're exponential.
18. Custom silicon yields a 2.7x long-term gross margin advantage for high-volume AI workloads
Coatue’s internal analysis shows silicon ownership is emerging as the most powerful cost control lever.
19. Firms scoring highest on Coatue’s Agentic Resilience Assessment Framework (ARAF) all operate with infra-level orchestration control
That includes workload scheduling, power management, and network redundancy, not just model choice.
20. Only 12% of public equity analysts currently factor energy constraints into AI growth models
Coatue sees this as the market’s biggest blind spot. Power, not parameters, is the scaling limit.
What This Means
Coatue isn’t making a thematic bet on AI.
They’re building a thesis around who gets to control it.
Their EMW 2025 message was direct:
Model quality will converge.
Infra control will compound.
The orchestration layer is the moat.
The winners in this cycle will not be those with the best prompts or APIs.
They’ll be those who own the power, the pipes, the latency zones, and the silicon that intelligence runs on.
If you’re still planning around software defensibility…
You’re already behind.
Want more briefings like this?
Subscribe to Global Data Center Hub to get exclusive breakdowns on capital flows, infra strategy, and AI investment frameworks.
Upgrade to paid for full access.