Is Meta’s $600B U.S. Data Center Bet the New Benchmark for AI Infrastructure Scale?
A $600B buildout that resets the floor for power, land, and compute in the AI race
Welcome to Global Data Center Hub. Join investors, operators, and innovators reading to stay ahead of the latest trends in the data center sector in developed and emerging markets globally.
Meta’s plan to spend $600 billion on U.S. infrastructure by 2028 resets the scale assumptions for the AI era.
The number is so large it stops resembling corporate capex and starts looking like national development spending.
It forces a new baseline question.
What is the minimum amount of electrified, GPU-dense, physically grounded infrastructure required to compete in frontier AI?
This analysis treats the $600 billion program as capital structure rather than headline.
The focus is what this level of spend means for power markets, land strategy, campus design, depreciation risk, and the competitive map hyperscalers are drawing across the country.
Meta’s Shift to Infrastructure Economics
Capex moved from roughly $28 billion in 2023 to $39 billion in 2024, to more than $70 billion in 2025. Several banks now model close to $100 billion for 2026. Meta is rebuilding its economics around fixed assets rather than software.
Earnings power now rests on asset life and depreciation curves. Free cash flow tightens as long as hardware cycles dominate spending. The competitive moat shifts toward land, power, cooling, and interconnects. Once a company commits at this scale, it begins to behave more like a utility than a software platform.
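To make the depreciation mechanics concrete, here is a minimal sketch of how stacked capex vintages turn into annual expense under straight-line accounting. The capex figures are the ones cited above; the useful-life values are illustrative assumptions, not Meta's disclosed accounting policy.

```python
# Capex by year in $B, per the figures above (2026 is the bank estimate).
capex_by_year = {2023: 28, 2024: 39, 2025: 70, 2026: 100}

def depreciation_schedule(capex_by_year, useful_life_years):
    """Straight-line depreciation: each year's capex is expensed evenly
    over its useful life, and successive vintages stack on each other."""
    expense = {}
    for start, amount in capex_by_year.items():
        annual = amount / useful_life_years
        for year in range(start, start + useful_life_years):
            expense[year] = expense.get(year, 0.0) + annual
    return expense

# Shorter asset lives pull expense forward: compare 6-year vs 4-year lives.
for life in (6, 4):
    sched = depreciation_schedule(capex_by_year, life)
    print(f"{life}-yr useful life -> 2026 depreciation: ${sched[2026]:.1f}B")
```

The point of the sketch: holding capex fixed, cutting the assumed life from six years to four lifts 2026 expense by roughly half, which is why asset-life assumptions now sit at the center of the earnings model.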
Multi-Gigawatt Superclusters as the New Baseline
Meta’s $600 billion build program is anchored in superclusters, not incremental campuses.
Prometheus in Ohio targets 1GW by 2026. Hyperion in Louisiana begins at 2GW and is designed to reach 5GW, making it one of the largest corporate loads in the country.
El Paso and Beaver Dam are engineered for the same outcome: large land positions, phased builds, and gigawatt-class density as utilities expand transmission.
This resets the competitive field. Sovereigns, infrastructure funds, and utilities are now the closest peers.
Power procurement and transmission strategy become core underwriting variables. MTDC operators must either chase smaller hyperscale modules or position as neutral hosts in markets Meta saturates. The scale baseline has shifted from 100MW to 1-5GW. Any serious long-term model must begin at this level.
GPU Scale Makes Depreciation the Core Risk
Meta already operates roughly 600,000 H100-class GPUs and plans to purchase more than 1.3 million accelerators per year.
At $25,000 to $30,000 per chip, these cycles reach tens of billions of dollars annually. The real risk is not cost but obsolescence cadence. Frontier chips now lose competitiveness in quarters, not years.
Shortened asset lives pull depreciation into the core of the model.
The economic questions shift to how short useful lives can become before the model breaks, how much stranded capacity emerges as thermal and networking requirements change, and how many refresh cycles a balance sheet can absorb. Until AI monetization increases, the GPU fleet behaves like a negative-carry asset that must be serviced.
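A back-of-envelope on those refresh cycles, using the figures above (roughly 1.3 million accelerators per year at $25,000 to $30,000 each); the useful-life scenarios are illustrative assumptions:

```python
# Annual accelerator spend implied by the article's procurement figures.
units_per_year = 1_300_000
price_low, price_high = 25_000, 30_000

spend_low = units_per_year * price_low / 1e9    # in $B
spend_high = units_per_year * price_high / 1e9
print(f"Annual accelerator spend: ${spend_low:.1f}B to ${spend_high:.1f}B")

# The fleet carried on the balance sheet scales with competitive life:
# a shorter life means fewer live units and less book value at steady state.
for life in (4, 2):
    fleet_units = units_per_year * life                 # live installed base
    carrying_value = spend_high * life / 2              # avg straight-line book value
    print(f"{life}-yr life: ~{fleet_units / 1e6:.1f}M live units, "
          f"~${carrying_value:.0f}B average carrying value")
```

The arithmetic shows why cadence dominates cost: at a constant $39B of annual purchases, halving the competitive life halves both the live fleet and the book value supporting it, while the cash outflow stays the same.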
Power as the Defining Constraint
Meta combines renewable PPAs, firm-capacity generation, and transmission upgrades sized for gigawatt loads.
The company has already influenced roughly 15GW of new clean energy in the United States. A 2-5GW facility such as Hyperion forces utilities to redesign planning around a single customer. Generation, transmission, and rate-base decisions shift accordingly.
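To put gigawatt-class loads in utility terms, a rough conversion from constant load to annual energy; the campus sizes are from the article, while the 90% load factor is an illustrative assumption for near-flat AI training demand:

```python
HOURS_PER_YEAR = 8760
LOAD_FACTOR = 0.90  # assumption: AI training loads run close to flat-out

def annual_twh(gw):
    """Annual energy consumption in TWh for a roughly constant load in GW."""
    return gw * HOURS_PER_YEAR * LOAD_FACTOR / 1000

for name, gw in [("Prometheus (1GW)", 1),
                 ("Hyperion initial (2GW)", 2),
                 ("Hyperion full buildout (5GW)", 5)]:
    print(f"{name}: ~{annual_twh(gw):.1f} TWh/yr")
```

A 5GW campus running near-flat draws on the order of 40 TWh a year, which is why a single tenant of this size forces utilities to rework generation and transmission planning around it.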
Regulators now confront questions about how much grid capacity a single tenant can consume, how ratepayers are protected when utilities build new generation for AI loads, and what limits should apply to land, water, and emissions.
AI infrastructure is outgrowing traditional economic-development frameworks.
Uneven Community Economics
Construction spending, subcontractor pipelines, and long-term economic output are material. Independent studies show multi-billion-dollar regional impact. But incentives reshape fiscal flow. Many states waive sales tax on equipment and reduce property tax burdens. Utilities recover capex through regulated tariffs. Local jurisdictions often wait years before capturing full tax revenue.
This raises new questions about true payback periods, the share of supply-chain value that remains local, and the protections communities should require before committing land, water, and grid capacity.
A New Benchmark for Hyperscalers
Microsoft, Amazon, and Google are all raising capex to meet AI demand. Meta's $600 billion commitment resets the threshold for what counts as a material investment. First movers win by locking up power and land. GPU procurement becomes a strategic advantage. Sovereign funds and infrastructure GPs enter partnerships earlier. Neutral MTDCs position around density, latency zones, and markets hyperscalers cannot serve quickly.
The industry has entered multi-gigawatt thinking. This is the new baseline.
Strategic takeaways
The $600 billion number represents what one company believes is necessary to compete at the frontier of AI.
It becomes the reference point for all others. Investors should model Meta as a hybrid between software platform and infrastructure operator.
Operators must choose whether to partner, compete, or specialize in density slices hyperscalers ignore. Policymakers must update land, power, water, and tax frameworks for private-sector gigawatt builds.
Meta is rebuilding itself around fixed assets at sovereign scale. Whether peers match it or not, this is now the benchmark every serious player must measure against.

