The Hunt for Compute: Why Watts and Wires Now Decide the Future of AI
The next constraint on AI isn't silicon; it's how fast power and network interconnections can be delivered.
Welcome to Global Data Center Hub. Join investors, operators, and innovators reading to stay ahead of the latest trends in the data center sector in developed and emerging markets globally.
This guest post comes from Ahmed Ismail, a technology consultant, entrepreneur, and data center strategist.
Power, not chips, is the new growth constraint
Computing power has quietly become one of the world’s most strategic resources. It fuels cloud platforms, scientific research, and the entire artificial intelligence economy.
But as chip supply chains recover, a new bottleneck has emerged. The global AI race now runs on three invisible variables: watts, wafers, and where.
Watts: electricity supply, interconnection speed, and cooling capacity.
Wafers: the semiconductors that accelerate AI workloads.
Where: the physical geography of sites, permits, and transmission corridors.
Together, they determine who trains models on schedule and who waits.
Why the demand curve keeps steepening
AI workloads have changed what data centers mean to the grid. Training compute for frontier models has doubled roughly every five months, while inference (the 24/7 serving of those models to users) has become the permanent driver of electricity growth.
In Q2 2025 alone, cloud infrastructure spending reached $99 billion, up 25% year over year. The resulting consolidation of workloads into a handful of hyperscale metros has pushed local utilities, transformers, and substations past their design limits.
The International Energy Agency projects that data center electricity demand could reach 945 terawatt-hours by 2030, roughly double today’s levels.
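A rough sanity check on those two growth rates, using only the figures quoted above, shows why planners are alarmed. The ~470 TWh starting point is an assumption implied by "roughly double today's levels" against the 945 TWh projection; the 2025-to-2030 horizon is likewise assumed.

```python
# Back-of-envelope growth math using the figures quoted above.
# The ~470 TWh baseline and the 2025->2030 horizon are assumptions
# implied by "roughly double today's levels" vs. the 945 TWh projection.

DOUBLING_MONTHS = 5          # training compute doubles roughly every 5 months
months = 24                  # a two-year planning horizon
compute_growth = 2 ** (months / DOUBLING_MONTHS)
print(f"Training compute over {months} months: ~{compute_growth:.0f}x")   # ~28x

twh_today, twh_2030, years = 470, 945, 5
cagr = (twh_2030 / twh_today) ** (1 / years) - 1
print(f"Implied electricity-demand CAGR: ~{cagr:.0%} per year")           # ~15%
```

A 28x jump in training compute over two years, against mid-teens annual growth in available data center electricity, is the gap the rest of this piece is about.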
What once was an IT planning challenge is now a macro-energy variable.
From megawatts to market value
Microsoft, Amazon, and Google are reshaping their capital structures around power availability. Microsoft alone expects to spend around $80 billion in fiscal 2025 expanding AI data centers, while Amazon’s capital plan approaches $100 billion for AWS infrastructure.
Each gigawatt added represents not just compute expansion but financial leverage: contracted power, transformers, and fiber routes are now treated like balance-sheet assets.
Investors have begun pricing time-to-power (the number of months from groundbreaking to energization) as a core performance metric. The faster an operator energizes, the higher its valuation multiple.
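Time-to-power is simple to compute but increasingly decisive. A minimal sketch of how an investor might track it across a portfolio, using hypothetical milestone dates:

```python
# Time-to-power: months from groundbreaking to energization.
# Project names and dates below are hypothetical, for illustration only.
from datetime import date

def months_between(start: date, end: date) -> float:
    """Approximate calendar months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month) + (end.day - start.day) / 30.0

projects = {
    "Campus A": (date(2024, 3, 1), date(2025, 9, 15)),   # groundbreaking, energization
    "Campus B": (date(2024, 6, 1), date(2026, 8, 1)),
}

for name, (broke_ground, energized) in projects.items():
    print(f"{name}: time-to-power ~{months_between(broke_ground, energized):.1f} months")
```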
Cooling, water, and the new geography of siting
Every megawatt of compute produces heat. As AI accelerators drive rack densities beyond 80 kW, direct liquid cooling (DLC) is becoming standard. Surveys show adoption rates climbing sharply across owner-operators.
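To see why 80 kW racks push past air cooling, consider a back-of-envelope heat-removal calculation. The water coolant and the 10 °C loop temperature rise are illustrative assumptions, not a vendor specification:

```python
# Coolant flow needed to remove rack heat: m_dot = Q / (c_p * delta_T)
# Assumptions (illustrative): water coolant, 10 C supply-to-return rise.

rack_heat_kw = 80.0          # heat load per rack, from the density cited above
cp_water = 4.186             # kJ/(kg*K), specific heat of water
delta_t = 10.0               # K, assumed temperature rise across the rack

flow_kg_s = rack_heat_kw / (cp_water * delta_t)     # ~1.9 kg/s
flow_l_min = flow_kg_s * 60                         # ~115 L/min (1 kg of water ~ 1 L)
print(f"Required coolant flow: ~{flow_kg_s:.1f} kg/s (~{flow_l_min:.0f} L/min) per rack")
```

Roughly 115 liters per minute of coolant for a single rack is a plumbing problem, not an airflow problem, which is why liquid loops are moving from exotic to default.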
Retrofits are complex and costly, forcing many developers toward greenfield builds where cooling, floor loads, and water loops can be integrated from the start.
Water is the second constraint. AI-driven campuses can consume millions of gallons daily. Projects that integrate recycled or reclaimed water (or repurpose waste heat) are securing permits faster and avoiding regulatory backlash.
In this phase of AI infrastructure, wet-site optionality (proximity to sustainable water sources or industrial heat sinks) is a competitive advantage.
Interconnection: the invisible currency of scale
Even with power secured, data centers stall without bandwidth. AI training runs require moving petabytes of data between clusters for checkpointing and redundancy.
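The scale of those transfers is easy to underestimate. A quick sketch of how long a single checkpoint sync takes at a given long-haul bandwidth; the checkpoint size, link speeds, and utilization factor are illustrative assumptions:

```python
# Time to move a training checkpoint between sites at a given bandwidth.
# Checkpoint size, link capacity, and utilization are illustrative assumptions.

def transfer_hours(petabytes: float, gbps: float, utilization: float = 0.8) -> float:
    bits = petabytes * 8e15              # 1 PB = 8e15 bits
    effective_bps = gbps * 1e9 * utilization
    return bits / effective_bps / 3600

for link in (100, 400, 800):             # Gbps of usable long-haul capacity
    print(f"1 PB over {link} Gbps: ~{transfer_hours(1.0, link):.1f} hours")
```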
The Red Sea cable incidents of 2025 showed how physical route diversity is as critical as GPU count. When two of the four main undersea cables were severed, latency for major clouds spiked across Asia and the Middle East.
Every hyperscaler now measures its moat partly by long-haul route diversity per campus. In other words, electrons mean little if bits can’t move.
Financing meets the physical grid
Energy infrastructure and AI infrastructure are converging into one asset class.
Hybrid financing structures now treat PPAs and interconnection rights as collateral, enabling developers to raise capital before energization. Sovereigns and institutional funds are underwriting multi-gigawatt campuses as part of national AI strategies.
The U.S., EU, and several Gulf states have launched grid reforms to compress interconnection queues, recognizing that queue position is the new seniority.
A project’s place in the interconnection line now dictates its cost of capital.
Industry adaptation: smarter chips, smarter grids
The industry response has been twofold.
Cloud vendors are designing custom accelerators to improve energy efficiency and reduce reliance on single suppliers. NVIDIA remains dominant, but companies like Amazon, Google, and Microsoft are quietly shifting to in-house silicon, optimizing for watts per token processed.
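"Watts per token" reduces to a simple ratio of power draw to serving throughput. A minimal sketch with illustrative numbers; the accelerator power and throughput figures are assumptions, not vendor specs:

```python
# Inference energy efficiency: joules per token = watts / (tokens per second).
# Power draw and throughput figures below are illustrative assumptions.

def joules_per_token(accelerator_watts: float, tokens_per_second: float) -> float:
    return accelerator_watts / tokens_per_second

configs = {
    "general-purpose GPU": (700, 2_000),     # watts, tokens/s served
    "in-house accelerator": (400, 1_800),
}

for name, (watts, tps) in configs.items():
    jpt = joules_per_token(watts, tps)
    print(f"{name}: ~{jpt:.2f} J/token, ~{jpt * 1e6 / 3.6e6:.2f} kWh per million tokens")
```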
Meanwhile, utilities and hyperscalers are forming joint energy ventures to secure long-term supply. Nuclear-backed PPAs, hydro tie-ins, and battery-backed renewables are re-emerging as the most dependable forms of firm power.
Efficiency improvements, from algorithmic optimization to quantization, are helping, but aggregate demand continues to rise faster than efficiency gains.
Counterweights: the case for managed growth
Some scholars, including Jonathan Koomey, argue that forecasts of runaway data center demand overstate long-run energy impacts. They point to economics, efficiency, and workload scheduling as natural checks.
Still, the constraints on the grid don’t feel theoretical. Interconnection queues are swelling, transformers remain backordered, and permitting cycles stretch into years.
For operators, the constraint is already material. The question isn’t whether compute growth will slow; it’s who can execute under real grid limits.
The new equation for competitive advantage
The AI race will no longer be decided by who builds the largest model or deploys the most GPUs. It will be determined by who can control three variables at once:
Power: Energized megawatts that translate directly into model iteration cycles.
Cooling: Thermal envelopes that allow higher density per square foot.
Connectivity: Redundant routes that keep clusters busy 24/7.
Each forms a layer of a three-dimensional moat that compounds over time.
Implications for investors, boards, and policymakers
Boards must treat energy readiness as corporate governance, not an operational detail. Permitting and grid interconnection risk now affect product delivery timelines and market capitalization.
Sovereigns are entering a new industrial-policy race. Nations that align AI development, grid capacity, and renewable buildout will attract hyperscaler FDI and local innovation.
For investors, due diligence must evolve. Metrics like energized MW per dollar of capital, queue position, curtailment exposure, and fiber diversity now drive valuation more than headline CapEx.
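A minimal sketch of how those diligence metrics might be computed for a candidate project; every input below is hypothetical, and thresholds vary by fund and market:

```python
# Screening metrics named above, computed for a hypothetical project.
# All inputs are illustrative assumptions, not real project data.

project = {
    "capex_usd": 2.0e9,          # total committed capital
    "energized_mw": 250,         # megawatts actually energized today
    "queue_wait_months": 18,     # estimated wait in the interconnection queue
    "curtailment_pct": 3.5,      # expected annual curtailment of contracted power
    "diverse_fiber_routes": 3,   # physically distinct long-haul paths off campus
}

mw_per_billion = project["energized_mw"] / (project["capex_usd"] / 1e9)
print(f"Energized MW per $1B of capital: {mw_per_billion:.0f}")
print(f"Queue wait: {project['queue_wait_months']} months | "
      f"Curtailment: {project['curtailment_pct']}% | "
      f"Fiber route diversity: {project['diverse_fiber_routes']} paths")
```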
The bottom line
The hunt for compute has become the hunt for power.
Until global interconnection queues shorten and sustainable cooling becomes ubiquitous, energy access will remain the real competitive edge in AI infrastructure.
Those who control megawatts, water rights, and fiber routes will decide who trains and who waits.
Guest Author:
Ahmed Ismail is a technology consultant and data center strategist focused on energy, cooling, and AI infrastructure readiness. He serves as a Google Data Center Community AI Fellow and member of the Heartland Forward Gen Z AI Council.


