Microsoft Q1 FY2026: The $35 Billion Quarter That Redefines the AI Factory
Microsoft’s record $34.9 billion in Q1 CapEx marks a structural shift from software margins to physical scale, building the AI factory that will define the next decade of global compute.
Microsoft’s first quarter of fiscal 2026 marked a structural turning point for the entire hyperscale ecosystem. The company reported $77.7 billion in revenue, up 18% year over year, $38 billion in operating income, and $4.13 in earnings per share—each comfortably exceeding expectations. Yet the most important number wasn’t profit. It was $34.9 billion in capital expenditures, an unprecedented single-quarter outlay in technology history.
Roughly half of that investment was directed toward GPUs and CPUs, the short-lived assets that drive AI training and inference capacity. The remainder was devoted to long-duration infrastructure: data centers, power systems, and finance leases with 15- to 20-year horizons that secure Microsoft’s long-term control of power and land in critical global regions. That $34.9 billion in one quarter surpasses the annual CapEx of most Fortune 100 companies. More importantly, it signals that Microsoft has fully transitioned from optimizing cloud margins to scaling physical compute at planetary scale.
Azure’s momentum makes the reason clear. The platform grew 40% year over year, driven by AI and cloud demand that now exceeds available supply. CFO Amy Hood acknowledged that the company will remain “capacity-constrained through at least the end of the fiscal year.” To meet this demand, Microsoft plans to double its global data center footprint within the next two years, increasing total AI capacity by more than 80% in fiscal 2026 alone. The pipeline now includes dozens of hyperscale and sovereign facilities totaling multiple gigawatts under construction across North America, Europe, Asia, and Latin America.
Earnings and infrastructure: the new balance sheet
Microsoft’s financial structure now mirrors a capital-intensive energy utility more than a traditional software business. Microsoft Cloud revenue rose to $49.1 billion, up 26% year over year, while total commercial bookings surged 112% and the commercial remaining performance obligation reached $392 billion, a 51% increase. The weighted average duration of that RPO is only two years, meaning that the majority of those contracts will translate to revenue quickly.
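For a rough sense of what that two-year duration implies, here is a back-of-the-envelope sketch (illustrative only; it assumes perfectly even recognition, which real contracts will not follow):

```python
# Illustrative only: convert the reported backlog into an average annual run-rate,
# assuming perfectly straight-line recognition over the weighted-average duration.
commercial_rpo_bn = 392           # commercial remaining performance obligation, $B
weighted_avg_duration_yrs = 2     # reported weighted-average duration, years

avg_annual_recognition_bn = commercial_rpo_bn / weighted_avg_duration_yrs
print(f"Implied average recognition: ~${avg_annual_recognition_bn:.0f}B per year")
# ~$196B per year -- roughly Microsoft Cloud's current annualized run-rate
# ($49.1B x 4 = ~$196B), which is why the backlog converts to revenue quickly.
```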
Operating margins reached 49%, while free cash flow climbed to $25.7 billion, up 33%. Azure’s rapid expansion is the central driver. Physical bottlenecks, rather than market competition, are now the limiting factor. Every megawatt of compute that comes online is absorbed instantly by AI workloads and software demand from Microsoft 365 Copilot, GitHub Copilot, Fabric, Dynamics 365, and OpenAI’s expanding usage base.
To manage this, Microsoft has adopted a dual-track infrastructure model. The short-lived assets, mainly GPUs and CPUs, are matched to contract durations of roughly two years, aligning depreciation schedules with revenue recognition. The long-lived assets, including hyperscale campuses like Fairwater in Wisconsin, are built for multi-decade operation. This synchronization of asset life with consumption cycles effectively transforms CapEx into a recurring, predictable growth engine rather than a fixed cost.
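A minimal sketch of that matching logic, using invented figures purely for illustration (Microsoft does not disclose asset values or useful lives at this granularity; only the rough 50/50 split of the quarter and the two asset horizons come from the discussion above):

```python
# Hypothetical illustration of matching asset life to contract length.
# The dollar figures and useful lives below are assumptions, not disclosures.
def annual_depreciation(cost_bn: float, useful_life_years: float) -> float:
    """Straight-line depreciation per year, in $ billions."""
    return cost_bn / useful_life_years

# Track 1: short-lived compute (GPUs/CPUs), matched to roughly two-year contracted workloads.
gpu_capex_bn, gpu_life_yrs = 17.5, 2.0          # assumed ~half of the $34.9B quarter
# Track 2: long-lived shells, power systems, and finance leases on 15- to 20-year horizons.
facility_capex_bn, facility_life_yrs = 17.4, 17.0

print(f"Short-lived expense per year: ~${annual_depreciation(gpu_capex_bn, gpu_life_yrs):.1f}B")
print(f"Long-lived expense per year:  ~${annual_depreciation(facility_capex_bn, facility_life_yrs):.1f}B")
# Short-lived spend is expensed over roughly the same window in which the contracted
# revenue arrives; long-lived spend is spread across decades of operation.
```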
Inside the AI factory
CEO Satya Nadella characterized the company’s approach as “maximizing tokens per dollar per watt,” a metric that encapsulates the new economics of AI infrastructure. Every dollar of capital and every watt of power is optimized for the most productive model output.
The scope of Microsoft’s buildout is staggering. The company plans to increase total AI capacity by more than 80% in fiscal 2026 and expects to double its data center footprint within two years. The flagship project, the 2-gigawatt Fairwater AI campus in Wisconsin, will be one of the most powerful data centers on the planet when it goes online in 2026. Microsoft has also deployed the world’s first NVIDIA GB300 cluster and continues to expand a network of sovereign cloud regions now active in 33 countries. Each region is part of a global, fungible fleet designed to handle every stage of the AI lifecycle, from model pre-training and fine-tuning to inference and synthetic data generation.
This architecture allows continuous modernization. Nadella confirmed that token throughput per GPU for GPT-4.1 and GPT-5 increased by more than 30% in the quarter, a direct result of hardware-software co-optimization. In practice, Microsoft’s infrastructure now functions as a global AI factory: an interconnected industrial system manufacturing intelligence as efficiently as past centuries manufactured electricity.
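To make the metric concrete, here is a small sketch of the arithmetic behind tokens per dollar per watt, read literally as output divided by both the dollars and the watts consumed; the cluster figures are hypothetical placeholders, and only the 30% throughput gain comes from the quarter’s commentary:

```python
# Hypothetical illustration of "tokens per dollar per watt".
# The cluster figures below are invented; they exist only to show the arithmetic.
def tokens_per_dollar_per_watt(tokens: float, cost_usd: float, avg_power_w: float) -> float:
    """Model output divided by both the dollars and the watts used to produce it."""
    return tokens / (cost_usd * avg_power_w)

# Assumed example: 1 trillion tokens served at an all-in cost of $40M
# while drawing an average of 20 MW.
baseline = tokens_per_dollar_per_watt(1e12, 40e6, 20e6)

# A 30% gain in token throughput per GPU at unchanged cost and power
# lifts the metric by the same 30%.
improved = tokens_per_dollar_per_watt(1.3e12, 40e6, 20e6)

print(f"Baseline:              {baseline:.3e} tokens per dollar per watt")
print(f"After +30% throughput: {improved:.3e} ({improved / baseline - 1:.0%} better)")
```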
The software flywheel that funds it
While hyperscale competitors often depend on third-party tenants to monetize capacity, Microsoft’s software ecosystem is its own demand engine. The company reported 900 million monthly active users of AI features across its products and over 150 million monthly active users of the Copilot family, including more than 90% of the Fortune 500. GitHub Copilot now supports 26 million developers and processed over 500 million pull requests in the past year. Fabric, Microsoft’s unified data and analytics platform, grew revenue by 60% year over year, while Dynamics 365 advanced 18%.
Every query, code commit, or AI workflow consumes Azure compute. Each software interaction feeds directly into infrastructure utilization, creating a closed economic loop: software adoption drives compute consumption, which justifies more CapEx, which in turn enables faster product innovation. This feedback cycle, in which AI products generate demand for the very compute they consume, is Microsoft’s structural advantage. It transforms CapEx risk into self-funding growth.
OpenAI: the anchor tenant
That feedback loop is reinforced by Microsoft’s renewed partnership with OpenAI. The new agreement added $250 billion in incremental Azure commitments, exclusive API rights through 2030, and extended model and product IP rights through 2032. The partnership has already yielded a tenfold return on Microsoft’s initial investment. None of the new $250 billion in commitments was included in Q1’s reported bookings or RPO, so that demand sits entirely on top of the record backlog already on the books: future utilization is effectively pre-sold before the corresponding data centers come online.
In effect, Microsoft has securitized its AI infrastructure. Each hyperscale project now carries embedded revenue before it reaches completion. This model allows Microsoft to accelerate construction without diluting returns, financing future capacity through guaranteed long-term workloads.
The near-term outlook
For the second quarter, Microsoft guided to revenue between $79.5 and $80.6 billion, representing 14% to 16% growth. Azure is projected to grow roughly 37% in constant currency, while Microsoft Cloud’s gross margin is expected to reach about 66%, down slightly from last year due to higher AI infrastructure investment. CapEx will rise sequentially as additional GPU deliveries accelerate throughout 2026.
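As a quick sanity check on the guidance arithmetic, a short sketch using only the figures above (rounding explains the small spread):

```python
# Back out the implied prior-year Q2 revenue base from the guided range and growth rates.
low_guide_bn, high_guide_bn = 79.5, 80.6    # guided Q2 revenue, $B
low_growth, high_growth = 0.14, 0.16        # guided year-over-year growth

implied_base_low_bn = high_guide_bn / (1 + high_growth)
implied_base_high_bn = low_guide_bn / (1 + low_growth)
print(f"Implied prior-year Q2 base: ~${implied_base_low_bn:.1f}B to ~${implied_base_high_bn:.1f}B")
# Both ends land around $69-70B, consistent with mid-teens growth on last year's quarter.
```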
Hood emphasized that margin compression is intentional, stating, “We’re investing to meet booked business today.” With $45.1 billion in operating cash flow generated in a single quarter, Microsoft can finance this entire global buildout internally. The company’s CapEx trajectory for fiscal 2026 is already pacing above 2025, indicating that the AI infrastructure cycle is far from peaking.
Strategic close
Microsoft has crossed the threshold from cloud provider to compute sovereign. Its infrastructure strategy now spans every layer of the AI economy: hardware, power, models, and software monetization. This creates a vertically integrated system that converts CapEx into recurring revenue at global scale.
The defining metric of this era is no longer cost per compute hour; it’s efficiency measured in tokens per dollar per watt. Microsoft’s $34.9 billion quarter shows what that looks like in practice: energy, capital, and AI output fused into a single economic engine.
Where previous decades were defined by software scale, this one will be defined by infrastructure velocity: how fast compute can be financed, powered, and delivered to meet the world’s intelligence demand. Microsoft is already years ahead, building the physical grid that will underpin the next trillion-dollar compute cycle.

