US$350B AI Data Centre Boom Reshapes U.S. Economy as Power Demand Surges

Image Credit: Jordan Harrison | Unsplash

America’s largest technology companies are expected to spend more than US$350 billion on AI-related infrastructure in 2025, according to Goldman Sachs estimates and recent Reuters analyses of earnings disclosures. This record investment, driven by hyperscaler demand for advanced GPUs, high-density data centres, and specialised networking hardware, continues to support U.S. economic growth while placing unprecedented strain on regional power grids.

The surge reflects a fundamental shift in digital competition: compute capacity has become a defining strategic advantage. The rapid scaling of frontier models has pushed companies to expand infrastructure at a pace unmatched in previous technology cycles, reshaping financial markets and straining electricity systems nationwide.

The Roots of the AI Buildout

The acceleration of generative AI since late 2022 has driven a structural jump in hardware demand. Nvidia, whose GPUs dominate large-scale AI training, surpassed US$3 trillion in value in June 2024, briefly becoming the world’s second-most valuable public company. It then surged past US$5 trillion by October 2025, making it the most valuable publicly listed company in the world.

Goldman Sachs estimates that global AI-related infrastructure spending could reach US$3–4 trillion by 2030, fuelled by ever-larger model architectures, rising energy requirements, and growing demand for compute-intensive multimodal AI systems. Analysts expect future frontier-model clusters to scale into the hundreds of thousands of GPUs, reflecting rising computational intensity and increasingly complex workloads.

How Big Tech Is Spending

Public filings and earnings disclosures show sharply rising capital expenditures across the major platforms:

Microsoft

  • Analysts expect Microsoft’s FY2025 capex to approach or exceed US$100 billion, with the company attributing a “significant majority” of spending to AI-related infrastructure for Azure and Copilot.

  • Microsoft is expanding data-centre campuses across multiple U.S. states, though no verified figures break down capex by region.

Alphabet (Google)

  • Alphabet raised its 2025 capex guidance first to US$85 billion, and more recently to US$91–93 billion, with most of the increase directed toward AI-optimised data centres, advanced TPU clusters, and global cloud expansion.

  • Google maintains major campuses in Oregon, Iowa, Georgia, and Texas, but has issued no confirmed figures on facility-specific upgrades.

Meta Platforms

  • Meta projects US$70–72 billion in 2025 capex, the highest in its history, focused heavily on compute and data-centre expansion to support Llama 3 and later large-scale models.

  • The company is financing major infrastructure through the Hyperion project in Louisiana, a private-credit-backed structure valued at nearly US$30 billion, one of the world’s largest data-centre SPVs to date.

Amazon (AWS)

  • Amazon expects US$125 billion in capital spending for 2025, with a substantial portion allocated to AWS data-centre deployment.

  • AWS continues to expand across multiple states, including major builds in the Midwest, though no publicly verified data supports specific multi-tens-of-billions allocations to individual regions.

Taken together, these figures make 2025 the largest year for AI-infrastructure investment in U.S. history.

Financing the Expansion: SPVs and Longer Hardware Lifespans

Hyperscalers are relying on two major mechanisms to manage rising capital costs:

1. Private-credit and SPV financing

  • Meta’s nearly US$30 billion Hyperion financing is one of the largest data-centre SPVs ever assembled.

  • Morgan Stanley estimates as much as US$800 billion in potential private-credit opportunities across digital and AI-related infrastructure over several years.

2. Extending the useful life of servers

  • Alphabet has extended server depreciation to six years.

  • Amazon and Microsoft have increased depreciation lifespans to five years or more.

  • Research from Princeton notes that frontier-class GPUs often become effectively obsolete within 1–3 years for top-tier AI training workloads, underscoring a gap between financial reporting and technical reality.
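The earnings effect of these lifespan extensions follows directly from straight-line depreciation: annual expense is simply cost divided by assumed useful life. A minimal sketch, using a hypothetical US$12 billion server fleet (the figures and lifespans here are illustrative, not company disclosures):

```python
# Straight-line depreciation: annual expense = cost / useful life.
# The US$12B fleet value below is hypothetical, for illustration only.
fleet_cost_usd_b = 12.0


def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation expense for a given useful life."""
    return cost / useful_life_years


expense_4yr = annual_depreciation(fleet_cost_usd_b, 4)  # 3.0 (US$B/year)
expense_6yr = annual_depreciation(fleet_cost_usd_b, 6)  # 2.0 (US$B/year)

# Stretching the assumed life from four to six years cuts the reported
# annual expense by a third, flattering near-term earnings even if the
# GPUs become technically obsolete for frontier training much sooner.
print(expense_4yr - expense_6yr)  # 1.0 (US$B/year lower reported expense)
```

This is why the gap between a five-to-six-year accounting life and a one-to-three-year technical life for frontier GPUs matters: the longer the assumed lifespan, the smaller the expense hitting each year's income statement.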

Economic Ripple Effects

The AI-driven capex boom is reshaping both markets and labour demand:

  • Analysts estimate that roughly two-thirds to three-quarters of the S&P 500’s gains from 2023 to 2025 have come from a small group of AI-heavy companies.

  • Increasing investment in construction, high-voltage electrical equipment, semiconductors, and specialised labour has supported recent U.S. GDP growth.

  • Economists note that measurable AI-generated revenue remains uneven across firms, with investment outpacing short-term returns.

America’s Power Grid Under Strain

Rising electricity consumption

  • U.S. data centres consumed just over 4% of national electricity in 2024, according to IEA data.

  • Forecasts from the IEA, ACEEE, and major financial institutions suggest data centres could consume around 8–11% of U.S. electricity by 2030 under high-AI-adoption scenarios.
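The pace implied by those forecasts can be checked with back-of-envelope arithmetic: moving from roughly a 4% share in 2024 to 8–11% by 2030 requires the data-centre share to compound at double-digit annual rates. A quick illustrative calculation (endpoint shares taken from the forecasts above):

```python
# Implied compound annual growth rate (CAGR) of the data-centre share
# of U.S. electricity: ~4% in 2024 rising to ~8-11% by 2030 (6 years).
def implied_cagr(start_share: float, end_share: float, years: int) -> float:
    """Annualised growth rate taking start_share to end_share over `years`."""
    return (end_share / start_share) ** (1 / years) - 1


low = implied_cagr(0.04, 0.08, 6)   # ~12% per year for the 8% scenario
high = implied_cagr(0.04, 0.11, 6)  # ~18% per year for the 11% scenario

print(f"Implied growth in share: {low:.1%} to {high:.1%} per year")
```

Sustaining that pace for six years is what drives the capacity-price and grid-upgrade pressures described below.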

Regional price impacts

  • Capacity-market and wholesale-power prices in several AI-clustered regions, particularly parts of the PJM Interconnection, have risen by well over 200% in recent auctions compared with levels from just a few years earlier, reflecting rapid load growth and constrained supply.

  • Analysts caution that data centres are one of several drivers of electricity price increases; fuel costs, transmission constraints, and long-term underinvestment also contribute.

Grid upgrade requirements

  • PJM, which serves 65 million people, faces tens of billions of dollars in required transmission and reliability upgrades to accommodate rising demand.

  • Efficiency studies indicate that improved cooling, load shifting, and chip-level optimisation could moderate growth, though adoption of these measures remains gradual.

What Comes Next: Regulation and System Redesign

As infrastructure spending accelerates, regulators are increasing scrutiny:

  • The Federal Energy Regulatory Commission (FERC) recently intervened in a Pennsylvania nuclear co-location arrangement involving Amazon, signalling closer oversight of large, energy-dense data-centre power deals.

  • States and utilities are imposing more rigorous grid-impact studies and approval processes for hyperscale developments.

Looking forward, analysts expect:

  • More distributed compute architectures, including regional edge-AI clusters, to reduce transmission loads.

  • Co-location with energy-dense resources, including renewables and next-generation nuclear technologies.

  • Incremental efficiency gains in GPUs, cooling systems, and facility design that may slow overall electricity-demand growth over time.

Combined Big Tech capex is now approaching US$400 billion in 2025, and analysts expect spending to remain at or above this level into 2026. For now, the AI infrastructure race shows little sign of slowing, binding the future of AI innovation ever more tightly to the resilience of the U.S. power grid.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
