AI Data Center Forecast: From Scramble to Strategy

The data center construction boom has entered a new chapter. AI adoption is accelerating, and the floor of potential demand is rising, according to Bain’s latest global data center forecast through 2030.

The early scramble of generative AI–driven demand is giving way to a more disciplined, selective, power-constrained, and execution-focused phase of growth. Hyperscalers are absorbing compute capacity, large enterprises are scaling up production-grade AI, and data center construction patterns are starting to form around clearer demand plans. Although the pace of capacity growth remains uncertain, given variability in AI adoption and developers shopping around for the best sites to build on, there are clear signs that momentum will stay strong. Going forward, winners will be defined not by scale alone but by their ability to navigate complexity with precision.

Our forecast combines bottom-up scenario modeling with insights from across our Energy & Natural Resources and Technology practices; a stylized sketch of that bottom-up approach appears after the list below. The latest update reflects five sector trends that have defined the past 12 months.

  1. Inference is now the center of gravity. AI workload patterns are shifting, with greater emphasis on inference at scale alongside continued frontier model training, driven in part by clear traction in enterprise AI use cases. Test-time compute is reshaping infrastructure strategy, economics, and architecture, with meaningful implications for colocation vs. self-build decisions, silicon diversity, and power provisioning.
  2. The pace of new construction growth has begun to stabilize. The hyperscaler investment pullback many expected didn’t occur; investments increased meaningfully in 2025 and are expected to keep growing in the coming years. That said, hyperscalers are focusing more on capital efficiency and becoming more selective about new deployments, particularly for AI training.
  3. Growth is concentrated but globalizing. North America has the largest data center capacity, fueled by hyperscalers’ capital expenditures. Meanwhile, sovereign AI mandates and enterprise adoption are activating regional markets across the globe. Companies face decisions about which markets can serve different workloads; they’re seeking geographic flexibility as they align compute infrastructure with latency, data sovereignty, and energy sourcing considerations.
  4. Data centers are becoming larger but more flexible. Data center “mega-campuses” (those with power capacity of at least 1 gigawatt) will become standard for frontier model training, though a relatively small set of these campuses in specific locations appears sufficient to serve global demand. Although average data center sizes are increasing, the more modest requirements of inference workloads are enabling smaller, distributed data center networks.

    Data centers are also being designed for flexibility between training and inference workloads, partly by supporting multiple cooling options, as operators try to avoid stranded assets amid increasing complexity. Operators are also exploring distributed training, which may herald a change in training architecture.

  5. Power availability is the bottleneck. Even as GPU and construction constraints ease, power access is now the critical gatekeeper of growth. Behind-the-meter (BTM) power generation is shifting build decisions and timelines; so far, BTM projects are most common in the US and rely mostly on independent gas-fired generation. Utilities, developers, and regulators face urgent coordination pressure, and we’re already seeing utilities collaborate with data center operators to plan effectively for large load requests.
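For readers who want a concrete feel for the bottom-up scenario modeling mentioned above, the short Python sketch below shows one simple way such a model can be structured: project each workload segment forward under scenario-specific growth assumptions, then cap annual additions to reflect the power bottleneck. It is purely illustrative; the segment names, starting capacities, growth rates, and the power cap are hypothetical placeholders, not inputs or outputs of our forecast.

    # Illustrative only: a toy bottom-up scenario model of data center
    # capacity. All figures are hypothetical placeholders, not forecast data.
    BASE_YEAR = 2025
    HORIZON = 2030

    # Assumed starting capacity (GW) by workload segment -- hypothetical.
    base_capacity_gw = {"training": 20.0, "inference": 25.0, "traditional": 40.0}

    # Assumed annual growth rate per scenario and segment -- hypothetical.
    scenarios = {
        "low":  {"training": 0.10, "inference": 0.20, "traditional": 0.03},
        "base": {"training": 0.20, "inference": 0.35, "traditional": 0.05},
        "high": {"training": 0.30, "inference": 0.50, "traditional": 0.07},
    }

    # Assumed ceiling on deliverable grid plus BTM power additions (GW per
    # year) -- a hypothetical stand-in for the power bottleneck.
    MAX_ANNUAL_ADDITIONS_GW = 60.0

    def project(scenario: str) -> dict:
        """Roll each segment forward, scaling back growth in years where
        desired additions exceed the available-power ceiling."""
        capacity = dict(base_capacity_gw)
        totals = {BASE_YEAR: sum(capacity.values())}
        for year in range(BASE_YEAR + 1, HORIZON + 1):
            desired = {seg: cap * (1 + scenarios[scenario][seg])
                       for seg, cap in capacity.items()}
            additions = sum(desired.values()) - sum(capacity.values())
            scale = min(1.0, MAX_ANNUAL_ADDITIONS_GW / additions) if additions > 0 else 1.0
            capacity = {seg: cap + (desired[seg] - cap) * scale
                        for seg, cap in capacity.items()}
            totals[year] = sum(capacity.values())
        return totals

    for name in scenarios:
        print(f"{name:>4}: {project(name)[HORIZON]:.0f} GW by {HORIZON}")

A real model would build up from many more drivers (chip supply, utilization, regional grid queues, BTM timelines), but the cap-and-scale step illustrates why power availability, rather than demand, increasingly sets the growth rate.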

The authors wish to thank Paul Bockwoldt and Fernando Valdes for their contributions to this analysis.
