AI needs power desperately. Here’s how to invest in companies profiting from the pain.

By Jurica Dujmovic

The shortage is a lucrative opportunity – but the window is brief


Rising infrastructure costs and mounting capital constraints are squeezing the AI boom. The hyperscalers can’t expand computing capacity fast enough, and that’s creating a rare arbitrage opportunity.

The way to play this right now isn’t building data centers. The opportunity lies in the temporary gap between exploding AI demand and the physical constraints of centralized infrastructure expansion. A handful of companies are exploiting this window – likely a 24-to-36-month opportunity. For investors who understand the timing, it’s a compelling hedge against the AI infrastructure bottleneck.

Physical barriers


AI’s limiting factor is no longer algorithms or data – it’s the brute-force physics of data-center expansion. Training large models demands tens of thousands of GPUs, dedicated networking and enormous power consumption. Gartner forecasts that 40% of AI data centers will face power constraints by 2027.

The math is brutally simple: AI computing workloads could consume around 500 terawatt-hours annually by 2027 – about twice the U.K.’s total electricity consumption in 2023. This demand spike is already showing up in the grid.
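As a back-of-envelope check of that comparison, assuming U.K. electricity consumption in 2023 of roughly 260 terawatt-hours (a figure that varies slightly by source), the ratio works out like this:

    # Rough sanity check of the scale comparison above.
    # The U.K. figure is an assumption of roughly 260 TWh for 2023; official tallies vary by source.
    ai_load_twh_2027 = 500.0          # projected annual AI computing demand
    uk_consumption_twh_2023 = 260.0   # assumed U.K. total electricity consumption in 2023

    ratio = ai_load_twh_2027 / uk_consumption_twh_2023
    print(f"Projected AI load is roughly {ratio:.1f}x the U.K.'s 2023 consumption")
    # Prints roughly 1.9x, which is where the "about twice" comparison comes from.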

Dominion Energy (D), the biggest utility company in Virginia, nearly doubled its data-center power capacity under contract between July and December 2024, and the trend has persisted.

Even with Microsoft (MSFT), Alphabet (GOOG) (GOOGL), Amazon.com (AMZN) and Meta Platforms (META) spending a combined $370 billion on capex in 2025, they can’t build fast enough. Construction and commissioning typically take 12 to 36 months, but when you include permitting and power-grid build-outs, a full data-center project can stretch to three to six years.

Time and money


This time gap is the entire investment thesis.

When essential resources become expensive and concentrated, parallel markets emerge. We saw this with electricity co-ops in the early 20th century, independent oil producers during OPEC’s reign and broadband resellers in the early internet era.

With AI, the scarce resource is GPU computing. Several companies are building marketplaces that aggregate idle capacity – consumer GPUs, academic clusters, enterprise overstock – and resell it at a fraction of centralized data-center costs.

The economics for these companies are compelling during this shortage window:

Cost structure advantage: Alternative networks don’t finance data centers with debt. They pay participants directly for spare computing capacity, typically in network tokens, converting idle hardware into productive assets. The cost of scaling shifts from massive capex to distributed incentives.

Speed to market: While hyperscalers wait 18 to 36 months for new facilities, these networks can add capacity node by node, with no billion-dollar commitments up front.

Arbitrage pricing: These companies are capturing demand from the smaller labs, indie studios, emerging markets and others that are priced out of AWS-level GPU rates but still need computing; a rough illustration of that spread follows below.
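To make that spread concrete, here is a purely illustrative sketch. Every rate below is a hypothetical assumption chosen for round numbers, not a published price from AWS, Render, io.net or Akash:

    # Illustrative arbitrage math for a distributed GPU marketplace.
    # All hourly rates are hypothetical assumptions, not quoted prices from any provider.
    hyperscaler_rate = 4.00    # $/GPU-hour a large cloud might charge (assumed)
    marketplace_rate = 1.50    # $/GPU-hour the aggregator charges customers (assumed)
    payout_to_owner = 1.00     # $/GPU-hour passed through to the hardware owner (assumed)

    customer_savings = 1 - marketplace_rate / hyperscaler_rate
    network_margin = (marketplace_rate - payout_to_owner) / marketplace_rate

    print(f"Customer pays {customer_savings:.0%} less than the hyperscaler rate")
    print(f"Network keeps a {network_margin:.0%} gross margin with no data-center capex")
    # The spread only exists while centralized capacity is scarce; as hyperscaler
    # supply comes online, the gap between the two rates compresses.

Under these assumed numbers, the aggregator undercuts the centralized rate by more than half while still clearing a healthy margin – that is the whole arbitrage in miniature.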

The catch? The explosive growth window is finite. These networks will remain viable alternatives even after constraints ease – serving cost-sensitive workloads, emerging markets and indie developers – but the opportunity for substantial investment gains compresses as growth normalizes and hyperscalers’ capacity comes online.

Read: AI data centers need juice. The next hot stocks give it.

How to play the computing shortage

Again, this isn’t a moonshot bet. It’s an infrastructure hedge with a defined window. Here are three approaches, ranked by risk profile:

Render Network: Aggregates idle GPU capacity from individuals and studios, reselling it to the highest bidder for rendering and AI workloads. Think of it as Airbnb for GPUs – capacity that would otherwise sit dormant gets monetized, and users get computing at a fraction of data-center pricing. Rather than operating expensive data centers, Render harvests that spare capacity from thousands of computers and pays their owners a fraction of what a centralized facility would cost.

io.net: Focuses on generic GPU computing for AI training and inference. The platform aggregates capacity from data centers, crypto miners and consumer hardware, creating a distributed alternative to centralized cloud providers. Its network is newer and more speculative than Render’s, but it’s capturing demand from AI startups that can’t afford or access hyperscaler GPU allocations.

Akash Network: Takes the concept broader, offering a marketplace for general cloud computing and storage beyond just GPUs. This positions it as infrastructure for the full stack, not just AI-specific workloads. Akash is a privately held company, but it does have a tradeable crypto token, AKT. This is the highest-risk play in the category, but it offers the most diversified exposure if decentralized computing extends beyond AI.

These are crypto token plays – not stocks

Before going further, understand what you’re actually buying. All three of these networks operate through native cryptocurrency tokens, not traditional equity. There is no stock ticker, no brokerage-account access and no public-equity wrapper for these businesses.

Direct exposure requires navigating cryptocurrency exchanges:

— Render Network (RENDER) trades on Coinbase, Binance and Kraken.

— io.net (IO) is listed on select crypto exchanges such as Binance and Gate.io, with liquidity varying by venue and region.

— Akash Network (AKT) trades on Coinbase, Kraken and similar venues.

This means dealing with crypto custody – whether through exchange accounts or self-custody wallets – and accepting the regulatory uncertainty that comes with token investments. If you’re not comfortable with that infrastructure, this thesis won’t work for you.

For investors who prefer traditional equity exposure, the closest alternatives are second-order beneficiaries of the same capacity constraint:

— Data-center operators: Equinix (EQIX), Digital Realty Trust (DLR)

— Power infrastructure: Dominion Energy, Duke Energy (DUK), NextEra Energy (NEE)

— GPU supply chain: Nvidia (NVDA), Broadcom (AVGO), Super Micro Computer (SMCI)

But here’s the critical distinction: These publicly traded companies benefit from the shortage itself – not from the temporary arbitrage window created by aggregating idle distributed capacity. They’ll do well regardless of whether decentralized computing succeeds. What they won’t give you is direct exposure to the specific dislocation that is going on now.

Risk factors

Let’s be clear about what could go wrong with this arbitrage strategy:

Performance and reliability: Distributed GPU networks face inherent challenges with performance variance, latency and quality control. Enterprise customers paying for AI infrastructure demand reliability. If these networks can’t match centralized performance, the arbitrage doesn’t matter – customers won’t switch.

Security and compliance: Regulated industries won’t run sensitive workloads on unknown hardware scattered globally. These networks are limited to specific use cases where data sovereignty and compliance aren’t blockers.

Hyperscaler catch-up timeline: The base case assumes these constraints ease through 2027-’29 as new data centers and power infrastructure come online. If hyperscalers add capacity faster than expected, the high-growth window closes early; if power constraints extend beyond 2029, it stays open longer.

Regulatory uncertainty: Several of these networks operate in regulatory gray areas. If governments decide to regulate decentralized computing infrastructure, costs increase and flexibility decreases.

Crypto market contagion: These tokens trade on crypto exchanges and correlate with broader crypto markets. A bitcoin crash or crypto regulatory crackdown could affect these assets regardless of fundamentals.

The investment timeline

The window runs from early 2026 through 2027-’28, which is the core 24-to-36-month period. The broader infrastructure constraint lasts longer, but the outsized arbitrage compresses as hyperscalers come online. This aligns with the infrastructure constraint timeline I’ve been tracking, but extends beyond the initial shortage as power-grid limitations persist.

Q1 2026: Begin building positions as the 2027 power-constraint forecast becomes the consensus view. Dollar-cost average to smooth volatility; a simple numeric illustration follows after this timeline.

Q2 2026-Q2 2027: Peak growth opportunity as AI demand continues accelerating while centralized capacity remains severely constrained. These networks capture maximum long-tail demand priced out of hyperscaler infrastructure.

Q3 2027-Q2 2028: Growth continues, but begins normalizing as new data centers come online and power-grid upgrades progress. Monitor hyperscaler capacity announcements closely – each major facility completion incrementally compresses the arbitrage.

Q3 2028-Q4 2029: Maturation phase. These networks settle into specialized roles – emerging markets, cost-sensitive workloads, indie developers. They remain viable businesses but growth normalizes.
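For the dollar-cost-averaging step mentioned at the start of this timeline, the mechanics are simple: committing a fixed dollar amount on a schedule buys more tokens when prices dip, pulling the average cost below the average price. A minimal sketch, using invented prices purely for illustration (none of these are forecasts for RENDER, IO or AKT):

    # Minimal dollar-cost-averaging sketch with hypothetical token prices.
    # The monthly prices below are invented for illustration; they are not forecasts.
    monthly_budget = 500.0                                   # fixed dollars invested each month (assumed)
    hypothetical_prices = [10.0, 8.0, 5.0, 7.0, 12.0, 9.0]   # $/token over six months, illustrative

    tokens_bought = [monthly_budget / p for p in hypothetical_prices]
    total_spent = monthly_budget * len(hypothetical_prices)
    average_cost = total_spent / sum(tokens_bought)
    average_price = sum(hypothetical_prices) / len(hypothetical_prices)

    print(f"Average market price: ${average_price:.2f}")   # $8.50
    print(f"Your average cost:    ${average_cost:.2f}")    # about $7.87
    # Fixed-dollar buys pick up more tokens on the dips, so the realized cost
    # lands below the simple average price, smoothing entry into a volatile asset.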

It is important to understand that this isn’t a binary “it works until it doesn’t” thesis. It’s a maturation curve where networks transition from high-growth arbitrage plays to steady-state infrastructure alternatives.

The broader implication

If GPU aggregation networks prove they can deliver reliable computing at competitive prices during the 2026-’28 constraint period, they will establish legitimacy. Even if hyperscalers eventually recapture market share, these networks will have carved out niches in emerging markets, indie studios and cost-sensitive workloads.

(MORE TO FOLLOW) Dow Jones Newswires

12-04-25 1637ET

Copyright (c) 2025 Dow Jones & Company, Inc.
