Amazon Web Services (AWS) and OpenAI have signed a seven-year, US$38bn cloud computing agreement under which OpenAI will run core generative AI workloads on AWS infrastructure, starting immediately.
The deal gives OpenAI access to “hundreds of thousands” of Nvidia GPUs, with capacity slated to be fully deployed by the end of 2026 and room to expand into 2027 and beyond, according to the companies.
Under the multi-year partnership, AWS will provision compute at large scale both for training OpenAI’s models and for serving products such as ChatGPT.
Amazon says the clusters will use Nvidia GB200 and GB300 GPUs networked via Amazon EC2 UltraServers, a design intended to keep latency low across interconnected systems and to support both inference and next-generation model training.
The agreement is designed to scale to “tens of millions of CPUs” for agentic workloads, alongside a growing GPU fleet, as demand rises.
The size of the contract signals sustained demand for AI infrastructure as model providers seek more reliable, secure capacity. Industry reports describe deployment as staged, with roll-outs through 2026 and options to expand thereafter.
The move follows OpenAI’s shift to a more diversified cloud strategy: coverage indicates Microsoft no longer holds exclusive hosting rights, and OpenAI now works with several providers.
Analysts frame the AWS deal as part of a wider pattern of long-term spend commitments across multiple clouds amid rising compute needs for large language models and agentic systems.
For AWS, the agreement lands as the company activates very large AI clusters, including Project Rainier—reported to comprise roughly 500,000 Trainium2 chips—aimed at high-throughput training at lower cost per token.
This background helps explain AWS’s emphasis on price, performance, scale and security in courting frontier model developers.
The partnership builds on recent steps that brought OpenAI’s open-weight models to AWS services.
In August 2025, AWS announced availability of two OpenAI open-weight models through Amazon Bedrock and SageMaker, giving enterprise developers another option for deploying generative AI while retaining control over data and infrastructure.
Media reports at the time described it as the first instance of OpenAI models being offered natively on AWS.
Amazon states that, within Bedrock, customers across sectors—including media, fitness and healthcare—are already experimenting with agentic workflows, coding assistance and scientific analysis using OpenAI technology.
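For developers, the practical entry point is Bedrock’s standard runtime API. The sketch below, which assumes AWS credentials are already configured via boto3 and uses an illustrative model ID rather than one confirmed in this article, shows how a customer might call an OpenAI open-weight model through Bedrock’s Converse API:

```python
# Minimal sketch: calling an OpenAI open-weight model via Amazon Bedrock.
# The model ID is a hypothetical placeholder; check the Bedrock model
# catalogue for the identifier actually available in your region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # hypothetical placeholder ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarise the key terms of a seven-year cloud agreement."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant's reply under output.message.
print(response["output"]["message"]["content"][0]["text"])
```

Because Bedrock exposes models behind a common API, swapping the model ID is typically the only change needed to compare an open-weight model against other hosted options.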
