Amazon and OpenAI agree $38bn partnership to boost AI development

Amazon Web Services (AWS) and OpenAI have signed a seven-year, US$38bn cloud computing agreement that allows OpenAI to run core generative AI workloads on AWS infrastructure immediately.

The deal gives OpenAI immediate access to “hundreds of thousands” of Nvidia GPUs, with capacity slated to be fully deployed by the end of 2026 and room to expand into 2027 and beyond, according to the companies.

Under the multi-year partnership, AWS will provision compute at large scale for training and serving OpenAI models such as ChatGPT.

Amazon says clusters will use Nvidia GB200 and GB300 chips networked via Amazon EC2 UltraServers to reduce latency across interconnected systems, supporting both inference and next-generation model training.

The agreement is designed to scale to “tens of millions of CPUs” and very large numbers of GPUs as demand grows.

The size of the contract signals ongoing demand for AI infrastructure as model providers seek more reliable, secure capacity. Industry reports describe staged roll-outs through 2026, with optional expansion thereafter.

The move follows OpenAI’s shift to a more diversified cloud strategy after earlier exclusive arrangements. Coverage indicates Microsoft no longer holds exclusive hosting rights, while OpenAI continues to work with several providers.

Analysts frame the AWS deal as part of a wider pattern of long-term spend commitments across multiple clouds amid rising compute needs for large language models and agentic systems.

For AWS, the agreement lands as the company activates very large AI clusters, including Project Rainier—reported to comprise roughly 500,000 Trainium2 chips—aimed at high-throughput training at lower cost per token.

This background helps explain AWS’s emphasis on price-performance, scale and security in courting frontier model developers.

The partnership builds on recent steps that brought OpenAI’s open-weight models to AWS services.

In August 2025, AWS announced availability of two OpenAI open-weight models through Amazon Bedrock and SageMaker, giving enterprise developers another option for deploying generative AI while retaining control over data and infrastructure.

Media reports at the time described it as the first instance of OpenAI models being offered natively on AWS.

Amazon states that, within Bedrock, customers across sectors—including media, fitness and healthcare—are already experimenting with agentic workflows, coding assistance and scientific analysis using OpenAI technology.

The new compute deal is positioned to support that activity at larger scale and with higher reliability as usage grows globally.

For international retailers and consumer brands, the AWS–OpenAI tie-up is likely to influence cloud strategy, vendor selection and AI roadmaps.

Consolidating training and inference on large, shared clusters could lower unit costs for generative AI services over time. Meanwhile, the use of EC2 UltraServers with Nvidia GB200/GB300 GPUs targets lower latency for production applications such as search, customer service, personalisation and supply-chain forecasting.

Observers also note that multi-cloud arrangements may become more common as leading model providers secure capacity across several hyperscalers to mitigate risk and match workload profiles to different hardware.

“Amazon and OpenAI agree $38bn partnership to boost AI development” was originally created and published by Retail Insight Network, a GlobalData owned brand.

