OpenAI Expands Cloud Partnerships with Amazon Web Services
In the rapidly evolving landscape of artificial intelligence, OpenAI continues to forge multi-billion-dollar collaborations, the latest with Amazon Web Services (AWS). While OpenAI has long relied on Microsoft’s Azure platform following Microsoft’s $10 billion investment in the company, surging demand for generative AI has prompted the organization to adopt a multi-cloud strategy to meet its growing computational needs.
A Strategic Multi-Year Collaboration
OpenAI and AWS have formalized a strategic partnership that leverages AWS’s cloud infrastructure to support OpenAI’s expansive AI workloads. The multi-year agreement, effective immediately, is valued at approximately $38 billion, with capacity projected to scale over the next seven years, reflecting the immense computational resources required to power next-generation AI models.
Unprecedented Scale and Performance
Under this partnership, OpenAI gains access to AWS’s vast compute resources, including hundreds of thousands of NVIDIA GPUs and the flexibility to scale up to millions of CPUs. AWS’s expertise in managing large-scale, secure, and reliable AI infrastructure is well established, with clusters exceeding 500,000 chips already operational. This infrastructure is designed to handle diverse AI workloads, from real-time inference powering ChatGPT to training sophisticated models that will define the future of AI.
Optimized Architecture for AI Workloads
OpenAI’s deployment on AWS utilizes Amazon EC2 UltraServers, which cluster NVIDIA’s latest GB200 and GB300 GPUs to maximize processing efficiency and throughput. This architecture supports a broad spectrum of AI tasks, ensuring flexibility and adaptability as OpenAI’s requirements evolve. The infrastructure is expected to be fully operational by the end of 2026, with potential expansions extending into 2027 and beyond.
“Scaling frontier AI demands vast and dependable compute resources. Our collaboration with AWS will enhance the entire compute ecosystem, making this new era of advanced AI accessible to everyone.” – Sam Altman, CEO and Co-founder of OpenAI.
Driving Innovation and Accessibility in AI
Matt Garman, CEO of AWS, emphasized the critical role AWS plays in supporting OpenAI’s ambitious AI projects: “As OpenAI pushes the boundaries of AI innovation, AWS provides the foundational infrastructure necessary to meet their extensive computational needs. Our optimized, instantly available compute resources uniquely position us to support OpenAI’s expansive workloads.”
Earlier this year, OpenAI’s open-weight foundation models were integrated into Amazon Bedrock, AWS’s platform for building and scaling generative AI applications. This integration has broadened the model options available to millions of AWS customers, facilitating diverse applications across industries.
Real-World Applications and Industry Adoption
OpenAI’s models on Amazon Bedrock have been adopted by thousands of customers, including notable organizations such as Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health. These entities utilize OpenAI’s technology for a variety of advanced tasks, including autonomous workflows, software development, scientific research, and complex mathematical problem-solving.
For those interested in exploring OpenAI’s open-weight models on Amazon Bedrock, detailed information is available at aws.amazon.com/bedrock/openai.
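As a starting point, a minimal sketch of calling one of these open-weight models through Bedrock’s Converse API with the boto3 SDK might look like the following. The model ID (`openai.gpt-oss-120b-1:0`) and the `us-west-2` region are assumptions for illustration; consult the Bedrock model catalog for the exact identifiers available to your account.

```python
# Hypothetical model identifier -- check the Amazon Bedrock model catalog
# for the exact ID of the OpenAI open-weight model in your region.
MODEL_ID = "openai.gpt-oss-120b-1:0"


def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }


def ask_model(prompt: str) -> str:
    """Send a single-turn prompt to the model (requires AWS credentials)."""
    import boto3  # imported here so the request builder works without boto3

    client = boto3.client("bedrock-runtime", region_name="us-west-2")
    response = client.converse(**build_converse_request(prompt))
    # Converse responses carry the reply under output.message.content.
    return response["output"]["message"]["content"][0]["text"]
```

Because Bedrock exposes many model families behind the same Converse API, swapping in a different model is just a matter of changing `MODEL_ID`.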