
OpenAI and Microsoft Rewrote Their Partnership - And OpenAI Was Losing Either Way

OpenAI and Microsoft, the most consequential partnership in AI, ended their exclusive cloud arrangement on April 27 - quietly erasing a clause that gave Microsoft sole rights to distribute OpenAI's models and forced enterprise customers on AWS or Google Cloud to work around it. Both companies framed the change as the next phase of the partnership. The timing tells a different story. Anthropic's annualized revenue hit $30 billion in April 2026, surpassing OpenAI's roughly $25 billion for the first time - and Claude has been available on AWS, Google Cloud, and Azure simultaneously for months. The OpenAI-Microsoft partnership, once the most valuable distribution arrangement in tech, had quietly become OpenAI's most expensive constraint.

What Changed in the OpenAI-Microsoft Partnership

The revised terms, announced simultaneously by both companies on April 27, restructure nearly every financial and distribution element of their original agreement.

The most significant change: Microsoft's exclusive license to OpenAI's intellectual property converts to a non-exclusive license through 2032. OpenAI is now free to sell its products across any cloud provider - AWS, Google Cloud, or others - without Microsoft's approval. The one carve-out: OpenAI products still ship "first on Azure" unless Microsoft cannot or chooses not to support the necessary capabilities.

On the financial side, the changes cut both ways. Microsoft will stop paying OpenAI a revenue share on products it resells through Azure. OpenAI, however, continues paying Microsoft a revenue share of 20% through 2030 - now subject to an undisclosed cap. OpenAI's commitment to purchase at least $250 billion in Azure services by 2032 remains intact.

The deletion of the AGI clause is quietly the most structurally significant change. The original agreement contained a provision that would have allowed OpenAI to walk away from its commercial obligations to Microsoft if it declared artificial general intelligence achieved. That escape hatch is gone. Whatever OpenAI believed about near-term AGI timelines in 2023, those beliefs produced contract terms very different from the ones both parties signed this week.

Amazon CEO Andy Jassy publicly welcomed the restructured deal, noting it meant OpenAI's models would become available to customers on AWS Bedrock - the managed AI service through which Amazon distributes third-party foundation models. That statement from a competitor's CEO, appearing in the same news cycle as OpenAI's announcement, signals exactly how this deal is being read in the market.

"The new deal with Microsoft was essential for OpenAI to be successful in the enterprise market," said Gil Luria, analyst at D.A. Davidson & Co. "AWS and Google Cloud enterprise customers have been limited in their ability to integrate OpenAI's products because of the exclusive relationship and will now be more likely to consider OpenAI alongside Anthropic."

Microsoft retains its position as OpenAI's primary cloud infrastructure partner and holds a non-exclusive IP license through 2032. What it gave up is the moat.

Why It Matters Beyond Cloud Logistics

The practical effect of the OpenAI-Microsoft partnership restructuring is simple: an enterprise running its infrastructure on AWS no longer has to make an architectural exception to use OpenAI's models. Before April 27, calling GPT-4 from an AWS-native stack meant routing requests to Azure endpoints - adding latency, cross-cloud data transfer costs, and compliance friction that most enterprise procurement teams simply won't accept. Many didn't accept it: they chose Anthropic instead.
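To make the friction concrete, here is a minimal sketch of the two REST endpoint shapes an AWS-native stack had to juggle before April 27. The resource, deployment, and model names are hypothetical placeholders; the URL patterns follow the publicly documented Azure OpenAI and AWS Bedrock runtime endpoint formats.

```python
# Sketch of the pre-April-27 routing asymmetry for an AWS-native stack:
# OpenAI models sat behind Azure endpoints (a cross-cloud hop), while
# Anthropic models were reachable natively through Bedrock.
# Names like "acme-openai" are illustrative, not real resources.

def azure_openai_url(resource: str, deployment: str,
                     api_version: str = "2024-02-01") -> str:
    """Azure OpenAI chat-completions endpoint - calling this from inside
    AWS means cross-cloud egress fees and extra latency."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def bedrock_invoke_url(region: str, model_id: str) -> str:
    """AWS Bedrock runtime InvokeModel endpoint - same-cloud traffic,
    covered by existing AWS IAM, VPC, and compliance tooling."""
    return (f"https://bedrock-runtime.{region}.amazonaws.com"
            f"/model/{model_id}/invoke")

# Before the restructuring: GPT-4 traffic had to leave AWS entirely...
print(azure_openai_url("acme-openai", "gpt-4"))
# ...while Claude traffic stayed inside it.
print(bedrock_invoke_url("us-east-1", "anthropic.claude-3-sonnet"))
```

The asymmetry is the whole story: one of those URLs keeps requests, billing, and compliance inside the cloud the enterprise already runs on, and the other doesn't.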

That friction was architectural, but its consequences were commercial. Anthropic built its enterprise business precisely in the space the exclusive arrangement foreclosed. Claude is the only frontier AI model available simultaneously on all three major cloud platforms - AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure - and that breadth of distribution is central to Anthropic's enterprise performance. The company now counts more than 1,000 businesses spending over $1 million annually, a number that more than doubled year-over-year. Approximately 80% of Anthropic's revenue comes from enterprise customers, versus a more consumer-heavy composition at OpenAI.

The competitive gap this created wasn't invisible to OpenAI. Sam Altman publicly acknowledged "points of tension" with Microsoft months before the restructuring was announced. What he didn't say publicly was that the tension had a specific commercial cost: Anthropic's annualized revenue, which sat at roughly $9 billion at the end of 2024, had grown to $30 billion by April 2026 - surpassing OpenAI for the first time.

The scale of that enterprise gap matters because enterprise AI contracts are sticky. A company that builds internal tooling around Claude on AWS Bedrock doesn't switch because a competitor's model became available on the same platform. It switches when its next major workflow initiative comes up for a tool decision - which might be six months to two years away. Every quarter that OpenAI was unavailable on AWS and Google Cloud without architectural workarounds was a quarter when enterprises defaulted to Anthropic or Gemini for new projects.

Google's response to the restructured deal was immediate and deliberate. At its Cloud Next 2026 conference, Google announced a $750 million partner fund specifically targeting agentic AI deployments - a direct signal that it views newly multi-cloud OpenAI as a potential distribution partner, not just a competitor's preferred model provider.

The broader implication extends beyond any single company's cloud strategy. When the most capable AI models are accessible from any cloud provider on equivalent terms, the competition shifts entirely to model quality and developer experience. Enterprise infrastructure teams don't choose clouds to get AI - they choose AI and expect it to work wherever their infrastructure already runs. The OpenAI-Microsoft exclusivity arrangement was built on a world where that expectation wasn't yet mainstream. That world no longer exists.

The Uncomfortable Truth Behind the Deal

The narrative that emerged on April 27 - two partners maturely evolving their relationship - obscures a more complicated reality. The exclusive arrangement wasn't a constraint Microsoft imposed on OpenAI. OpenAI accepted it.

In 2019, Microsoft invested $1 billion in OpenAI, and the two companies established a cloud partnership that gave Microsoft exclusive access to OpenAI's research, products, and APIs. At the time, OpenAI needed the compute. The deal made sense: Microsoft got a stake in the most promising AI lab in the world; OpenAI got the infrastructure to train and serve its models without raising additional capital. When Microsoft invested another $13 billion following ChatGPT's viral launch in late 2022, the exclusive terms deepened. OpenAI received cash and cloud capacity; Microsoft received the right to be the sole commercial distribution channel for everything OpenAI built.

The original logic was sound. The execution ran into reality.

Anthropic was incorporated in 2021 by former OpenAI researchers and made a deliberate choice to distribute Claude across all three major cloud providers from early in its commercial life. That choice had no obvious upside in 2022, when OpenAI was the clear market leader and Azure was the obvious enterprise AI destination. By 2025, it had become a structural advantage: every enterprise already running on AWS or Google Cloud could integrate Claude with zero additional cloud complexity. OpenAI's models required a workaround.

The financial picture adds context to the urgency. OpenAI's own projections show a net loss approaching $14 billion in 2026, with cash burn estimated at $17 billion - and, by those same projections, the company does not turn cash-flow positive until 2030. The Wall Street Journal reported in late April that OpenAI has missed its target to reach 1 billion weekly active users by the end of 2025, missed its annual revenue target for ChatGPT, and missed multiple monthly revenue targets this year. An IPO, which OpenAI has been signaling for months, requires a credible enterprise growth story. The exclusive arrangement made that story harder to tell.

The Amazon deal crystallized the conflict. In February 2026, Amazon committed up to $50 billion to OpenAI: $15 billion available immediately, with another $35 billion contingent on conditions being met. The investment came with a condition of its own - AWS would have exclusive rights to serve OpenAI's new agent-building tool, Frontier. That term directly conflicted with OpenAI's obligation to offer its products exclusively through Microsoft. The restructuring announced on April 27 resolved that legal contradiction. Ending the Microsoft exclusivity was, in one sense, the price of unlocking the Amazon investment.

The AGI clause removal deserves separate attention. When it was written into the original partnership, the clause functioned as a kind of asymmetric option: if OpenAI achieved AGI, it could unilaterally terminate its commercial obligations to Microsoft, presumably because at that point OpenAI would have leverage to renegotiate everything from a position of power. Removing the clause suggests one of two things: either OpenAI's near-term AGI timeline expectations have become more conservative, or both parties concluded that a clause with that much ambiguity - what counts as AGI? who decides? - created more legal uncertainty than it resolved. Either reading is unflattering to the original drafting.

Microsoft's position is not as weakened as the headline suggests. The company stops paying OpenAI a revenue share on Azure resales, which reduces a financial obligation without losing a capability. It retains a non-exclusive IP license through 2032, which means every OpenAI model Microsoft has been building products on top of - Copilot, Azure AI Studio, GitHub Copilot - remains accessible. And OpenAI's $250 billion Azure purchase commitment is still in place. Microsoft traded exclusivity for financial relief and retained the infrastructure relationship that actually drives its AI revenue.

What Microsoft also did, quietly and in parallel, was accelerate its own model development. The Phi series of small language models represents Microsoft's hedge: if OpenAI's models become available everywhere, Microsoft needs AI capabilities it owns outright. That hedge looks more important now than it did when the partnership was exclusive.

How the AI Cloud Race Just Changed

The restructuring effectively ends a two-year period in which AI cloud strategy was partly determined by which foundation model provider each cloud had locked up.

Before April 27: AWS had Anthropic; Google had Anthropic and Gemini; Microsoft had OpenAI exclusively. Enterprise buyers made infrastructure decisions partly around which cloud gave them access to which models. That segmentation is dissolving.

Anthropic's advantage built during the exclusivity period is real and won't evaporate immediately. The company has more than 1,000 enterprise clients spending over $1 million annually, and switching costs in enterprise AI are meaningful - workflows, fine-tuned prompts, integration work. But Anthropic no longer holds a structural distribution advantage. OpenAI, once fully integrated into AWS Bedrock and potentially Google Cloud Vertex AI, competes for those enterprise relationships on roughly equal terms.

Google's position is worth watching closely. The company made a $40 billion commitment to Anthropic just three days before the OpenAI-Microsoft restructuring - a move that looks, in retrospect, like a hedge placed before the board changed. Google Cloud now has potential access to both leading frontier AI providers. Amazon and Google are the structural winners of a world where OpenAI competes across all clouds. Microsoft's AI moat now rests on the depth of its enterprise relationships, developer tooling, and the $250 billion Azure commitment that keeps OpenAI's infrastructure spending on its platform - not on exclusive model access.

The historical parallel that holds up best here is not a tech deal - it's content distribution. When Netflix moved from licensing content exclusively to building its own while distributing across platforms, the transition reduced its leverage but expanded its addressable market. OpenAI's multi-cloud shift follows the same logic: give up the exclusivity premium, gain the audience.

The key variable is whether OpenAI's enterprise market expansion arrives fast enough to matter. The company's IPO timeline, its cash burn rate, and its competitive position relative to Anthropic all operate on timelines measured in quarters, not years.

What's Next

The most immediate consequence is the activation of the Amazon deal's terms. AWS Bedrock integration with OpenAI's models - including the exclusive distribution of Frontier, OpenAI's new agent-building tool - can now proceed without legal conflict with the Microsoft agreement. Enterprise developers on AWS will likely see OpenAI model availability within weeks, not months.

Google Cloud's next step is less certain. The revised deal removes the legal barrier to a Google-OpenAI distribution arrangement, and Google is reportedly reviewing the new terms. A Google Cloud deal would give OpenAI the same three-platform footprint that Anthropic has used to build its enterprise business - and would mark a genuinely new competitive configuration in the AI infrastructure market.

For OpenAI's IPO ambitions, the multi-cloud transition matters more as a story than as immediate revenue. Enterprise buyers don't switch AI providers on a quarterly basis. The partnership restructuring expands OpenAI's total addressable enterprise market on paper; realizing that expansion will take time that OpenAI's cash burn rate does not make cheap. But the story matters to investors. An OpenAI that sells through only one cloud is an OpenAI whose enterprise ceiling is bounded by Azure's footprint. An OpenAI that sells across AWS, Google Cloud, and Azure is an OpenAI whose addressable market looks more like the entire enterprise AI market - a fundamentally different multiple on any forward revenue projection.

The deeper question the deal raises is about the future of AI infrastructure competition. If frontier AI models become available on equivalent terms across all major cloud providers, the differentiation moves entirely to the model layer - capability, reliability, cost per token, context handling, domain-specific performance. That's a race OpenAI has been running since 2022. The difference is that it now has to run it without the protective barrier of cloud exclusivity.

Microsoft will remain OpenAI's largest single cloud customer by a wide margin, anchored by that $250 billion Azure commitment. But "primary partner" now means something different than it did last week. It means preferred vendor, not exclusive distributor. In a market where Anthropic, Google, and Meta are all shipping competitive frontier models, that distinction matters more than either company's joint press release suggested.

The OpenAI-Microsoft partnership restructuring is less a story about two companies renegotiating a contract and more a story about what happens when the distribution moat you built stops protecting you. The exclusive arrangement that gave Microsoft a dominant position in enterprise AI also gave Anthropic a two-year window to build its enterprise business without facing OpenAI's full distribution. That window is closing - but the gap it created is real.

For knowledge workers trying to navigate an AI landscape that's becoming more complex, not simpler, the platform question matters less than the workflow question. When you need AI that actually knows your work - not just the public internet - which cloud it runs on is the wrong variable. If you're thinking about how to build a personal AI knowledge base that stays with you regardless of which frontier model wins the enterprise cloud race, that's the more durable question worth asking.
