
Google Just Committed $40 Billion to Anthropic. It Is Also Competing Against It.

Google announced it will invest up to $40 billion in Anthropic on April 24, 2026. Four days earlier, Amazon had committed up to $25 billion in the same company. Together, the two largest cloud platforms on the planet just placed a combined $65 billion bet on an AI startup whose flagship model is, by Google's own internal assessment, beating Google's model in the enterprise market Google needs most.

That internal assessment came from Sergey Brin. In a memo that leaked around the same time the investment was announced, Brin told Google engineers that Gemini is falling behind Claude in agentic coding, and that the company needs to "urgently bridge the gap." The investment and the memo are not contradictions. They are the same strategy, expressed in two different currencies.

This is the strangest deal in the current AI race: the company writing the check is also scrambling to build the product that makes the check unnecessary.

What Happened: $40 Billion in Cash and Compute

The structure of Google's Anthropic investment is split into two tranches.

According to CNBC's reporting on the deal, the first tranche is $10 billion in immediate cash, invested at Anthropic's current valuation of $350 billion, unchanged from the February Series G. The second is up to $30 billion contingent on undisclosed performance milestones, leaving the real total somewhere between $10 billion and $40 billion depending on how those targets are defined.

Alongside the cash, Google Cloud has committed to delivering 5 gigawatts of computing capacity to Anthropic over five years, beginning in 2027. This is not a side arrangement: the compute commitment is likely the more strategically significant part of the deal. Anthropic's model training requires enormous infrastructure, and locking in 5GW of TPU capacity with Google is a multi-year infrastructure guarantee that no amount of cash can replicate.

Google already owned approximately 14 percent of Anthropic before this announcement, representing more than $3 billion in prior investments. The new deal is structured so that Google's ownership stays at or below 15 percent, and the company holds no board seat at Anthropic. Those constraints were almost certainly designed with antitrust regulators in mind.

Amazon's separate $25 billion deal with Anthropic, announced on April 20, follows a similar architecture: $5 billion immediately, up to $20 billion tied to commercial milestones, and Anthropic committing to spend more than $100 billion on Amazon Web Services over the next decade. Amazon also gets approximately 1 gigawatt of Trainium2 and Trainium3 chip capacity deployed for Anthropic by end of 2026.

The result of both deals: Anthropic now has confirmed funding commitments from two of the three largest cloud providers totaling up to $65 billion, plus infrastructure guarantees spanning the next decade. Funding at this scale, from two competing cloud platforms simultaneously, has no direct precedent in enterprise software history.
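The arithmetic behind that $65 billion headline is worth laying out explicitly, since the guaranteed floor is much smaller than the ceiling. A minimal sketch, using only the tranche figures reported above (amounts in billions of USD):

```python
# Immediate vs. milestone-contingent tranches, in billions of USD,
# per the deal figures reported above.
deals = {
    "Google": {"immediate": 10, "contingent": 30},
    "Amazon": {"immediate": 5, "contingent": 20},
}

# Floor: cash committed regardless of milestones.
floor = sum(d["immediate"] for d in deals.values())

# Ceiling: the full amount if every undisclosed milestone is hit.
ceiling = sum(d["immediate"] + d["contingent"] for d in deals.values())

print(f"Guaranteed today: ${floor}B")             # $15B
print(f"If all milestones are hit: ${ceiling}B")  # $65B
```

In other words, only $15 billion of the combined figure is unconditional; the remaining $50 billion depends on performance targets neither company has disclosed.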

Why Google Is Funding the Company That Is Beating Its Own Product

This requires some unpacking, because on its face it makes no sense.

Google DeepMind invented the Transformer architecture that makes modern large language models possible. Google has more compute than almost any organization on earth. It runs one of the most advanced AI research labs in the world. And yet, in the same week that Google announced a $40 billion investment in Anthropic, Sergey Brin sent an internal memo describing Claude's lead in agentic coding as a gap Google must "urgently bridge."

The answer is that Google is running two parallel strategies simultaneously, and it is not entirely sure which one will win.

Strategy one: build Gemini into the leading enterprise AI model. This is what the Brin memo, the DeepMind strike team, and the ongoing model development are all about. Google has not given up on competing directly with Anthropic through better models.

Strategy two: if strategy one does not fully succeed, at least benefit financially from whoever does. A $40 billion investment at a $350 billion valuation, in a company whose annualized revenue recently crossed $30 billion and whose enterprise market share is growing faster than any comparable product in history, is a reasonable financial hedge against your own product not closing the gap in time.

Both strategies are rational. The paradox is that they are being executed at the same time, by the same company, toward the same competitor. As The Next Web noted, Claude is outselling Gemini in the enterprise market that Google considers its own, and Google is responding by funding Claude's continued development.

The closest historical parallel is Microsoft's relationship with OpenAI. In 2023, Microsoft invested $13 billion in OpenAI while developing its own Copilot AI products. The dynamics were similar: large tech incumbent invests in AI challenger, uses challenger's model to accelerate its own products, and builds financial upside into the relationship. The key difference is that no Microsoft internal memo ever surfaced admitting that OpenAI's technology was beating its own. Brin's leaked memo is, in that sense, an unusual piece of transparency about what the investment is actually covering for.

The Antitrust Question Nobody Has Answered

The FTC is watching.

In 2025, the FTC published a study on AI partnerships warning that cloud-provider investments in AI developers risk "locking in the market dominance of large incumbent technology firms." Senators Elizabeth Warren and Ron Wyden sent letters probing both the Google-Anthropic and Microsoft-OpenAI investment structures over antitrust concerns, arguing that these deals represent quasi-acquisitions structured to evade traditional merger review.

The regulatory response has been inconsistent. U.S. antitrust authorities initially moved to force Google to divest its Anthropic stake, then reversed course and are now allowing Google to maintain the investment, with the ownership cap and no-board-seat structure serving as the apparent concessions.

The antitrust concern is structural. When Google both invests in Anthropic and supplies Anthropic's cloud infrastructure, the incentive to price-compete against Anthropic or to develop features that undercut Anthropic's enterprise pitch becomes complicated. If Anthropic succeeds, Google makes money on the investment. If Anthropic fails, Google loses money on the investment. The investment creates a financial interest in Anthropic's success that sits alongside a competitive interest in Gemini's success. Those two interests are not always aligned.

For enterprise customers, this creates a specific ambiguity. A company building on Google Cloud today can use Gemini Enterprise or Claude API, and Google has a financial stake in both outcomes. The vendor relationship is no longer straightforward.

Whether this constitutes an antitrust problem under current law is a live question. What is clear is that the structure of the deal (the sub-15 percent ownership cap, the absence of a board seat, the performance-contingent second tranche) was designed by lawyers who understood the regulatory environment. Whether those structural features are sufficient to satisfy regulators who will eventually review these arrangements more thoroughly is not yet determined.

What Amazon and Google's Simultaneous Bets Mean for OpenAI

The most direct consequence of both deals is what they say about OpenAI.

Amazon Web Services and Google Cloud are the two largest cloud providers in the world. Both have just committed their most significant AI infrastructure investments to Anthropic, not OpenAI. Microsoft is OpenAI's cloud provider, and Microsoft's investment structure with OpenAI predates the current competitive environment. But Microsoft has not replicated the scale of the Amazon and Google commitments to Anthropic in any equivalent new deal with OpenAI announced in the same window.

The consequence is that Anthropic now has three things OpenAI does not: a combined $65 billion in committed funding from two cloud giants, guaranteed compute capacity from both AWS and Google Cloud covering most of the next decade, and a competitive position in enterprise AI that both investors have publicly acknowledged is stronger than their own models.

This concentration of funding in a company that is neither the oldest nor the largest AI lab is itself a market signal. Capital tends to follow momentum, and momentum in the enterprise AI market has been moving toward Claude-based products for the better part of a year.

OpenAI is not in a weak position by any conventional measure. It has $25 billion in annualized revenue, the most recognized AI brand in the world, an $852 billion private valuation, and its own infrastructure arrangements with Microsoft. But the cloud deals are a meaningful signal about where the two largest infrastructure providers think the enterprise AI market is heading. They are, effectively, voting with their balance sheets.

The deal structure also matters for each company's IPO narrative. Anthropic is reportedly targeting a listing as early as October 2026. The combination of $30 billion in ARR, $65 billion in committed investor backing, and guaranteed multi-year infrastructure from two of the three largest cloud providers is a compelling S-1 story. OpenAI's IPO narrative, by contrast, will need to explain why a consumer-heavy business with a single cloud provider and a valuation above $800 billion deserves that premium.

What Comes Next

In the short term, the $10 billion from Google begins flowing into Anthropic's operations, likely accelerating model development and enterprise sales infrastructure. The 5GW compute commitment starts coming online in 2027, which will give Anthropic materially more training and inference capacity than it has today.

The performance milestones attached to the remaining $30 billion have not been disclosed. If they track enterprise revenue, Anthropic's recent trajectory suggests it is likely to clear them. Anthropic's annualized revenue went from $9 billion in December 2025 to $30 billion by April 2026, and its enterprise customer base (companies spending over $1 million per year) doubled from 500 to 1,000 in under two months. If the milestones are tied to model benchmarks, the outcome is less predictable, since benchmark performance depends partly on the very training capacity this funding will enable.
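To put that revenue trajectory in perspective, a quick back-of-envelope calculation shows the implied compound growth rate between the two reported data points. This is an illustration derived from the figures above, not a company disclosure:

```python
# Implied compound monthly growth between the two reported figures:
# annualized revenue of $9B in December 2025 and $30B in April 2026.
start_arr = 9.0    # $B, December 2025
end_arr = 30.0     # $B, April 2026
months = 4         # December -> April

# Solve start * (1 + g)^months = end for g.
monthly_growth = (end_arr / start_arr) ** (1 / months) - 1

print(f"Implied compound monthly growth: {monthly_growth:.1%}")  # ~35.1%
```

Sustaining anything close to that rate is implausible over a decade, which is why the undisclosed milestone definitions matter so much to the real size of the contingent tranche.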

The deeper question is whether Anthropic's dual cloud arrangement, spanning both AWS and Google Cloud simultaneously, creates long-term infrastructure tensions between its two largest investors. Both Amazon and Google want to be Anthropic's primary computing platform. Anthropic currently uses both. At some point, if compute capacity becomes a competitive differentiator between the two cloud providers, Anthropic will face pressure to prioritize one over the other. The $100 billion spending commitment on AWS pushes toward Amazon; the 5GW TPU guarantee pushes toward Google. Anthropic is, for now, keeping both relationships in equilibrium.

For enterprises watching this play out, the practical implication is that the AI vendor landscape is consolidating around infrastructure relationships. Google Cloud customers who use Claude API are paying Google for the compute while Google also receives an investment return if Claude succeeds. AWS customers using Claude have a similar arrangement with Amazon. This convergence of financial and infrastructure interests between AI model providers and cloud platforms is not just a business story. It is a structural shift in how enterprise software will be sourced, priced, and governed for the next decade.

The Brin memo and the $40 billion investment check are the same story told from two different positions. From the product position: we are behind and need to catch up. From the financial position: we do not know if we will catch up, so we need to own a piece of whoever we are chasing. For an organization that invented Transformers and has spent billions on AI infrastructure, that combination of strategic urgency and financial hedging is the most revealing signal of how competitive and uncertain the current AI landscape actually is.

FAQ: Common Questions About Google and Anthropic's $40 Billion Investment

Why would Google invest in a company it is competing against?

Google is running two simultaneous strategies: building Gemini to compete directly with Claude, and maintaining financial exposure to Anthropic in case Gemini does not fully close the gap. The investment is a hedge. The compute commitment is also a cloud infrastructure deal, ensuring Anthropic trains and runs its models on Google Cloud. Google benefits financially if Anthropic succeeds, and keeps Anthropic as a Google Cloud customer either way.

What exactly is Google getting for $40 billion?

Google is getting equity in Anthropic at a $350 billion valuation (capped at 15% ownership, no board seat), and Anthropic is committing to use Google Cloud as its primary compute infrastructure platform. The $30 billion conditional portion depends on undisclosed performance milestones, so the actual amount could be considerably less than the headline figure.

How does Amazon's $25 billion fit into this?

Amazon announced its $25 billion commitment four days before Google, on April 20. Amazon's deal also includes compute infrastructure: approximately 1 gigawatt of Trainium chips for Anthropic by end of 2026. Anthropic committed to spend $100 billion on AWS over 10 years as part of that arrangement. Anthropic is now the primary AI partner for both of the two largest cloud platforms.

Is this deal under antitrust scrutiny?

Yes. The FTC has studied cloud-AI investment structures and flagged concerns about market lock-in. U.S. authorities initially sought to force Google to divest its Anthropic stake, then reversed course. The deal's structure, less than 15% ownership and no board seat, appears designed to stay below thresholds that would trigger mandatory merger review.

The Google-Anthropic deal raises a question that applies to any organization making decisions about AI tools right now: when your AI vendor's investor is also your cloud provider, how do you evaluate those tools independently? Understanding the competitive and financial structure behind the AI services you rely on is increasingly part of sound decision-making. The decisions teams make about which AI tools to standardize on will be shaped not just by capability comparisons but by infrastructure relationships that most users never see.
