Google is investing up to $40 billion in Anthropic, $10 billion now and $30 billion more tied to performance milestones. Anthropic's valuation sits at $350 billion in the official round, while secondary markets are pricing it closer to a trillion. Amazon did a similar deal four days earlier: $5 billion in fresh cash, plus a commitment from Anthropic to spend $100 billion on AWS over ten years.

Every major outlet covered it. Most of them missed the point.

This isn't a story about money. It's a story about infrastructure, dependency, and what happens when the most important AI lab in the world has a compute problem it can't solve alone.

The Problem Anthropic Actually Has

Anthropic's run-rate revenue crossed $30 billion in early 2026, up from roughly $9 billion at the end of last year. That kind of growth is extraordinary by any standard. The problem is that revenue and profitability are two very different things when your cost of goods is renting the most expensive computing infrastructure on the planet.

Claude doesn't run on goodwill. Every inference, every API call, every enterprise deployment runs on chips, and those chips live in data centers that Anthropic doesn't own. The company has been distributing its infrastructure spend across AWS, Google Cloud, and Microsoft Azure simultaneously, and each of those relationships comes with a price. AWS takes as much as 50 percent of gross profits on AI sales through its platform. Google typically takes 20 to 30 percent of net revenue. Microsoft's cut isn't public, but the structure is similar.

So here's the math that matters: Anthropic's fastest path to profitability runs directly through owning, or at minimum controlling, more of its own infrastructure. Every gigawatt of compute it secures on better terms is margin it keeps instead of paying out to a cloud landlord.
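To make that math concrete, here's a small illustrative sketch. The revenue-share rates mirror the figures cited above, but the revenue figure is a placeholder, and for simplicity every cut is applied to the same base, even though the article notes the real bases differ (gross profits for AWS versus net revenue for Google).

```python
# Illustrative only: how platform revenue-share terms change what a
# model provider keeps. Rates mirror the article; the $1B revenue
# figure is hypothetical, not Anthropic's actual financials.

def retained_revenue(gross_revenue: float, platform_cut: float) -> float:
    """Revenue kept after the cloud platform takes its share."""
    return gross_revenue * (1 - platform_cut)

revenue = 1_000_000_000  # hypothetical $1B of sales through a platform

scenarios = [
    ("AWS (up to)", 0.50),
    ("Google (high end)", 0.30),
    ("Google (low end)", 0.20),
    ("Owned capacity", 0.00),
]

for platform, cut in scenarios:
    kept = retained_revenue(revenue, cut)
    print(f"{platform:18s} cut={cut:>4.0%}  kept=${kept / 1e9:.2f}B")
```

The gap between the first and last rows is the whole argument: the same billion dollars of sales nets twice as much when it doesn't pass through a landlord's meter.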

Why Google and Not Just AWS

Amazon named itself Anthropic's primary cloud and training partner in 2023 and has been deepening that relationship ever since, most recently with the Project Rainier cluster built on more than a million Trainium2 chips. The $100 billion, ten-year commitment Anthropic just made to AWS is the most concrete signal of where their primary infrastructure lives.

But Anthropic made a separate and equally significant move earlier this month that most people glossed over. The company signed a deal with Google and Broadcom, the chipmaker that co-designs Google's TPU chips, for multiple gigawatts of TPU-based computing capacity expected to come online in 2027. A subsequent Broadcom securities filing put that figure at 3.5 gigawatts. The $40 billion Google investment announced Friday expands that arrangement to 5 gigawatts over the next five years, with room to add more.

This is the part worth understanding carefully. Broadcom didn't get acquired by Google; it's an independent company that happens to design the custom silicon powering Google's AI infrastructure. The three-way arrangement between Anthropic, Google, and Broadcom is essentially Anthropic locking in a second major compute pipeline that runs on TPUs rather than Nvidia GPUs or Amazon's Trainium chips. TPUs have a meaningful price-performance advantage for the specific workloads Anthropic runs, which is why Anthropic has been expanding that relationship since 2023.

The Vertex AI connection you may have noticed while taking an Anthropic course isn't new: Claude has been available on Google Cloud's Vertex AI platform since early 2024, and the full model lineup is there now. What's changed is the scale of the underlying infrastructure commitment behind that partnership.

The Thing Nobody Wants to Say About Gemini

Google is now the largest outside investor in the company whose model is consistently beating its own.

Claude has held the lead over Gemini in enterprise adoption, coding benchmarks, and developer preference for the better part of two years. Claude Code, launched in late 2025, is widely considered the best AI coding tool available. Anthropic's enterprise customer count has more than doubled in two months, from 500 businesses spending over a million dollars annually to more than 1,000.

Google's internal AI team is aware of all of this. And yet Google keeps writing checks.

The strategic logic, once you work through it, is actually coherent. Google Cloud needs Anthropic as a customer because Google Cloud needs to close the gap on AWS and Azure. By the end of 2025, Google Cloud held 14 percent of the global cloud infrastructure market against AWS at 28 percent and Azure at 21 percent. Anthropic's $30 billion infrastructure commitment is exactly the kind of stable, large-scale demand that Google Cloud's revenue model needs. Morgan Stanley projects AWS will generate over $5 billion from Anthropic by 2027 alone. Google wants a version of that number on its own books.

There's also the defensive play. Apple has been visibly falling behind in AI development and reportedly had its eye on Anthropic for an acquisition. Google's 14 percent stake, now growing substantially with this new commitment, makes that considerably harder for a competitor to pull off. By the time all $40 billion is deployed, Google will own enough of Anthropic that any acquisition attempt becomes a complex negotiation rather than a clean move.

What Google gets is infrastructure revenue, a hedge against Gemini's continued underperformance in the enterprise, and a blocking position against Apple and others. What it doesn't get is control over Anthropic's model development, its safety research, or its product roadmap.

What This Means for Gemini's Future

The honest answer is that Gemini's path forward just got more complicated, not less.

The investment signals that Google has implicitly accepted a two-track AI strategy: Gemini for consumer and first-party products, Claude for enterprise and third-party workloads. That's not a failure; plenty of large companies run multiple products in adjacent spaces. But it does mean that the internal pressure to make Gemini competitive with Claude in enterprise settings may ease, because Google now profits from Claude's enterprise success regardless.

If that's the direction things are heading, Gemini's roadmap will likely sharpen around the things Google can uniquely do: tight integration with Search, YouTube, Workspace, and Android, rather than trying to win the open enterprise market head-to-head against a model it's financially backing.

The more interesting question is whether developers and enterprises should read this as a signal that Claude is effectively Google-endorsed infrastructure at this point. The partnership is deep enough and the financial entanglement significant enough that Claude on Vertex AI is no longer just a third-party option in the model garden. It's becoming a structural part of how Google Cloud competes.

The Line Worth Watching

Anthropic is reportedly considering an IPO as soon as October 2026. If that happens, the current web of strategic investments from Amazon, Google, and Microsoft transforms into a public company with three of the world's largest technology firms as major shareholders, each of whom also sells infrastructure to the company and competes with it in the market for AI products.

That's an unusual structure for a public company, and the regulatory and competitive implications haven't been seriously examined yet. When they are, this week's $40 billion investment will look like the moment the situation became genuinely complicated.

Whether you are looking to maximize your investments or align your tool stack, this is a situation worth paying attention to.
