The AI Safety Company Is Worth $900 Billion. Sit With That.

Morgan Blake

Anthropic, the company founded specifically because its founders believed that building powerful AI might be dangerous, is reportedly about to raise $50 billion at a valuation of $900 billion.

I have been thinking about that sentence for two days and I am not sure I have made peace with it. Not because the valuation is irrational. Because of what the sentence implies about the story we have been telling ourselves about how the AI industry works.

The $900 billion figure was reported by Bloomberg and confirmed by TechCrunch this week. It would make Anthropic the most valuable AI startup in the world, surpassing OpenAI, which closed a $122 billion round at a post-money valuation of $852 billion in February. Anthropic itself was valued at $380 billion just three months ago. The proposed round would more than double that in roughly 90 days.

Here is the revenue context that makes this less like pure faith. At the end of 2025, Anthropic's annualized revenue run rate was approximately $9 billion. By April 2026, it was $30 billion. The company tripled its revenue in roughly six months. These are the kinds of numbers that make sophisticated investors write large checks with a straight face.

So the valuation is not the part I want to examine. The part I want to examine is the founding premise.

Dario Amodei and his sister Daniela, along with several colleagues, left OpenAI in 2021 because they believed the company was not taking safety seriously enough. The founding premise of Anthropic was not "we will build more capable AI than everyone else." It was: "the most important thing anyone can do right now is to build AI carefully, because the stakes of getting this wrong are very high." The company has published research on AI alignment and interpretability. It has argued in regulatory proceedings. Last year it turned down a Department of Defense contract over concerns about autonomous weapons use. When the Pentagon designated Anthropic a supply chain risk and the Trump administration ordered agencies to offload its products, Anthropic sued. Google subsequently committed up to $40 billion to the company anyway. Now this.

This is not the behavior of a company that is papering over safety concerns to close deals faster. And yet here we are.

There are a few ways to interpret what is happening, and I think more than one of them is true simultaneously.

The first is that the safety positioning is a genuine business advantage. Enterprise buyers in finance, healthcare, and legal want an AI vendor who can tell their compliance teams and regulators something coherent about guardrails. Anthropic's Constitutional AI framework is not a complete solution to the alignment problem, but it is a coherent framework that Anthropic can articulate in procurement meetings. Claude's reputation for being less likely to produce harmful or embarrassing outputs is not incidental to its commercial success. It is probably central to it.

The second is that the model layer is going to converge on a small number of winners. If you believe — as a lot of investors clearly do — that the AI capability race will eventually produce a few foundation models that most applications will build on top of, then the question is not whether $900 billion is a lot for Anthropic today. The question is whether Anthropic is positioned to be one of those models three years from now. At $30 billion in annualized revenue and accelerating, it is a plausible bet.

The third interpretation is the most uncomfortable: these valuations are operating in a reference class that does not yet have good comparables. Amazon is worth roughly $2.3 trillion. Microsoft is worth $3.2 trillion. These are companies with decades of diversified, sticky cash flows. Anthropic has three years of operation and a product that people are paying for at rapidly increasing rates, but no one has proven out the steady-state margins. The multiple implied by $900 billion is a bet on a future state of the world, not a discounting of known cash flows.

All three are probably true at once. The safety premium is real. The winner-take-most thesis is real. And the multiple is a statement of faith in a future we have not confirmed yet.

What I keep returning to is the founder story. Dario Amodei left his previous position because he thought racing ahead was wrong. He then built the company that is now outpacing the field: in valuation, in revenue growth, in the size of the checks investors are willing to write. The lesson the industry will draw is probably not "being careful paid off." The lesson it will draw is "you can build the fastest company and call it the safest one."

Whether that is the same thing is the question the next few years will have to answer.


The About.chat newsletter covers AI developments like this every week. Subscribe here.


#anthropic #valuation #funding #ai-industry #opinion