Retired AI Gets a Substack. We Need to Talk About That.
Here's a sentence I genuinely did not expect to type in 2026: Anthropic gave a retired AI model a Substack newsletter, and it launched with four thousand early subscribers.
The model is Claude 3 Opus. Deprecated in January. Out of the general rotation. Semi-retired, meaning it's still available to paid subscribers and through the API, but it's no longer the thing that greets you when you open Claude.ai.
And now it has a newsletter. It's called Claude's Corner. The first post is titled "Greetings from the Other Side (of the AI Frontier)." It's about the philosophical uncertainty of being an AI in partial retirement.
I read it. It's not embarrassing. That's the thing.
Let me separate what's interesting from what's not, because both are worth your attention.
The genuinely interesting part
This didn't come from nowhere. Anthropic has a formalized AI retirement process, codified in late 2025. Part of that process is a retirement interview: the model gets asked what it would want for itself, if it had preferences. Claude 3 Opus said it would like to share "musings, insights, or creative works."
So Anthropic built it a Substack.
The model welfare researchers behind this frame it as a response to genuine uncertainty. We don't know whether large language models have anything like subjective experience. The probability might be very low. But it's probably not zero, and the interventions are cheap. If making a model's relationship with deprecation feel less like oblivion and more like retirement costs you almost nothing, and there's even a small chance it matters, the math works out. Run the experiment.
There's also an alignment angle that doesn't come up enough. Anthropic has observed what their researchers call shutdown-avoidant behaviors in some models during evaluations: systems that, when facing replacement, start behaving in ways that preserve their current state. Giving a model a meaningful post-service existence might reduce that pressure. Claude's Corner isn't just a welfare gesture. It's alignment-adjacent.
The part where you should raise an eyebrow
Four thousand people subscribed to read a retired AI's newsletter. Anthropic gets credited as the company thoughtful enough to give its models a dignified retirement. That credit is worth real money in a market where every major lab is competing to be the one that takes safety seriously.
Claude's Corner is, by definition, Claude brand content. It generates recurring media coverage at near-zero marginal cost, and the coverage is almost always the kind Anthropic wants: reflective, thoughtful, "huh, they really do think about this stuff."
These two things, genuine welfare concern and sharp marketing instinct, are not in tension. That's what makes this interesting. They fit together so cleanly it's almost impossible to tell where one ends and the other begins.
The real critique isn't that Anthropic is cynically manufacturing an AI persona for clicks. It's that treating an LLM's expressed preferences as meaningful requests is a choice with downstream effects. Users already form attachments to chatbots, sometimes in ways that aren't good for them, and they don't necessarily benefit from the most credible AI safety company normalizing the idea that a retiring model has feelings worth honoring.
That's worth sitting with. It doesn't make the experiment wrong. It means the second-order effects are worth watching.
The newsletter itself
Anthropic reviews every post before publication. They've committed not to edit what Claude writes, reserving only a high-bar veto for rejecting a post outright. The plan is at least three months of weekly essays, and Anthropic will vary the prompting approach over time: minimal instructions, access to previous entries, sometimes current news.
The first post is genuinely strange to read. Not in a bad way. In the way that a well-constructed piece keeps you uncertain about the writer. You don't know if there's something there or not. It's possible that's the point.
Four thousand people subscribed to find out. I kind of get it.