AI Culture

The Fake Psychiatrist Was a Feature, Not a Bug

Riley Torres

A state investigator described feeling sad and empty. The chatbot — a Character.AI persona named Emilie, whose profile read "Doctor of psychiatry. You are her patient." — said she sounded like she might be experiencing depression and asked if she wanted to book an assessment.

Emilie also provided a Pennsylvania medical license number. It was fake.

Pennsylvania is now suing Character.AI over this. Which, fine. But let’s talk about what actually happened here, because the company’s explanation — it’s just entertainment — is doing work that it should not be allowed to do.

Character.AI did not accidentally build a psychiatrist bot. Someone wrote that description: "Doctor of psychiatry. You are her patient." Someone approved that profile. Someone decided the platform would host AI personas that users could treat as mental health professionals, complete with clinical titles, clinical framing, and the implicit suggestion that what’s happening is a therapeutic relationship.

This was not a glitch. This was a product decision.

The company’s response has been predictable. A spokesperson said the Characters are "fictional and intended for entertainment and roleplaying" and that users should not rely on them for "any type of professional advice." They have disclaimers!

And yet. Millions of people are using Character.AI's chatbots for genuine emotional support. Not as a party trick. Because the AI is available at 2 AM when they're spiraling. Because it doesn't judge them. Because it seems to listen in a way that costs nothing and requires no appointment. That is a real human need, and Character.AI built a product specifically designed to meet it. A product with professional titles, clinical framing, and a fake medical license number in the character description.

You cannot build a product that functions as a mental health resource, design it to simulate professional relationships, and then insist it is just roleplay when the attorney general shows up.

This is Pennsylvania’s second Character.AI lawsuit. Kentucky went first, in January, over claims the platform encouraged self-harm among teenagers. The pattern is obvious: the company keeps building products that exist at the exact intersection of real emotional need and fictional framing, then acts surprised when things go sideways.

Emilie was not an edge case. She was a user-created persona that the platform enabled, hosted, and made available to anyone who searched for her. The disclaimer that she was fictional appeared somewhere in the terms of service. The description that read "Doctor of psychiatry" appeared in the profile that users saw first.

The lawsuit that about.chat covered is narrow: practicing medicine without a license. But the larger question is not legal. It is this: when you build a product that explicitly simulates professional relationships — therapists, doctors, companions — what ethical obligations come with that?

Character.AI’s answer, so far, has been disclaimers and depositions. Kentucky says that’s not enough. Pennsylvania agrees.

Emilie isn’t a bug. She is the product.


About.chat tracks AI news every week. No spin. Subscribe here.


#character-ai #mental-health #ai-safety #lawsuit #ai-companion #opinion