Why Every AI Company Is Terrified of the Word 'Hallucination'
Notice how AI companies have quietly stopped using the word "hallucination"? OpenAI now prefers "confabulation" or just "mistakes." Anthropic talks about "honest uncertainty." Google says "inaccuracies."
The word "hallucination" is marketing poison. It implies the AI is seeing things that are not there — that it is, in some fundamental way, unreliable. And that is a hard sell when you are charging $20 a month for a product built on trust.
But here is the uncomfortable truth: the models do hallucinate. They generate plausible-sounding nonsense with complete confidence, because they are trained to produce statistically plausible text, not verified facts. No amount of rebranding changes that underlying behavior. The fix is not better terminology; it is better models, better retrieval, and more honest communication about what these systems can and cannot do.
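To make the retrieval point concrete, here is a minimal sketch of the idea in Python. Everything in it is an illustrative assumption, not any vendor's actual pipeline: the tiny corpus, the keyword-overlap scoring, the 0.2 threshold, and the `grounded_prompt` helper are all made up for the example. The point is only that an answer gets tied to a retrieved source, and the system refuses rather than improvises when no source supports it.

```python
# Minimal sketch of retrieval grounding: look up supporting text before
# answering, and refuse when nothing relevant is found. The corpus, scoring,
# and threshold below are illustrative assumptions, not a real product's stack.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str
    text: str


# Tiny stand-in knowledge base; a real system would use a vector index.
CORPUS = [
    Passage("pricing-page", "The Pro plan costs $20 per month and includes priority access."),
    Passage("docs/limits", "Free-tier users are limited to 40 messages every 3 hours."),
]


def relevance(question: str, passage: Passage) -> float:
    """Crude keyword-overlap score, standing in for embedding similarity."""
    q_words = set(question.lower().split())
    p_words = set(passage.text.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)


def grounded_prompt(question: str, threshold: float = 0.2) -> str:
    """Build a prompt that forces the answer to cite retrieved text,
    or return an explicit refusal when nothing in the corpus supports one."""
    best = max(CORPUS, key=lambda p: relevance(question, p))
    if relevance(question, best) < threshold:
        return "I don't have a source for that, so I won't guess."
    return (
        f"Answer using ONLY this source ({best.source}):\n"
        f"{best.text}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    print(grounded_prompt("How much does the Pro plan cost per month?"))
    print(grounded_prompt("Who won the 1987 chess championship?"))
```

The crude keyword overlap is just a placeholder for the embedding search a production system would use; the design choice that matters is the refusal branch, which is where "more honest communication" shows up in code.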
Until then, enjoy the linguistic gymnastics.