
AI language models, like those that power ChatGPT, Gemini, and Claude, excel at producing exactly this kind of believable fiction because they are built, first and foremost, to produce plausible outputs, not accurate ones. Every response is a statistical approximation of patterns absorbed during training. When those patterns don’t align well with reality, the result is confident-sounding misinformation. Even AI models that can search the web for real sources can potentially fabricate…
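The gap between "plausible" and "accurate" can be illustrated with a deliberately tiny sketch. This is not how real LLMs work internally (they use neural networks over vast corpora, not bigram counts), but the toy bigram model below shows the core failure mode: it completes a prompt by following the statistically most likely word pattern, regardless of whether the completion is true. The corpus, prompt, and function names are all hypothetical.

```python
from collections import Counter, defaultdict

# Toy training corpus: the "model" only ever sees these patterns.
# It stores word statistics, not facts.
corpus = [
    "the capital of france is paris",
    "the capital of spain is madrid",
    "the capital of italy is rome",
]

# Count bigrams: which word tends to follow which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def complete(prompt, max_words=6):
    """Greedily append the statistically most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Ask about a country the corpus never mentions: the model still
# produces a fluent, confident answer by pattern-matching alone.
print(complete("the capital of portugal is"))
```

The completion will name one of the capitals seen during "training", because after "is" those are the only words the statistics know. It reads fluently and follows the right grammatical shape; it is plausible, just not accurate, which is the distinction the paragraph above draws.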








