arxiv:2509.04664

Why Language Models Hallucinate

Published on Sep 4 · Submitted by taesiri on Sep 8
#1 Paper of the day
Authors: Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang

Abstract

Language models produce incorrect statements because training and evaluation procedures reward guessing over acknowledging uncertainty; addressing this requires socio-technical changes to benchmark scoring.

AI-generated summary

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
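To make the incentive argument concrete, here is a minimal sketch (not from the paper; the function names and penalty value are illustrative assumptions) comparing the expected benchmark score of guessing versus abstaining. Under standard binary 0/1 grading, abstaining never beats guessing, whereas a hypothetical grading scheme that penalizes confident errors only rewards guessing above a confidence threshold.

```python
# Sketch: expected score of "guess" vs. "abstain" under two grading schemes,
# given the model's confidence p that its best guess is correct.

def expected_score_binary(p: float) -> float:
    """Standard 0/1 accuracy: correct guess = 1, wrong guess = 0, abstain = 0.
    Guessing is never worse than abstaining, so the optimal test-taker always guesses."""
    return p * 1.0 + (1.0 - p) * 0.0

def expected_score_penalized(p: float, wrong_penalty: float = 1.0) -> float:
    """Hypothetical penalty-aware grading: correct = +1, wrong = -wrong_penalty, abstain = 0.
    Guessing only pays off when p > wrong_penalty / (1 + wrong_penalty)."""
    return p * 1.0 + (1.0 - p) * (-wrong_penalty)

if __name__ == "__main__":
    abstain = 0.0
    for p in (0.2, 0.5, 0.8):
        print(f"confidence={p:.1f}  "
              f"binary: guess={expected_score_binary(p):.2f} vs abstain={abstain:.2f}  "
              f"penalized: guess={expected_score_penalized(p):.2f} vs abstain={abstain:.2f}")
```

With a wrong-answer penalty of 1, the break-even confidence is 0.5; raising the penalty raises the threshold above which guessing is worth the risk, which is the scoring change in the spirit of what the abstract argues benchmarks currently lack.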

Community

Whilst this may serve a powerful purpose, we should be careful about the stage at which it is deployed, for it may be a philosophical dead end. The idea of "Certainty-Constrained Learning" is an oxymoron.

Uncertainty is the precursor of certainty. You cannot be certain without first being uncertain. Learning is the transition from uncertainty to certainty. "Hallucination" is the path. Without that path, there can be no learning.

If we constrain a student to answer only when he is certain, then he will forever be uncertain.

I believe "hallucination" is not something to eradicate, for it is the very thing that enables learning in the first place.

I think we need to think very carefully about where, when, and why to deploy this method. That alone would be a paper in its own right.

That's not to detract from the research, which is solid. Just something to think about.
