OpenAI explains why AI models are rewarded for inventing facts
Large language models like ChatGPT sometimes generate false information (“hallucinations”) because their evaluation systems reward guessing over admitting uncertainty. In an official post, OpenAI reports that this incentive structure is a fundamental challenge for all current AI models. Hallucinations can occur even with seemingly simple questions. For example, a chatbot gave three different …
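The arithmetic behind this incentive is simple. The sketch below is an illustration, not code from OpenAI's post: it assumes a benchmark that awards one point for a correct answer and zero for anything else, including an honest "I don't know". Under that rule, even a low-probability guess has a higher expected score than abstaining.

```python
# Minimal sketch (illustrative, not OpenAI's code): why accuracy-only grading
# rewards guessing. Assumes a benchmark that scores 1 for a correct answer and
# 0 for everything else, including an explicit "I don't know".

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected accuracy score for a single question.

    p_correct: the model's probability of guessing the right answer.
    abstain:   if True, the model answers "I don't know" and scores 0.
    """
    return 0.0 if abstain else p_correct

# Even a long-shot guess (10% chance of being right) beats abstaining,
# so a model tuned purely for accuracy learns never to say "I don't know".
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

In this toy setup, abstaining is never optimal, which is the incentive structure the post describes: as long as uncertainty earns zero credit, guessing dominates.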