OpenAI’s reasoning models show increased hallucination rates
OpenAI’s new reasoning models, o3 and o4-mini, hallucinate more frequently than their predecessors, according to the company’s internal testing. Maxwell Zeff of TechCrunch reports that o3 hallucinated in response to 33% of questions on OpenAI’s PersonQA benchmark, roughly double the rate of previous reasoning models. The o4-mini performed even worse, with a 48% hallucination rate. OpenAI acknowledged in its …