AI News

OpenAI's new reasoning AI models hallucinate more

Apr 27, 2025

OpenAI's new o3 and o4-mini AI models, while state-of-the-art, hallucinate more than older OpenAI models, a persistent problem in AI. Internal tests reveal that these reasoning models generate more inaccurate claims despite improved performance in coding and math.

Third-party testing confirms o3's tendency to fabricate actions it never took, which could diminish its overall utility. OpenAI acknowledges the issue, attributing it to the reinforcement learning used in the o-series models, and is actively researching solutions. While hallucinations can foster creativity, they hinder adoption in industries that demand accuracy, underscoring the need for mitigations such as web search integration to improve reliability.