OpenAI Whisper prone to hallucinations, researchers say

Researchers have discovered that Whisper, an AI-powered transcription tool used across industries including healthcare, is prone to fabricating chunks of text or even entire sentences, a phenomenon known as hallucination. According to interviews conducted by The Associated Press with software engineers, developers, and academic researchers, these hallucinations can include problematic content such as racial commentary, violent rhetoric, and imagined medical treatments. Experts are particularly concerned about the use of Whisper-based tools in medical settings, where inaccurate transcriptions could have serious consequences. Despite OpenAI’s warnings against using Whisper in high-risk domains, hospitals and medical centers have begun using the tool to transcribe patients’ consultations with doctors.
