https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

Popular AI transcription tool Whisper, lauded for its accuracy, has a major flaw: it invents entire sentences or phrases, raising concerns about its use in healthcare, closed captioning, and other sensitive areas.

Whisper is prone to “hallucinations,” fabricating text that can be racist, violent, or medically inaccurate. In one study of audio snippets, researchers determined that nearly 40% of the hallucinations they identified were harmful or concerning, with some containing invented medical treatments.

Despite warnings from OpenAI (Whisper’s creator) against using the tool in high-risk situations, some medical centers are using Whisper-based tools to transcribe patients’ consultations with doctors. This raises concerns about misdiagnosis and inaccurate medical records.

Whisper’s hallucinations can be particularly dangerous for deaf and hard-of-hearing people who rely on closed captioning, as they have no way to spot fabrications hidden among otherwise accurate text.

OpenAI acknowledges the problem and says it is working to reduce hallucinations, but the tool remains in widespread use.

Whisper is integrated into various platforms, including Microsoft’s cloud environment and some versions of ChatGPT, and the open-source model was downloaded more than 4 million times in the last month alone.
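Part of why the model spreads so easily into downstream products is that the open-source release can be driven with a few lines of Python. The sketch below is illustrative only; the openai-whisper package’s standard calls are used, but the “base” model size and the audio file name are placeholder assumptions, not details from the article.

```python
# Minimal sketch of how the open-source Whisper model is typically invoked,
# using the openai-whisper package (pip install openai-whisper).
# "consultation.wav" and the "base" model size are placeholder choices.
import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.wav")

print(result["text"])  # the full transcript as one string
for segment in result["segments"]:
    print(f"{segment['start']:.1f}-{segment['end']:.1f}s: {segment['text']}")
```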

Researchers found fabricated content ranging from racial commentary to invented medical treatments. In one example, a speaker simply mentioned an umbrella, and Whisper appended a violent narrative about a “terror knife” and people being killed.

The exact cause of the hallucinations remains unclear, but developers suspect they tend to occur during pauses, background noise, or music.
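If pauses and background noise really are the trigger, one plausible (though unverified) mitigation is to flag transcript segments that Whisper itself scores as unlikely to contain speech, using the per-segment no_speech_prob field in the openai-whisper output, and route them for human review rather than trusting them blindly. The threshold below is an illustrative assumption, not a tested value.

```python
# Hypothetical post-processing pass: flag segments the model scores as
# unlikely to contain speech, since fabricated text is suspected to appear
# during pauses and background noise. The 0.6 threshold is illustrative.
import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.wav")  # placeholder audio file

for segment in result["segments"]:
    needs_review = segment["no_speech_prob"] > 0.6
    tag = "REVIEW" if needs_review else "ok"
    print(f"[{tag}] {segment['start']:7.1f}s  {segment['text']}")
```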

The widespread use of Whisper, despite its fabrication issues, highlights the need for stricter regulations on AI technology and improved transparency in its development and implementation.