OpenAI’s Whisper transcription tool has hallucination problems, researchers say

Software developers, engineers, and academic researchers have serious concerns about hallucinated text in transcripts from OpenAI’s Whisper, according to an Associated Press report.
While there’s no shortage of discussion about generative AI’s tendency to hallucinate (basically, to make things up), it’s a bit surprising that this is an issue in transcription, where you’d expect the transcript to closely follow the audio being transcribed.
Instead, researchers told the AP that Whisper has introduced everything from racial commentary to imagined medical treatments into transcripts. And that could be particularly disastrous as Whisper is adopted in hospitals and other medical settings.
A University of Michigan researcher studying public meetings found hallucinations in eight out of every 10 audio transcriptions. A machine learning engineer who studied more than 100 hours of Whisper transcriptions found hallucinations in more than half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcriptions he created with Whisper.
An OpenAI spokesperson said the company is “continually working to improve the accuracy of our models, including reducing hallucinations” and noted that its usage policies prohibit using Whisper “in certain high-stakes decision-making contexts.”
“We thank the researchers for sharing their findings,” they said.