When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. Bernklau had served for years as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.
The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.
But why did Copilot hallucinate these terrible and false accusations?
It’s frustrating that the article treats the problem as if the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.
Martin’s name is a natural feature of the data set. But when they should be talking about fixing the AI model to stop hallucinating, or at least letting humans correct it, the only fix on offer seems to be censoring the incorrect AI response, which implies it was saying something true but salacious.
Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.
I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.
I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.
If these companies are marketing their AI as being able to provide “answers” to your questions, they should be liable for any libel it produces.
If they market it as “come have our letter generator give you statistically associated collections of letters in response to your prompt,” then I guess they’re in the clear.
If they’re presenting it as an authoritative source of information, then they should be held to the standard they claim.
Isn’t this literally a subplot in the movie Brazil?
No, you’re thinking of the first scene of the movie, where a fly falls into the teletype machine and causes it to type ‘Buttle’ instead of ‘Tuttle’.
It’s not my fault that Buttle’s heart condition didn’t appear on Tuttle’s file!