Dad demands OpenAI remove ChatGPT's false claim that he murdered his children

At present, ChatGPT doesn't repeat these horrible false statements about Holmen in its outputs. According to Noyb, a more recent update apparently fixed the problem, as "ChatGPT now also searches the Internet for information about people, when it is asked who they are." But because OpenAI previously argued that it could only block information, not correct it, the fake story of a child murderer is likely still included in ChatGPT's internal data. Unless Holmen can get it corrected, Noyb says, that remains a violation of the GDPR.

And OpenAI may not be able to easily delete the data.

Holmen is not the only ChatGPT user who has worried that the chatbot's hallucinations could ruin lives. Months after ChatGPT launched in late 2022, an Australian mayor threatened to sue OpenAI for defamation after the chatbot falsely claimed he had been sent to prison. Around the same time, ChatGPT linked a real law professor to a fabricated sexual misconduct scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges.

Noyb suggested that OpenAI may have filtered some models to avoid producing harmful outputs but did not delete the false information from its training data. Kleanthi Sardeli, a data protection lawyer at Noyb, argued that filtering outputs and posting disclaimers won't be enough to protect Holmen's reputation.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” Sardeli stated. “AI companies can also not just ‘hide’ false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”
