ChatGPT falsely claims you're a child killer, and you want it to stop? Come on, GDPR

A Norwegian man was shocked to learn that ChatGPT had falsely claimed he murdered two of his sons and attempted to kill a third. The conversation included real details of his personal life, and privacy lawyers now argue that this mix of fact and fiction violates GDPR rules.

Austrian nonprofit None Of Your Business (noyb) filed a complaint against OpenAI [PDF] with Norway's Data Protection Authority on Thursday, accusing the Microsoft-backed super-lab of violating Article 5 of Europe's General Data Protection Regulation (GDPR). The filing claims ChatGPT falsely depicted Arve Hjalmar Holmen as a child killer in its output while mixing in accurate details such as his hometown and the number and gender of his children. Under the rules, personal data must be accurate no matter how it is processed.

“The GDPR is clear,” said noyb data protection lawyer Joakim Soderberg. “Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth.”

Correcting false information is harder than it seems, as noyb has previously argued. Privacy activist Max Schrems led the group when it filed a similar complaint against OpenAI last year, claiming it was impossible to correct false personal data in ChatGPT outputs.

In its statement about the latest complaint, noyb said OpenAI had previously claimed it could not correct false data generated by the model, whose output is produced statistically with a random element. The design of today’s large generative neural networks makes it inevitable that things will go wrong.

According to the lab, it can only “block” specific data using a filter on the input or output when certain prompts are used, which leaves the system open to spitting out incorrect information. Noyb argues that under the GDPR it doesn’t matter whether bad output reaches the public or not; false information still violates Article 5’s accuracy requirement.

OpenAI has tried to sidestep its obligations by adding a warning that the tool “can make mistakes,” noyb added, but the group argues this doesn’t let the multi-billion-dollar business off the hook. Soderberg said that showing ChatGPT users a small disclaimer about the chatbot’s ability to make mistakes is not enough. “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

It’s not the first time OpenAI has been accused of defamation. A Georgia resident sued the outfit in 2023 after ChatGPT incorrectly told a reporter that he had embezzled funds from a gun-rights group. ChatGPT also got into trouble in Australia after it falsely connected a mayor with a foreign bribery scandal. The US Federal Trade Commission launched an investigation into OpenAI the same year, looking at how it handled personal data and whether it had violated consumer protection laws. OpenAI wants Uncle Sam to let it scrape everything and to stop other countries from complaining. But the AI giant might already have a partial solution.

AI companies cannot ‘hide’ false information from users while they still process false information internally

Although the date of Holmen’s defamatory chat with ChatGPT was redacted in the complaint, the document notes it took place before OpenAI released ChatGPT models that could search the live web in October 2024.

“ChatGPT now also searches the internet for information about people, when it is asked who they are,” Noyb said. “For Arve Hjalmar Holmen, this luckily means that ChatGPT has stopped telling lies about him.”

However, the complaint notes that a link to the original conversation still exists, indicating that the false information remains within OpenAI’s system. Noyb claims the data could also have been used to train the models, meaning the inaccuracies may persist behind the scenes even if users no longer see them, which keeps the alleged GDPR violations relevant.

“AI companies can also not just ‘hide’ false information from users while they internally still process false information,” said noyb data protection lawyer Kleanthi Sardeli. The Register has asked OpenAI for comment. ®
