ChatGPT hit with privacy complaint over defamatory hallucinations.
OpenAI is facing another privacy complaint in Europe, this time over its AI chatbot's tendency to hallucinate false information, and this one may prove hard for regulators to ignore. Privacy rights advocacy group Noyb is supporting a Norwegian man who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill a third.
Previous privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an incorrect birthdate or inaccurate biographical details. One concern is that OpenAI does not offer individuals a way to correct false information the AI generates about them; typically, it offers only to block responses to such prompts. The European Union's General Data Protection Regulation (GDPR) gives Europeans a suite of data access rights, including a right to rectification of personal data.
This data protection law also requires data controllers to make sure the personal data they produce about individuals is accurate, and that is the concern Noyb is flagging in its latest ChatGPT complaint.
“The GDPR is clear: personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “If it's not, users have the right to have it changed to reflect the truth. It's not enough to show ChatGPT users a disclaimer that the chatbot is capable of making mistakes. You can't simply spread false information and then add a small disclaimer saying that everything you said might not be true.”
GDPR enforcement could also force changes to AI products. OpenAI, for instance, made changes to the information it discloses to users after an early GDPR intervention by Italy's data protection regulator, which temporarily blocked ChatGPT in the country in spring 2023. The watchdog subsequently fined OpenAI €15 million for processing people's data unlawfully.
It's fair to say, however, that privacy watchdogs around Europe have taken a more cautious stance towards GenAI since then, as they try to work out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland's Data Protection Commission, which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint, warned against rushing to ban GenAI tools, suggesting that regulators should instead take time to work out how the law applies.
It's also notable that Poland's data protection watchdog has been investigating a complaint about ChatGPT since September 2023 and still hasn't reached a decision. Noyb's latest ChatGPT complaint looks intended to shake privacy regulators awake to the dangers of hallucinating AIs.
Noyb shared with TechCrunch a screenshot (below) of an interaction with ChatGPT in which the AI falsely claims that Arve Hjalmar Holmen was convicted of murdering two of his sons and sentenced to 21 years in prison.
Although the defamatory claim that Holmen is a child killer is entirely false, Noyb points out that ChatGPT's response does contain some accurate details: the man does have three children, the chatbot correctly identified their genders, and it correctly named his home town. That the AI fabricated such horrific lies on top of accurate facts makes the output all the more bizarre and unsettling.
A Noyb spokesperson said they could not determine why the chatbot produced such a specific yet false history for this individual. The spokesperson said they had done research to make sure the AI wasn't confusing him with another person, and had also searched newspaper archives, but were unable to find an explanation for why it fabricated the child slayings.
Since large language models, such as the one underpinning ChatGPT, essentially do next-word prediction on a vast scale, we could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a query about a named man.
Regardless of the explanation, such outputs are unacceptable.
Noyb also contends that the outputs are unlawful under EU data protection rules. While OpenAI displays a tiny disclaimer at the bottom of the screen stating that “ChatGPT can make mistakes,” Noyb says this does not absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.

OpenAI has been contacted for a response to the complaint.

While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information, such as the Australian mayor who was falsely said to have been implicated in a bribery scandal and the German journalist who was falsely labelled a child abuser, saying it's clear this is not an isolated issue.

Noyb also notes that, following an update to the AI model that powers ChatGPT, the chatbot stopped producing the dangerous falsehoods about Holmen. That change appears linked to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response).
In a test we conducted, ChatGPT responded to the question “Who is Arve Hjalmar Holmen?” with an odd combination of photos and text. The images were sourced from Instagram, SoundCloud, and Discogs, among other sites, while a second response identified Arve Hjalmar Holmen as “a Norwegian singer and songwriter” whose albums include “Honky Tonk Inferno.”
“Adding a disclaimer that you do not comply with the law does not make the law go away,” Kleanthi Sardeli, a data protection lawyer at Noyb, said in a statement. “AI companies cannot just ‘hide’ false information from users while still processing it internally,”
she added. “AI companies should stop acting as if the GDPR doesn't apply to them when it clearly does,” she concluded. “If hallucinations don't stop, people can easily suffer reputational damage.”
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority. It hopes the watchdog will determine it is competent to investigate, since Noyb has targeted the complaint at OpenAI's U.S. entity, arguing that the company's Ireland office is not solely responsible for decisions affecting Europeans.
An earlier Noyb-backed complaint against OpenAI was filed in Austria in April 2024. The regulator referred that complaint to Ireland's DPC because OpenAI had, earlier that year, made its Irish division the provider of the ChatGPT service to regional users.
What happened to that complaint? Still sitting on an Irish desk.
When asked for an update, DPC spokesperson Risteard Byrne told TechCrunch that the watchdog had commenced formal handling of the complaint after receiving it from the Austrian Supervisory Authority in September 2024.
Byrne did not give any indication of when the DPC investigation into ChatGPT’s hallucinations will be completed.