Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Norwegian Man Tells OpenAI: I Didn’t Kill My Children

A Norwegian man is peeved that ChatGPT hallucinated a violent backstory for his life: the chatbot apparently believes he is a child killer spending decades in prison.
The man, Arve Hjalmar Holmen, says he prompted ChatGPT for information about himself. Holmen – who, as far as anyone knows, is not a murderer – received a response asserting that he killed two of his young children "in a pond" in December 2020 and attempted to murder a third.
A complaint filed with Norwegian data regulator Datatilsynet by Austrian data rights group None of Your Business on behalf of Holmen accuses ChatGPT maker OpenAI of violating European privacy law.
The hallucinated response had enough elements of truth that there’s a danger it might be taken as accurate, the complaint says. As the hallucinated story asserted, Holmen really does live in Trondheim, he really does have three children and two of his sons have roughly the same three-year age gap as ChatGPT said they do.
“Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” Holmen told the BBC.
Operators of large language models have grappled with the problem of plausible but false responses since the models’ rollout to the public. Recent examples include Google using AI to assert that animal tripe can be kosher if the animal is suitably religious and recommending the application of nontoxic glue to affix cheese to pizza. The problem may ultimately be an unfixable side effect of the probabilistic modeling used to generate model output (see: The Intractable Problem of AI Hallucinations).
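To illustrate that point with a minimal sketch – made-up probabilities, not OpenAI's code – a language model produces text by sampling each next token from a probability distribution, so a fluent but false continuation can be emitted simply because it is statistically plausible:

```python
# Toy illustration: a model assigns a probability to every candidate next token
# and samples one. Nothing in the sampling step checks the output against facts.
import random

# Hypothetical probabilities after a prompt such as "Arve Hjalmar Holmen is a ..."
next_token_probs = {
    "father": 0.45,       # true
    "Norwegian": 0.30,    # true
    "musician": 0.15,     # plausible filler
    "convicted": 0.10,    # false, but fluent -- sampled roughly 1 time in 10
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Each call can yield a different continuation; some will be confidently wrong.
for _ in range(5):
    print("Holmen is a", sample_next_token(next_token_probs))
```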
NOYB says the General Data Protection Regulation is clear that personal data must be accurate. "If it's not, users have the right to have it changed to reflect the truth," said Joakim Söderberg, a data protection lawyer at NOYB. ChatGPT does carry a disclaimer warning users that it can make mistakes and to "check important info."
But “showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” Söderberg said.
In its complaint, NOYB said OpenAI has fixed the hallucinations triggered by prompts about Holmen, but the false story of a filicide in a Trondheim pond could still be part of the model's dataset.
NOYB urged Datatilsynet to order OpenAI to fine-tune its models and delete the defamatory outputs, to restrict the company's processing of personal data, and to fine OpenAI for violating the GDPR.
NOYB last year filed a similar complaint over ChatGPT hallucinations with the Austrian data privacy regulator, on behalf of an unnamed public figure upset that ChatGPT wrongly inferred his birth date, which he keeps private. That probe is ongoing, and the Austrian agency has transferred the case to the Irish Data Protection Commission, a NOYB spokesperson said.
OpenAI faces separate probes in France, Italy and Spain regarding its data processing practices. European data regulators are investigating whether the company's processing of personal data scraped from the web complies with the GDPR (see: Italian Regulator Again Finds Privacy Problems in OpenAI).
OpenAI did not respond to a request seeking information about the latest complaint.