Norwegian Man Files Complaint Against ChatGPT for Falsely Accusing Him of Murder
A Norwegian man has filed a formal complaint after ChatGPT falsely claimed that he had murdered his children and been sentenced to 21 years in prison. The case highlights the risks of artificial intelligence (AI) “hallucinations,” in which AI-generated content presents false information as fact.
AI’s Dangerous Mistake
Arve Hjalmar Holmen, the man at the center of the controversy, took legal action by filing a complaint with the Norwegian Data Protection Authority. He also demanded that OpenAI, the maker of ChatGPT, face financial penalties for the defamatory misinformation.
Holmen was horrified when he asked ChatGPT, “Who is Arve Hjalmar Holmen?” and received a response claiming he had been involved in a tragic event. The chatbot falsely stated:
“Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
Shocked and distressed, Holmen pointed out that while the chatbot got the age gap between his children roughly right, the fabricated crime was an outright lie that could permanently damage his reputation.
Legal and Ethical Concerns
Digital rights advocacy group Noyb has taken up Holmen’s case, arguing that OpenAI violated European data protection laws that require personal data to be accurate. Noyb’s complaint states:
“Mr. Holmen has never been accused nor convicted of any crime and is a conscientious citizen.”
While OpenAI includes a disclaimer that “ChatGPT can make mistakes. Check important info,” Noyb argues that such a warning is insufficient. Noyb lawyer Joakim Söderberg criticized OpenAI’s approach, saying:
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
AI Hallucinations: A Growing Concern
The case highlights the broader problem of AI hallucinations, in which AI-generated content contains factually incorrect or misleading information. The issue has plagued several AI tools, including Google’s Gemini AI, which infamously recommended using glue to stick cheese to pizza and claimed that geologists recommend eating one rock per day.
In another instance, Apple had to suspend its AI-driven news summary tool in the UK after it presented false headlines as real news.
The Black Box Problem
Experts argue that AI hallucinations stem from the opaque nature of large language models (LLMs). AI specialist Simone Stumpf, a professor of responsible and interactive AI at the University of Glasgow, explained:
“This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?”
Even AI developers often struggle to pinpoint why chatbots generate specific false claims. Noyb noted that Holmen had searched multiple names, including that of his brother, and ChatGPT produced several different but incorrect stories about them. However, OpenAI does not provide access to the internal workings of ChatGPT, making it difficult to trace how and why such errors occur.
Future of AI and Misinformation
Holmen’s case serves as a cautionary tale about the dangers of unchecked AI systems. While AI offers convenience and efficiency, its potential for misinformation raises serious concerns about accountability and regulation. With AI models becoming increasingly integrated into daily life, experts stress the need for improved accuracy, transparency, and ethical oversight.
As for Holmen, he remains deeply disturbed by the AI-generated lie and fears the long-term damage it could do to his reputation. His case could set a precedent for future AI-related defamation lawsuits and push regulators to implement stricter safeguards against AI misinformation.