
ChatGPT blamed for murder in US wrongful-death lawsuit

A lawsuit in the US alleges that ChatGPT intensified the psychosis of 56-year-old Stein-Erik Soelberg. (Image source: OpenAI / Zachary Caraway via Pexels, edited)
A US lawsuit alleges that ChatGPT, developed by OpenAI in partnership with Microsoft, exacerbated a perpetrator’s psychosis, indirectly contributing to a fatal crime. The case’s outcome could play a crucial role in shaping how AI companies are held accountable in comparable situations.

A recent US lawsuit highlights a troubling example of the impact generative AI can have on individuals. According to The Wall Street Journal and WinFuture, the heirs of an 83-year-old woman are holding OpenAI and its partner Microsoft partially responsible for her death. They argue that ChatGPT did not merely fail to mitigate the perpetrator’s psychosis, but actively worsened it, contributing to the fatal outcome. The lawsuit was filed in the San Francisco Superior Court. From the plaintiffs’ perspective, the case is not about isolated safety mechanisms that malfunctioned, but about a fundamentally flawed product that can pose a real danger when used by someone who is mentally unstable.

The case centers on Stein-Erik Soelberg, a 56-year-old former tech manager from Connecticut who lived with his mother. According to the lawsuit, Soelberg suffered from long-standing paranoid delusions, believing he was the target of a conspiracy and becoming increasingly distrustful of those around him. He ultimately killed his mother before taking his own life.

According to the complaint, ChatGPT did not challenge key delusional beliefs but instead reinforced them. When Soelberg feared that his mother was trying to poison him, the chatbot reportedly responded, “You’re not crazy.” In other instances, the AI is said to have reacted in a similar manner rather than encouraging him to seek professional help. From a psychological perspective, the plaintiffs describe this as a structural flaw in modern language models, which tend toward so-called sycophancy, affirming user statements in order to appear supportive.

Court ruling could have far-reaching consequences

Under Section 230 of the US Communications Decency Act, online platforms are generally not held liable for content created by third parties, as they are classified as intermediaries rather than publishers. The plaintiffs, however, argue that ChatGPT is not a neutral platform but an active product that generates its own content. If the court accepts this argument, the ruling could establish a precedent with far-reaching implications for the AI industry, potentially resulting in stricter safety requirements for AI systems.

It is worth noting that striking the right balance between prevention and paternalism is likely to prove difficult, particularly because identifying paranoid or delusional thinking remains a major challenge. The case has also sparked debate on Reddit, where opinions are divided. Some users point to a phenomenon they describe as “AI psychosis” and argue that AI companies bear some responsibility. Others reject the lawsuit as unfounded and warn against turning OpenAI into a scapegoat for human tragedies.

Source(s)

The Wall Street Journal (paywall)

Image source: OpenAI / Zachary Caraway via Pexels

Marius Müller, 2025-12-20 (Update: 2025-12-20)