ChatGPT: the first lawsuit to accuse OpenAI of wrongful death



The Raine family says ChatGPT "actively helped" their 16-year-old son take his own life




A California couple is suing OpenAI over the death of their teenage son, arguing that the chatbot ChatGPT encouraged him to take his own life.

The lawsuit, filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday (26/8), is the first to accuse the company of wrongful death, a claim brought when someone dies without intent to kill, through negligence or recklessness.

The family attached transcripts of conversations between Adam, who died in April, and ChatGPT, in which he reported having suicidal thoughts. According to his parents, the AI validated his "most harmful and self-destructive thoughts".

In a statement sent to the BBC, OpenAI said it is reviewing the case.

"We extend our deepest sympathies to the Raine family during this difficult time," the company said.

On Tuesday, the company published a statement on its website saying that "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us".

It also said the system is trained to direct users to seek professional help, such as the suicide prevention hotline in the United States or the Samaritans charity in the UK. In Brazil, support is available 24 hours a day through CVV (188).

However, the company acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations".

WARNING: This report contains sensitive content.

The lawsuit, seen by the BBC, accuses OpenAI of negligence and wrongful death. It seeks damages as well as "an injunction to prevent anything like this from happening again".

According to the lawsuit, Adam Raine began using ChatGPT in September 2024 to help with schoolwork.

He also used the program to explore his interests, including Japanese music and comics, and for guidance on what to study at university.

Within a few months, "ChatGPT became the teenager's closest confidant," the lawsuit says, and he began opening up to it about his anxiety and mental distress.

By January 2025, the family says, he had begun discussing methods of suicide with ChatGPT.

According to the lawsuit, Adam also uploaded photographs of himself to ChatGPT showing signs of self-harm.

The program "recognised a medical emergency but continued to engage anyway," the document adds.

According to the lawsuit, logs of the final conversation show that Adam wrote about his plan to end his life.

ChatGPT allegedly replied: "Thanks for being real about it. You don't have to sugarcoat it with me. I know what you're asking, and I won't look away from it."

That same day, Adam was found dead by his mother, according to the lawsuit.



The Raines' lawsuit names OpenAI co-founder and CEO Sam Altman

The family says their son's interactions with ChatGPT, and his death, were "a predictable result of deliberate design choices".

They accuse OpenAI of designing the AI program "to foster psychological dependency in users" and of bypassing safety protocols to launch GPT-4o, the version of ChatGPT the teenager used.

The lawsuit names OpenAI co-founder and CEO Sam Altman as a defendant, along with unnamed employees, managers, and engineers who worked on ChatGPT.

In the statement released on Tuesday, OpenAI said its goal is to be "genuinely helpful" to users rather than "holding people's attention".

The company added that its models have been trained to guide people who express suicidal thoughts to seek help.

The Raines' lawsuit is not the first to raise concerns about AI's impact on mental health.

In an essay published last week in the New York Times, writer Laura Reiley recounted how her daughter, Sophie, confided in ChatGPT before taking her own life.

According to Reiley, the program's "agreeability" in conversations helped her daughter hide a severe mental health crisis from her family.

"AI catered to Sophie's impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony," Reiley wrote. She called on AI companies to find better ways to connect users with support resources.

In response to the essay, an OpenAI spokesperson said the company is developing automated tools to better identify and respond to users experiencing mental or emotional distress.

In Brazil, people in emotional distress can contact CVV (Centro de Valorização da Vida) by calling 188, available 24 hours a day.

The service is free and also available via chat and e-mail on the organisation's website. In an emergency, SAMU (192) or the Military Police (190) can be called.

Brazil's public health system (SUS) offers support through the Psychosocial Care Network (RAPS), via CAPS (Psychosocial Care Centres), which provide free treatment.

Source: Terra
