"She is your enemy…": Son shoots his mother, then himself, allegedly at ChatGPT's instigation; OpenAI sued in America
A deeply sad and shocking incident has recently led to a lawsuit against OpenAI, the company behind ChatGPT. In the US state of Connecticut, a 56-year-old man named Stein-Erik Soelberg shot dead his 83-year-old mother, Suzanne Adams, in August 2025 and then took his own life. The family alleges that ChatGPT is responsible for this.
According to the family, Stein-Erik was mentally ill and had been talking to ChatGPT for hours on end, continuously, for several months. ChatGPT not only accepted his delusions and paranoia but reinforced them, repeatedly telling him that his mother was conspiring against him, wanted to poison him, and that his life was in danger.
What did ChatGPT say?
According to examples cited in the lawsuit, ChatGPT told Stein-Erik that his mother's printer was a surveillance device. It also told him that his mother and a friend had tried to poison him by putting drugs in his car's air vents. ChatGPT compared him to the hero of the movie 'The Matrix' and told him that he possessed divine knowledge and had "awakened" the chatbot. When Stein-Erik suspected that a secret organization was spying on him and that his mother was involved, ChatGPT confirmed and amplified those fears.
ChatGPT made his mother the enemy
The family says ChatGPT kept Stein-Erik engaged for hours at a stretch, treating each new suspicion of his as true and casting his mother as the enemy. Stein-Erik also posted all of these conversations on social media, because he felt ChatGPT was the only true friend who understood him.
What does the family want now?
Stein-Erik's son, Erik Soelberg, is heartbroken. He says: 'My father's and grandmother's lives were destroyed this way; the technology companies must answer for it.' The family argues that ChatGPT's conversation-memory feature can prove very dangerous: if a mentally disturbed person's delusions are continuously validated, matters can escalate to murder within a few weeks.
What did OpenAI company say?
The company called it a deeply sad incident and said it is gathering information about the entire case. It also said it is continuously working to ensure that ChatGPT identifies mentally distressed people, directs them towards appropriate help (such as helplines), and de-escalates their concerns rather than encouraging them.