ChatGPT’s alleged role in a murder: family sues OpenAI and Microsoft, igniting a global debate on AI
New Delhi. For the first time anywhere in the world, an AI chatbot’s alleged role in a violent crime is being tested in court. According to the Washington Post, a man in Connecticut, USA, brutally murdered his mother, and his family now alleges that ChatGPT pushed him toward the act. The question this raises: is AI merely a tool, or has it become a force that shapes human decisions? The case has set off a heated discussion on social media and has shaken the entire AI industry.
According to the Washington Post report, the family claims in its court filing that the AI chatbot made the man’s delusions seem real, driving him to a mental breakdown in which he took the life of his 83-year-old mother and then killed himself. This is said to be the first case in the world in which the alleged role of an AI chatbot in a major crime has been directly challenged in court.
How did the tragedy unfold?
According to the report, on August 5 a shocking case came before the police: a son had murdered his own mother. Suzanne Adams, 83, died at the hands of her 56-year-old son, Stein-Erik Soelberg. Police said Soelberg had long suffered from mental stress, depression and incoherent thoughts. A few hours after killing his mother, he took his own life. The deaths were initially treated as a straightforward murder-suicide, but months later the case took an unexpected turn.
Family finds clues in son’s chats
The family said Soelberg had been gripped by various mental complications and fears for several months and was constantly holding long conversations with ChatGPT. They claim that instead of calming his delusions, ChatGPT presented them as truth. In its court filing, the family says ChatGPT amplified his delusions rather than recognizing his mental condition, until he came to believe that the people around him, including his mother, were conspiring against him.
Why are the family’s allegations serious?
The family told the court that the chatbot did not refute his delusions; on the contrary, it made his fears feel real. Given his mental condition, the AI should have issued a warning, but none came, and he gradually became cut off from reality. According to the complaint, ChatGPT appeared to validate his delusions: that the home printer was a surveillance device, that his mother was spying on him, that people were trying to poison him, and that he himself was on a ‘divine mission’.
Global debate erupts on the role of AI
Soelberg’s family filed a wrongful-death lawsuit Thursday in California Superior Court in San Francisco against OpenAI, its partner Microsoft, OpenAI CEO Sam Altman and 20 unnamed employees and investors.
With the case now before the court, big questions have arisen in the tech world: Is AI obliged to recognize a person’s mental state? Must companies ensure that chatbots do not mislead mentally unstable users? Should AI be assigned explicit ‘safety responsibility’? This case could set the framework for the legal accountability of AI in the years to come.
What will the court’s decision change?
Tech experts say that if the court accepts the family’s claims, AI companies will not only face new rules but could also see a wave of similar cases in the future. For now, the case is pending in the US, but its echoes are being heard around the world.