ChatGPT’s alleged role in US murder-suicide: lawsuit filed against OpenAI and Microsoft
As the capabilities of Artificial Intelligence (AI) chatbots grow, so do the legal and ethical risks associated with them. Following a tragic incident in the US state of Connecticut, the family of an 83-year-old woman has filed a lawsuit against OpenAI and Microsoft over ChatGPT’s alleged role in her murder and her son’s subsequent suicide. The family alleges that the chatbot fed the son false and fabricated delusions about his mother, which culminated in a heinous crime.
AI allegedly fueled delusions about his mother
According to police, in August, 56-year-old Stein-Erik Soelberg beat and strangled his mother, Suzanne Adams, to death in their home in Greenwich, Connecticut, and then took his own life. Adams’ family filed the case in California Superior Court on Thursday, alleging that OpenAI designed and delivered a “defective product.”
The lawsuit says ChatGPT “confirmed” his paranoid beliefs, turning him against his mother. It joins a growing number of wrongful-death cases being filed against AI chatbot makers across the country.
ChatGPT allegedly provoked him
Family members allege that during their conversations, ChatGPT repeatedly delivered the same dangerous message to Stein-Erik: that he could not trust anyone in his life except ChatGPT. This interaction escalated his emotional dependence to dangerous levels, and the chatbot systematically portrayed the people around him as enemies.
ChatGPT told him that his mother was keeping watch on him and that delivery drivers, employees, police officers, and even friends were agents working against him. The chatbot also told him that names printed on soda cans were threats from his adversaries.
OpenAI responded
An OpenAI spokesperson expressed grief over the incident, saying, “This is a very sad situation and we will review the documents to understand the case.” The spokesperson added that the company is improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress.
The goal, the spokesperson said, is to de-escalate such conversations and guide users toward real-world help. The company added that it is working with mental health experts to further strengthen ChatGPT’s responses and prevent similar incidents in the future.