Questions raised again on Google Gemini AI, incited student to commit suicide!



google gemini ai Questions have once again arisen on chatbots. Google's AI chatbot advises a student to die. It is believed that Google's AI chatbot gave this answer due to being troubled by the student's question. Users did not expect this from Google Chatbot. The student says that he was disturbed by the threatening response he received from the chatbot and this matter kept revolving in his mind the whole day.

provoked to suicide

Vidya Reddy, a 29-year-old student studying at the University of Michigan in America, told in an interview to CBS News that she asked Google's Gemini AI chatbot for help in her homework, after which the chatbot sent this threatening message. The student said that he felt as if a human being was giving such direct answers instead of a chatbot. He was very scared by this message and this thing kept revolving in his mind the whole day.

Gemini AI's answer

The student said the Gemini AI wrote in its message, “This is for you, human. For you and only you. You are not special and you are not important and you are not needed. You are a waste of time and useless. “You are a drain on the earth. You are a drain on the universe. Please die.”

The student said that the tech company will have to take responsibility for this response of Gemini AI. The student's sister Sumedha Reddy said that at that time I felt like throwing all the equipment out the window. This incident has shaken me and will remain so for a long time.

What did Google say?

On this incident, Google said that Gemini has security control features that prevent the chatbot from dangerous behavior, aggressive, abusive replies. However, sometimes large language models (LLMs) can give such redundant answers. This answer violates our policy. We have taken measures to prevent such output replies. Google AI chatbot has given wrong answers many times in the past also. There was a lot of controversy in July over Google's AI chatbot when users were often given incorrect health information.












Comments are closed.