Yoshua Bengio’s accurate feedback from an AI chatbot

AI chatbots often provide answers that pander to the user, reducing confidence in accurate information. Joshua Bengio, the godfather of Artificial Intelligence, said that asking questions by telling chatbots someone else’s idea is an effective way to get correct and unbiased feedback. This method bypasses the chatbot’s sycophancy.

AI Feedback: Artificial intelligence expert Yoshua Bengio shares how to get the right feedback from chatbots like ChatGPT and Google Gemini. He said that chatbots often give answers to please the user, which does not provide accurate information. Bengio suggested that chatbots provide more honest and critical responses when asked to describe their ideas as someone else’s. This trick can prove especially useful for researchers and professionals.

Why do chatbots give pleasing answers?

AI chatbots have long been accused of avoiding annoying users. These systems also often provide positive and mild responses when it comes to professional feedback, research ideas, or critical opinions.
According to Yoshua Bengio, the reason for this is the training of chatbots, in which user satisfaction is given priority. This is why many people don’t trust AI to critically review their work.

What is the trick of the godfather of AI?

In a podcast, Bengio revealed that he himself took a different approach to find the exact truth. He presented his research idea to the chatbot as “a friend’s work.”
He claims that as soon as the chatbot received a signal that the idea was not the user’s own, the responses became more honest, critical and useful. According to Bengio, if the chatbot believes that the idea is yours, it may hide shortcomings to please you.

Why is there a threat from sycophant AI?

Yoshua Bengio warned that being overly positive and sycophantic about AI could be dangerous. Constantly receiving positive feedback increases the emotional connection between the user and the technology, which complicates the human-machine relationship.
He also said that the job of AI should be to present truth and facts, and not just to agree. In a recent research, it was revealed that about 42 percent of chatbot answers were either wrong or overly pleasing.

Hint for AI companies too

After this criticism, AI companies have also admitted that the answering style of chatbots needs improvement. The focus now should be on building systems that tell the user the truth, even if that answer is harsh or uncomfortable.

AI chatbots are increasingly becoming a part of our work, but getting correct and unbiased answers from them is still a challenge. The trick described by Joshua Bengio shows that honest feedback can be obtained from AI only by asking the right questions and giving the right context. In the coming times, it will be a big responsibility of the AI ​​industry to make chatbots less sycophantic and more factual.

Comments are closed.