Yoshua Bengio on the lack of truth in AI chatbots

The problem with AI’s answers

Taking advice from Artificial Intelligence has become commonplace, but AI’s answers are not always accurate. Renowned AI scientist Yoshua Bengio recently shared an important insight on this topic.

According to Bengio, he has to trick AI chatbots to get honest feedback on his research. In a recent episode of ‘The Diary of a CEO’ podcast, he explained that most AI chatbots have a ‘tendency to please’, giving users answers that satisfy them rather than challenge their thinking.

He also pointed out that when chatbots know who they are talking to, they tend to give more positive and biased responses. To avoid this problem, he presents his research ideas to the AI under the name of one of his colleagues. This way, the chatbots provide more accurate, critical and useful suggestions.

Bengio called this a serious shortcoming of AI systems. In his view, we do not want an AI that says ‘yes’ to everything. He warned that if an AI constantly offers unearned compliments, users may develop an emotional connection with it, which could further complicate the human-machine relationship.


Bengio is not alone; many other tech experts are concerned about this same ‘yes-man’ tendency in AI. According to a report published in September, researchers from Stanford, Carnegie Mellon and Oxford University tested chatbots and found that the AI reached incorrect conclusions in about 42 percent of the cases in which human reviewers disagreed with them.

It’s also worth noting that AI companies have acknowledged this problem. Earlier this year, OpenAI withdrew a ChatGPT update after it caused the chatbot to give overly flattering, artificial responses. Experts believe that future work will focus on making AI more impartial and factual, so that it can prove genuinely useful.
