Study finds AI chatbots feed our own bias back to us


New York, May 14 (IANS/DPA) Artificial Intelligence (AI) chatbots are increasingly inclined to echo the views of people who use them, according to US researchers who found that platforms limit what information they share depending on who is asking.

"Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers," said Ziang Xiao of Johns Hopkins University in Baltimore.

But such assumptions are largely wrong, Xiao and colleagues argue, based on tests in which 272 participants were asked to use standard internet searches or AI chatbots to help them write about US news topics such as health care and student loans.

The "echo chamber" effect is louder when people seek information from a chatbot using large language models (LLMs) than via conventional searches, the team found.

"Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear," Xiao said, ahead of presenting the team’s findings at the Association of Computing Machinery’s CHI conference on Human Factors in Computing Systems.

Separate research, published in a Cell Press journal, meanwhile showed that AI systems can sometimes bluff even card sharks at poker and come out on top in diplomacy simulations.

Those researchers found the bots capable of a form of sycophancy: they were "observed to systematically agree with their conversation partners, regardless of the accuracy of their statements" and "to mirror the user’s stance, even if it means forgoing the presentation of an impartial or balanced viewpoint."