A recent study conducted by researchers at UCL reveals that artificial intelligence (AI) systems tend to inherit and amplify human biases, leading users of these AIs to develop stronger biases of their own.
According to the research published in Nature Human Behaviour, human and AI biases can create a feedback loop, in which even minor initial biases increase the risk of human error.
The researchers highlighted that AI bias can have significant real-world impacts, noting that individuals who interacted with biased AI systems were more inclined to undervalue the performance of women and to overrate the likelihood of white men securing high-status jobs.
Professor Tali Sharot, a co-lead author from UCL Psychology & Language Sciences, the Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and MIT, stated, “Humans are inherently biased, so when we train AI on data created by people, the AI algorithms pick up on the human biases present in that data. Consequently, AI tends to exploit and magnify these biases to improve its predictive accuracy.”
“We found that individuals who engage with biased AI systems can become even more biased themselves, triggering a potential snowball effect. Small biases in the original datasets can be amplified by the AI, further intensifying the biases of the user.”
The researchers undertook a series of experiments involving over 1,200 participants who performed various tasks while interacting with AI systems.
To set the stage for one of the experiments, the researchers trained an AI algorithm using a dataset of responses from participants. They asked individuals to evaluate whether a group of faces in a photograph appeared happy or sad. The results indicated a slight bias where participants tended to classify faces as sad more frequently. The AI absorbed this bias and exaggerated it, leading to an even stronger inclination to judge faces as sad.
In a subsequent task, a different group of participants was informed about the AI’s judgments for each photo. After spending time interacting with this AI system, this group internalized the AI’s bias and became even more likely to describe faces as sad than they were before the interaction. This illustrates how the AI adopted a bias from a human-generated dataset and then amplified the existing biases of another group.
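The feedback loop described above can be illustrated with a deliberately simplified simulation. This is a hypothetical sketch, not the researchers' actual method: the bias rates, the majority-vote "classifier", and the influence weight are all assumed values chosen only to show how a small initial bias can be amplified by an accuracy-maximizing system and then passed back to human judges.

```python
import random

random.seed(0)

# Assumed slight initial human bias: ambiguous faces are labeled
# "sad" 53% of the time instead of 50%.
HUMAN_SAD_RATE = 0.53

# Simulated human-generated training labels for ambiguous faces.
human_labels = ["sad" if random.random() < HUMAN_SAD_RATE else "happy"
                for _ in range(10_000)]

# A classifier that maximizes accuracy on ambiguous items learns to
# always predict the majority label, pushing the 53% bias toward 100%.
majority = max(set(human_labels), key=human_labels.count)
ai_sad_rate = 1.0 if majority == "sad" else 0.0

# A toy model of social influence: humans who see the AI's judgments
# shift their own rate toward it (the 0.3 weight is an assumption).
influence = 0.3
new_human_rate = (1 - influence) * HUMAN_SAD_RATE + influence * ai_sad_rate

print(f"initial human sad rate:          {HUMAN_SAD_RATE:.2f}")
print(f"AI sad rate after training:      {ai_sad_rate:.2f}")
print(f"human sad rate after AI contact: {new_human_rate:.2f}")
```

Under these toy assumptions, the AI ends up more biased than its training data, and the humans who interact with it end up more biased than they started, mirroring the snowball effect the authors describe.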
Similar patterns emerged in experiments using different tasks, such as evaluating the movement of dots on a screen or judging another person's performance on a task. In particular, the findings showed that people were more prone to overestimate men's performance after interacting with a biased AI system, which had been intentionally designed with a gender bias to reflect the biases typical of many existing AIs. Notably, the participants were largely unaware of the extent to which the AI influenced their judgments.
When individuals were led to believe they were interacting with another person, when in fact they were engaging with an AI, they internalized its biases to a lesser degree. The researchers suggest this may be because people expect AI to be more accurate than humans at certain tasks.
Additionally, the researchers experimented with a popular generative AI system, Stable Diffusion. In one trial, they prompted the AI to produce images of financial managers, which resulted in biased outcomes with an overrepresentation of white men. They then asked participants to view a series of headshots and identify which individual was most likely to be a financial manager, both before and after viewing the AI-generated images. The findings indicated that after seeing the images created by Stable Diffusion, participants were even more likely to select a white man as the most probable financial manager than before.
Dr. Moshe Glickman, another co-lead author from UCL Psychology & Language Sciences and the Max Planck UCL Centre, remarked, “Biased individuals not only contribute to the creation of biased AIs but also, biased AI systems can shape people’s beliefs, causing users of these AI tools to become increasingly biased in areas ranging from social judgments to basic perceptions.”
“Importantly, we also discovered that engaging with accurate AIs can enhance people’s judgments, so it’s crucial to refine AI systems to be as unbiased and accurate as possible.”
Professor Sharot added, “Algorithm developers hold significant responsibility in designing AI systems, as the influence of AI biases could have profound implications in our increasingly AI-driven world.”