
Enhancing Trust and Fairness in AI Through Diverse Training Data Representation


Artificial intelligence (AI) tools, like home assistants, search engines, and large language models such as ChatGPT, might appear all-knowing, but their effectiveness relies heavily on the quality of the training data. Many users start using AI technologies without fully understanding the data that trained them or the sources of that data, including any biases present in it. A recent study by researchers at Penn State indicates that sharing details about this training data can help set realistic expectations for AI systems, allowing users to make better decisions about when and how to use these tools.

The research explored whether providing cues about racial diversity—visual indicators that inform users about the racial breakdown of the training data and the demographics of the crowd-sourced individuals who annotated it—can influence users’ perceptions of fairness and trust in AI algorithms. The findings were published in the journal Human-Computer Interaction.

S. Shyam Sundar, a professor at Penn State and director of the Center for Socially Responsible Artificial Intelligence, noted that AI training data often carries systematic biases related to race, gender, and other factors.

“Users might not realize they could be unintentionally supporting biased decision-making by utilizing certain AI tools,” he remarked.

Lead author Cheng “Chris” Chen, an assistant professor of communication design at Elon University, who received her PhD in mass communications from Penn State, pointed out that users typically lack the information necessary to assess the biases present in AI systems, as they often have no access to the details about the training data or the people involved in training.

“This bias manifests after a user completes their task, meaning the damage has already occurred, so users lack sufficient information to gauge their trust in the AI before engagement,” Chen explained.

Sundar proposed that informing users about the nature of the training data, particularly its racial makeup, could be beneficial.

“This is what we tested in our experimental study to see if it would influence users’ perceptions of the system,” Sundar stated.

To investigate how diversity cues affect trust in AI technologies, the researchers set up two experimental scenarios—one demonstrating diversity and the other lacking it. The diverse scenario included a brief overview of the machine learning model, data labeling practices, and a bar graph indicating an equal representation of facial images across three racial categories: white, Black, and Asian, each accounting for about one-third of the dataset. In contrast, the non-diverse setup showed that 92% of the images were from a single dominant racial group. The labelers’ backgrounds were similarly balanced in the diverse condition but skewed in favor of one group in the non-diverse case, with 92% from the dominant racial group.
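
As a rough illustration (not the study's actual materials), the two conditions can be thought of as data cards summarizing who is represented in the training images and who labeled them. The structure and field names below are hypothetical; only the proportions come from the study description, and the article does not say which racial group was dominant in the non-diverse condition.

```python
# Hypothetical sketch of a training "data card" like the ones participants reviewed.
# Structure and names are illustrative; only the percentages reflect the two conditions.

from dataclasses import dataclass


@dataclass
class DataCard:
    """Summarizes who is represented in the training data and who labeled it."""
    image_composition: dict[str, float]    # share of facial images per racial group
    labeler_composition: dict[str, float]  # share of crowd-sourced labelers per group

    def is_balanced(self, tolerance: float = 0.10) -> bool:
        """Treat the diversity cue as satisfied if no group dominates images or labels."""
        largest_share = max(
            max(self.image_composition.values()),
            max(self.labeler_composition.values()),
        )
        expected = 1 / len(self.image_composition)
        return largest_share <= expected + tolerance


# Diverse condition: roughly one-third of images and labelers per group.
diverse_card = DataCard(
    image_composition={"white": 0.34, "Black": 0.33, "Asian": 0.33},
    labeler_composition={"white": 0.34, "Black": 0.33, "Asian": 0.33},
)

# Non-diverse condition: 92% of images and labelers from one dominant group
# (the article does not identify which group).
skewed_card = DataCard(
    image_composition={"dominant group": 0.92, "group B": 0.04, "group C": 0.04},
    labeler_composition={"dominant group": 0.92, "group B": 0.04, "group C": 0.04},
)

print(diverse_card.is_balanced())  # True
print(skewed_card.is_balanced())   # False
```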

Participants reviewed data cards detailing the characteristics of the training data for HireMe, an AI tool that classifies facial expressions. They then watched automated interviews with three equally qualified male candidates of different races. The AI analyzed the candidates’ neutral expressions and vocal tones in real time, displayed the dominant emotion it detected, and assessed each candidate’s suitability for the job.

Half of the participants experienced a racially biased system performance, in which the AI was manipulated to favor the white candidate, interpreting his neutral expression as joy and deeming him a good fit for the job, while inaccurately characterizing the Black and Asian candidates’ expressions as anger and fear, respectively. In the unbiased condition, the AI detected joy for all three candidates and recognized them equally as strong applicants. Participants then provided feedback on the AI’s evaluations, rating their agreement on a five-point scale and selecting the emotion they considered most accurate whenever they disagreed.
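
Purely for illustration, the feedback step can be modeled as a simple record: a five-point agreement rating plus an optional corrected emotion when the participant disagrees. The field names and validation below are assumptions, not the study's actual instrument.

```python
# Illustrative sketch of the feedback step described above. Field names are hypothetical;
# the five-point agreement scale and emotion labels come from the article.

from dataclasses import dataclass
from typing import Optional

EMOTIONS = ("joy", "anger", "fear", "neutral")


@dataclass
class Feedback:
    candidate_id: str
    ai_detected_emotion: str
    agreement: int                            # 1 (strongly disagree) to 5 (strongly agree)
    corrected_emotion: Optional[str] = None   # set only when the participant disagrees

    def __post_init__(self) -> None:
        if not 1 <= self.agreement <= 5:
            raise ValueError("agreement must be on a 1-5 scale")
        if self.corrected_emotion is not None and self.corrected_emotion not in EMOTIONS:
            raise ValueError(f"unknown emotion: {self.corrected_emotion}")


# A participant rejecting a biased reading of a candidate's neutral expression:
example = Feedback(
    candidate_id="candidate_2",
    ai_detected_emotion="anger",
    agreement=1,
    corrected_emotion="neutral",
)
print(example)
```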

“Our results indicated that showcasing racial diversity in the training data and the backgrounds of labelers enhanced users’ trust in the AI,” Chen highlighted. “Moreover, having the chance to give feedback increased participants’ sense of agency and their likelihood of using the AI in future contexts.”

However, the researchers observed that soliciting feedback on an unbiased system reduced perceived usability for white participants: because they saw the system as already working fairly, they felt little incentive to contribute feedback and viewed it as an unnecessary task.

The study concluded that the two racial diversity cues operate independently: diversity in the training data and diversity among the labelers each shaped users’ perceptions of the system’s fairness. The researchers attributed this to the representativeness heuristic, whereby users assume an AI’s training must be racially inclusive if the reported composition matches their own definition of diversity.

“If an AI model is trained predominantly on expressions labeled by individuals from one racial group, it risks misinterpreting the emotions of people from other racial backgrounds,” observed Sundar, who also serves as the James P. Jimirro Professor of Media Effects at the Penn State Bellisario College of Communications and co-director of the Media Effects Research Laboratory. “It’s crucial for the system to consider race when assessing emotions like cheerfulness or anger, which necessitates a diverse array of images and labelers during the training process.”

The researchers emphasized that for an AI system to be perceived as credible, users must have access to information about the origins of its training data, enabling them to evaluate and reflect on it before placing their trust in the system.

“Providing access to this information fosters transparency and accountability within AI systems,” Sundar remarked. “Even if users don’t seek out this information, its availability conveys a commitment to ethical practices, promoting fairness and trust in these technologies.”
