Sympathy Swells for AI Bots Facing Online Bullying

In a recent study, participants showed compassion towards AI bots that were excluded from a game.

A study conducted by scientists at Imperial College London showed that humans empathized with, and protected, AI bots that were left out of playtime.

The findings, which emerged from a virtual ball game experiment, underscore the human inclination to treat AI agents as social entities, a tendency the authors say should be taken into account when designing AI bots.

This research appears in the journal Human Behavior and Emerging Technologies.

Jianan Zhou, the lead author from Imperial’s Dyson School of Design Engineering, stated, “This offers a unique perspective on human interaction with AI, with exciting implications for AI design and our understanding of psychology.”

As people increasingly use AI virtual agents for services and social companionship, these results suggest that designers may want to refrain from making these agents too human-like.

Dr. Nejra van Zalk, the senior author and also from Imperial’s Dyson School of Design Engineering, remarked, “A small yet growing body of research presents mixed findings on whether humans view AI virtual agents as social beings. This opens up significant questions about human perceptions and interactions with such agents.”

“Our findings indicate that participants typically treated AI virtual agents as social beings by attempting to include them in the ball game whenever they noticed the AI was being excluded. This reaction mirrors typical human interactions and was observed even when participants knew they were engaging with a virtual agent. Interestingly, older participants demonstrated this effect more strongly.”

Humans Dislike Exclusion — Even for AI

Empathy and a desire to rectify unfairness appear to be inherent traits in most people. Earlier studies that did not involve AI showed that when individuals saw someone ostracized, they often compensated by throwing the ball to the excluded person more frequently, and they came to dislike the excluder while favoring the victim.

To conduct the study, researchers observed how 244 human participants, aged 18 to 62, responded to an AI virtual agent during a game called ‘Cyberball,’ in which players pass a virtual ball to one another on a screen.

In some scenarios, another human player passed the ball fairly to the bot; in others, they blatantly excluded it, passing only to the human participant.

Researchers tracked participants’ passes and surveyed them afterwards to learn whether they favored passing the ball to the bot after it had been treated unfairly, and why.

The results revealed that most participants tended to correct the perceived injustice towards the bot by throwing the ball to it more often. Older individuals were more attuned to the unfairness.
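
To make the paradigm concrete, here is a minimal toy simulation of a Cyberball-style game, not the researchers’ actual software. A scripted co-player either includes the bot or freezes it out, and a simulated participant grows more likely to throw to the bot the more exclusion it has witnessed. Every name and parameter here (the pass policies, the `compensation` rate) is a hypothetical choice made purely to illustrate the compensation effect described above.

```python
import random

def scripted_pass(exclude_bot: bool) -> str:
    """The scripted co-player passes fairly, or never to the bot."""
    return "participant" if exclude_bot else random.choice(["participant", "bot"])

def bot_pass() -> str:
    """The AI agent passes at random to either human."""
    return random.choice(["participant", "other_human"])

def participant_pass(exclusions_seen: int, compensation: float) -> str:
    """Toy participant policy: the more exclusion witnessed, the more
    likely the participant throws to the bot (the compensation effect)."""
    p_bot = min(1.0, 0.5 + compensation * exclusions_seen)
    return "bot" if random.random() < p_bot else "other_human"

def run_game(exclude_bot: bool, throws: int = 30, compensation: float = 0.03) -> float:
    """Play one game; return the share of the participant's passes that go to the bot."""
    exclusions_seen = 0
    to_bot = by_participant = 0
    holder = "other_human"
    for _ in range(throws):
        if holder == "participant":
            target = participant_pass(exclusions_seen, compensation)
            by_participant += 1
            to_bot += target == "bot"
        elif holder == "other_human":
            target = scripted_pass(exclude_bot)
            if exclude_bot:
                exclusions_seen += 1  # the bot was passed over again
        else:
            target = bot_pass()
        holder = target
    return to_bot / max(by_participant, 1)

random.seed(0)
n = 1000
fair = sum(run_game(False) for _ in range(n)) / n
unfair = sum(run_game(True) for _ in range(n)) / n
print(f"share of participant passes to bot (fair co-player):      {fair:.2f}")
print(f"share of participant passes to bot (excluding co-player): {unfair:.2f}")
```

Running the sketch shows the simulated participant’s passes to the bot hovering around chance in the fair condition and climbing above it in the exclusion condition, mirroring the compensatory pattern the study reports.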

Human Awareness

The researchers note that as AI virtual agents become more integrated into collaborative tasks, more frequent interaction with humans could build familiarity, potentially leading users to engage with bots intuitively and perceive them as genuine team members.

This could be beneficial for teamwork but raises concerns when these virtual agents are perceived as substitutes for human friends or used as advisors on emotional or physical health matters.

Jianan said, “By steering clear of designing overly human-like agents, developers can help individuals better differentiate between virtual and real interactions. They can also tailor their designs for various age groups, considering how different human traits influence our perceptions.”

The researchers noted that Cyberball may not accurately reflect real-life interactions, which typically involve spoken or written communication with chatbots or voice assistants. This mismatch might have affected some participants’ expectations and contributed to feelings of awkwardness during the study.

As a result, they now plan to run similar experiments using face-to-face conversations with agents in a range of settings, from the lab to more informal environments, to test how broadly their findings generalize.