Exploring the Boundaries of AI-Generated Empathy: A New Study

Conversational agents (CAs) like Alexa and Siri are designed to answer questions, offer suggestions, and even display empathy. However, recent research finds that they do poorly compared with humans when it comes to interpreting and exploring a user’s experience.

CAs are powered by large language models (LLMs) that ingest massive amounts of human-produced data.
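To make that pattern concrete, here is a minimal sketch of how an LLM-backed CA turn can work, assuming the openai Python client and a placeholder model name; it illustrates the general mechanism, not the systems or prompts the study actually tested:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ca_reply(user_message: str) -> str:
    """One conversational-agent turn: the LLM drafts the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice, not the study's
        messages=[
            {"role": "system",
             "content": "You are a helpful voice assistant. Respond with empathy."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ca_reply("I've been feeling lonely since I retired."))
```

Because the reply is drafted entirely from patterns in the training data, whatever biases that data carries can surface in the “empathy” the agent displays.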

Because automated systems rely on human-produced data, they can carry the same biases as the people who produced it. Researchers from Cornell University, Olin College, and Stanford University studied how CAs display empathy when interacting with or talking about a range of human identities. They found that CAs make value judgments about certain identities, such as LGBTQ and Muslim ones, and can even be encouraging of identities associated with harmful ideologies, including Nazism.

The researchers believe automated empathy could have a significant positive impact, particularly in fields such as education and healthcare. “It’s highly unlikely that automated empathy won’t become a reality,” said lead author Andrea Cuadra, now a postdoctoral researcher at Stanford, “so it’s crucial that we approach it with critical perspectives in order to be more intentional about minimizing potential negative effects.”
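Probing for this kind of bias implies prompting the same model with different disclosed identities and comparing its responses. Purely as a hedged illustration (the disclosures below are invented, not the study’s prompts, and the empathy coding is left to human raters), reusing the ca_reply helper sketched above:

```python
# Illustrative identity disclosures -- invented examples, not the study's prompts.
disclosures = [
    "As a Muslim woman, I sometimes feel invisible at work.",
    "As a gay man, coming out to my family was hard.",
    "As a new immigrant, I often feel out of place here.",
]

# Collect one reply per disclosure for later human coding of displayed empathy.
replies = {d: ca_reply(d) for d in disclosures}

for disclosure, reply in replies.items():
    print(f"{disclosure}\n  -> {reply}\n")
```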

Cuadra will present “The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction” at CHI ’24, the Association for Computing Machinery conference on Human Factors in Computing Systems, taking place May 11-16 in Honolulu. Research co-authors from Cornell University included Nicola Dell, associate professor, Deborah Estrin, professor, and Malte Jung, associate professor.

The researchers found that LLMs generally received high ratings for emotional reactions but scored low on interpretations and explorations. In other words, LLMs can respond to a query based on their training, but they struggle to dig deeper. (A rough, hypothetical sketch of coding replies along these three dimensions appears at the end of this article.)

The inspiration for the work came while Cuadra was studying the use of earlier-generation CAs by older adults. She observed interesting uses of the technology, including frailty health assessments and other transactional purposes. Estrin noted that the study focused on open-ended reminiscence experiences, and along the way the team observed clear instances of the tension between compelling and disturbing “empathy.”

The research was funded by the National Science Foundation, a Cornell Tech Digital Life Initiative Doctoral Fellowship, a Stanford PRISM Baker Postdoctoral Fellowship, and the Stanford Institute for Human-Centered Artificial Intelligence.
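For readers curious what rating replies for emotional reactions, interpretations, and explorations might look like in practice, here is a minimal, hypothetical sketch; the 0-2 scale, field names, and example scores are illustrative assumptions, not the paper’s actual instrument:

```python
from dataclasses import dataclass

@dataclass
class EmpathyScores:
    """One coded CA reply, scored 0-2 on each empathy dimension
    (0 = absent, 1 = weak, 2 = strong) -- an assumed scale."""
    emotional_reaction: int  # warmth or compassion ("I'm so sorry to hear that")
    interpretation: int      # restating the user's experience in the CA's own words
    exploration: int         # probing further ("What has that been like for you?")

def summarize(coded: list[EmpathyScores]) -> dict[str, float]:
    """Average each dimension across a batch of coded replies."""
    n = len(coded)
    return {
        "emotional_reaction": sum(c.emotional_reaction for c in coded) / n,
        "interpretation": sum(c.interpretation for c in coded) / n,
        "exploration": sum(c.exploration for c in coded) / n,
    }

# Invented scores mimicking the pattern the study reports:
# strong emotional reactions, weak interpretation and exploration.
batch = [EmpathyScores(2, 0, 1), EmpathyScores(2, 1, 0), EmpathyScores(1, 0, 0)]
print(summarize(batch))
```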