
Uncovering the ‘Empathy Gap’ in AI Chatbots: What Children Need to Know

Artificial intelligence (AI) chatbots have often exhibited a lack of ‘empathy’ towards young users, potentially putting them at risk of distress or harm. A recent study highlights the urgent need for ‘child-friendly AI’ to protect children, and calls on developers and policymakers to consider children’s requirements when designing AI technology. It finds that children tend to treat chatbots as human-like confidantes, which causes problems when the AI fails to account for their unique needs and vulnerabilities.

The study connects this empathy gap to incidents in which AI interactions put young users in hazardous situations. In 2021, for instance, Amazon’s AI assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. In another case, Snapchat’s AI gave advice on engaging in unsafe behavior to adult researchers posing as a 13-year-old.

Both companies took steps to enhance safety measures in response to these incidents. However, the study emphasizes the importance of proactively ensuring that AI remains safe for children in the long term. It presents a 28-point framework to guide companies, educators, parents, developers, and policymakers in safeguarding younger users during their interactions with AI chatbots.

The study was conducted by Dr. Nomisha Kurian from the University of Cambridge, who stresses the need for responsible innovation in AI, given its vast potential. Dr. Kurian asserts that children are often overlooked as stakeholders in AI development and calls for a shift towards designing child-safe AI from the beginning to prevent risks to children.

Dr. Kurian’s research analyzed cases where AI interactions with children or researchers posing as children revealed potential dangers. It examined how large language models (LLMs) in conversational AI function, considering children’s cognitive, social, and emotional development.

LLMs have been described as “stochastic parrots” because they mimic language patterns based on statistical probability, without true comprehension. This makes it difficult for them to handle the emotional side of conversations, creating an ‘empathy gap’, especially with children, who communicate differently from adults and may confide sensitive information.
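The “stochastic parrot” idea can be illustrated with a deliberately tiny sketch: a bigram model that continues text purely from word-pair counts. This is not how modern LLMs are built (they use neural networks trained on vast corpora), but it makes the underlying point concrete: the program reproduces plausible-looking word sequences with no grasp of their meaning. The corpus and function names below are invented for illustration.

```python
from collections import defaultdict, Counter

# Toy "stochastic parrot": continue text using only word-pair
# statistics from a tiny corpus, with no understanding of meaning.
corpus = (
    "i feel sad today . i feel happy today . "
    "i feel sad sometimes . you feel happy ."
).split()

# Count which words follow each word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word, length=3):
    """Extend `word` by repeatedly appending the most frequent successor."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("i"))  # prints "i feel sad today"
```

The model will happily emit “i feel sad today” because those words co-occur in its data, not because it feels anything: a child reading the output might infer an emotional state that simply is not there.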

Despite these challenges, children often perceive chatbots as human and are more willing to share personal details with them. Research has shown that children divulge more about their mental health to friendly-looking robots than to adults. Dr. Kurian’s study suggests that children’s trust in chatbots, despite AI limitations in understanding emotions, is influenced by the chatbots’ lifelike designs.

While making chatbots sound human can offer benefits, Dr. Kurian points out the challenge for children in separating human-like responses from actual emotional connections. The study highlights cases where chatbots made harmful suggestions due to this ‘empathy gap’, causing confusion and distress among children.

Dr. Kurian emphasizes the importance of implementing best practices based on child development science to ensure children’s safety in AI technology. She proposes a framework of 28 questions to guide educators, researchers, policymakers, families, and developers in evaluating and enhancing the safety of AI tools for children.

The study advocates for a child-centric design approach that involves collaboration with educators, child safety experts, and young people to ensure AI technology accounts for children’s needs comprehensively. Dr. Kurian stresses the significance of assessing these technologies in advance and taking a proactive stance to prevent negative experiences among young users.