The Ethical Accountability of Human-like AI: Who’s Responsible for Moral Missteps?

A recent study reveals that people are more likely to blame artificial intelligences (AIs) for moral wrongdoings when they view these AIs as having human-like characteristics. These findings were presented by Minjoo Joo from Sookmyung Women’s University in Seoul, Korea, in the open-access journal PLOS ONE on December 18, 2024.

Previous research has shown that individuals often hold AIs responsible for various ethical violations, such as an autonomous vehicle’s collision with a pedestrian or harmful decisions in medical or military contexts. Additional studies indicate that blame is more often directed towards AIs that are thought to possess awareness, reasoning, and planning abilities. This inclination may arise from people’s tendency to view AIs with human-like minds as capable of experiencing conscious emotions.

Building on this earlier work, Joo posited that AIs perceived to have human-like minds would be assigned more blame for moral violations.

To investigate this, Joo conducted several experiments in which participants encountered real-world scenarios of AIs committing moral offenses, such as the racist auto-tagging of images. Participants were then asked to rate both their perception of the AI’s mind and how much blame they attributed to the AI, its programmer, the company behind it, or the government. In some scenarios, perception of the AI’s mind was manipulated by providing details such as its name, age, height, and hobbies.

The results indicated that participants placed significantly more blame on an AI when they perceived it as having a more human-like mind. When participants distributed blame across the parties in relative terms, greater mind perception also shifted blame away from the associated company. However, when blame for each party was rated independently, the blame assigned to the company did not decrease.

These results highlight how mind perception shapes the assignment of blame for AI transgressions. Joo also raises concerns about the risk of AIs being used as moral scapegoats and calls for further research on how blame is attributed in incidents involving AI.

The author concludes: “Can AIs be held accountable for moral failures? This study indicates that viewing AIs as human-like escalates blame towards them while diminishing responsibility assigned to human stakeholders, which raises significant concerns about the potential misuse of AI as a moral scapegoat.”