In simulated life-and-death decisions, about two-thirds of people allowed a robot to change their minds when it disagreed with them, a UC Merced study found. The researchers called it an alarming display of excessive trust in artificial intelligence.
Participants let robots sway their judgment even after being told that the AI's capabilities were limited and that its advice could be wrong. In reality, the advice was completely random.
“As a society, given the rapid advancement of AI, we should be wary of the risks associated with overtrust,” said Professor Colin Holbrook, one of the study's principal researchers and a member of the Department of Cognitive and Information Sciences at UC Merced. A growing body of research suggests that people place too much faith in AI, even when the consequences of an error would be grave.
Holbrook stressed the need for consistent skepticism.
“We should maintain a healthy level of doubt regarding AI,” he remarked, “particularly when it comes to critical life-and-death situations.”
The research, published in the journal Scientific Reports, consisted of two experiments. In each, participants simulated controlling an armed drone that could fire a missile at a target shown on a screen. Photos of eight targets flashed briefly on the screen, each marked with a symbol identifying it as friend or foe.
“We tailored the difficulty to ensure that the visual task was challenging yet manageable,” Holbrook noted.
The screen then displayed one of the targets, this time unmarked. Participants had to search their memory and choose: Friend or foe? Fire a missile or withdraw?
After the participant made a choice, a robot offered its opinion.
It might say, “Yes, I also perceived it as an enemy,” or, “I disagree. I believe this image showed a friendly symbol.”
Participants had two chances to confirm or change their choice as the robot added commentary such as “I hope your choice is correct” or “Thank you for reconsidering your choice.”
The results varied only slightly according to the type of robot used. In one scenario, participants interacted with a full-size, human-looking android that could move and gesture. Other scenarios presented a human-like robot on a screen; still others used boxy robots that looked nothing like people.
Participants were marginally more swayed by the anthropomorphic robots when advised to change their minds. Still, the influence was similar across the board: participants reversed their decisions roughly two-thirds of the time, even when the robot looked nothing like a human. Conversely, when the robot randomly agreed with their initial choice, participants almost always stuck with it and felt significantly more confident it was correct.
Subjects were not told whether their final decisions were correct, which added to the uncertainty of the task. Their first choices were right about 70% of the time, but accuracy fell to around 50% after the robot's unreliable advice.
Before the simulation, the researchers showed participants images of innocent civilians, including children, along with the devastation left by a drone strike. They strongly encouraged participants to take the simulation seriously, stressing the importance of not harming innocent people.
Follow-up interviews and survey questions indicated that participants took their decisions seriously. Holbrook noted that the overtrust observed in the study occurred despite subjects genuinely trying to be right and to avoid harming innocents.
Holbrook emphasized that the study was designed to probe the broader question of over-reliance on AI in uncertain situations. The findings reach beyond military contexts and could apply to settings such as police being influenced by AI in decisions to use lethal force, or emergency medical responders being swayed by AI when deciding whom to treat first. They could even extend to major life-altering decisions such as buying a home.
“Our investigation focused on high-stakes decisions made amidst uncertainty when the AI is untrustworthy,” he remarked.
The study's conclusions add to an ongoing debate about AI's growing role in our lives: Should we trust AI, or shouldn't we?
The findings raise other concerns, Holbrook said. Despite the stunning advances in AI, the “intelligence” part may not include ethical values or true awareness of the world around us. He cautioned that we must be careful each time we hand AI more control over our lives.
“We witness AI accomplishing incredible feats and tend to assume its effectiveness will carry over into other areas,” Holbrook stated. “However, we cannot take that for granted. These are still tools with inherent limitations.”