Beware: Lie-Detection AI May Trigger Careless Accusations, Warn Researchers

Although individuals usually avoid accusing others of lying due to social norms and politeness, the rise of artificial intelligence (AI) may challenge these conventions. Researchers have found that people are more inclined to accuse others of lying when prompted by an AI. This discovery sheds light on the societal impact of employing AI for lie detection, which could guide policymakers in the adoption of similar technologies.

“Our society maintains well-established norms regarding accusations of lying,” says senior author Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany. “Openly accusing someone of lying requires strong evidence and courage. However, our study reveals that individuals may use AI as a shield to avoid accountability for their accusations.”

Human society has long operated under what researchers call truth-default theory: people assume by default that what they are told is true. This innate trust makes humans poor lie detectors; research indicates they perform no better than chance at spotting lies.

Köbis and his team set out to explore whether the presence of AI alters existing social norms and behaviors around accusations.

In the study, 986 participants each wrote one true and one false description of their plans for the upcoming weekend. The team then used these statements to train an algorithm that distinguished true from false statements with 66% accuracy, noticeably better than the average person.
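The article doesn't detail the algorithm itself, but the training step can be pictured as a standard supervised text-classification pipeline. Below is a minimal sketch along those lines; the TF-IDF features, logistic-regression model, and toy statements are all illustrative assumptions, not the researchers' actual setup.

```python
# Illustrative sketch only: train a classifier on statements labeled
# true (1) or false (0). Model choice and data are assumptions, not
# the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: one true and one false weekend plan per writer.
statements = [
    "I'm driving to my parents' place for a barbecue on Saturday.",
    "I'm flying to Tokyo on Saturday to judge a sushi contest.",
    "On Sunday I'll mostly do laundry and prepare for Monday's meeting.",
    "On Sunday I'm recording an album with my jazz quartet.",
]
labels = [1, 0, 1, 0]  # 1 = true, 0 = false

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scores = cross_val_score(clf, statements, labels, cv=2)
print(f"mean accuracy: {scores.mean():.2f}")  # the study reports 66% at scale
```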

Subsequently, over 2,000 individuals were recruited as judges to assess the truthfulness of the statements. These judges were divided into four groups: “baseline,” “forced,” “blocked,” and “choice.”

The baseline group made decisions without AI assistance, while the forced group received AI predictions before making their judgements. Participants in the blocked and choice groups could request an AI-generated prediction, but only those in the choice group actually received one; requests from the blocked group went unanswered.
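Expressed as branching logic, the four conditions differ only in whether, and under what circumstances, a judge sees the AI's verdict. The sketch below is a hypothetical rendering of the design as described above, not the study's actual code.

```python
# Minimal sketch of the four experimental conditions described above.
# Function name and structure are illustrative assumptions.
def sees_ai_prediction(condition: str, requested: bool) -> bool:
    """Return True if the judge is shown the AI's true/false prediction."""
    if condition == "baseline":
        return False              # never offered a prediction
    if condition == "forced":
        return True               # always shown a prediction
    if condition == "blocked":
        return False              # may request one, but never receives it
    if condition == "choice":
        return requested          # receives one only if requested
    raise ValueError(f"unknown condition: {condition}")

for cond in ("baseline", "forced", "blocked", "choice"):
    print(f"{cond:10s} sees prediction: {sees_ai_prediction(cond, requested=True)}")
```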

The research team found that the baseline group identified true and false statements with 46% accuracy. Despite knowing that half of the statements were false, participants flagged only 19% of them as lies, showing a clear hesitation to accuse others of lying.

In the forced group, where participants received AI predictions regardless of their preference, over a third of statements were flagged as false, a markedly higher rate than in the baseline and blocked groups, which saw no AI predictions.

Participants were far more likely to call a statement a lie when the AI predicted it was false (over 40% did so) than when the AI judged it true (only 13%). Moreover, among participants who received an AI prediction that a statement was false, 84% adopted the verdict and made an accusation.
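Put side by side, the quoted figures show that an AI verdict of “false” more than doubled the baseline accusation rate, while a verdict of “true” suppressed it. A quick comparison using only the percentages reported above:

```python
# Accusation rates quoted in the article; ratios computed for comparison.
rates = {
    "baseline (no AI)":     0.19,
    "AI predicted 'false'": 0.40,  # "over 40%"
    "AI predicted 'true'":  0.13,
}
baseline = rates["baseline (no AI)"]
for label, rate in rates.items():
    print(f"{label:22s} {rate:.0%}  ({rate / baseline:.1f}x baseline)")
```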

“The results suggest that people tend to rely on algorithms and adjust their behaviors accordingly. This raises concerns about rash accusatory behavior based on AI predictions,” Köbis remarks.

Interestingly, participants showed a reluctance to use AI for lie detection. In the blocked and choice groups, only a third requested the AI prediction.

Even though participants were told that the AI outperforms humans at detecting lies, overconfidence in their own abilities may hinder its adoption, Köbis notes. And given AI's propensity for errors and for reinforcing biases, policymakers should exercise caution before employing the technology in high-stakes matters such as asylum decisions at borders.

“AI generates significant hype, leading many to perceive these algorithms as potent and objective. This misconception could result in excessive reliance on AI, even when its performance is subpar,” Köbis cautions.