Unpacking the Challenge: Why Countering Election Misinformation Falls Flat

A new computational model evaluates the factors that determine whether efforts to debunk false claims will persuade people to reconsider their beliefs about the legitimacy of an election.

When there is doubt surrounding election results, individuals who question the outcome can often be influenced by authoritative figures who advocate for either side. These authorities can include independent monitors, political leaders, or media outlets. Nonetheless, these debunking endeavors do not always achieve their intended goal; in some instances, they may even cause individuals to hold on more firmly to their initial beliefs.

Researchers at MIT and the University of California at Berkeley, comprising both neuroscientists and political scientists, have created a computational model to analyze the determinants of whether debunking attempts will lead people to alter their views on the legitimacy of an election. Their research indicates that, although debunking often falls short, it can succeed under certain circumstances.

For example, the model indicated that debunking tends to work better when individuals are less confident in their initial beliefs and when they view the authority as impartial or genuinely motivated by a quest for accuracy. It is especially effective when an authority endorses a result that contradicts a bias it is perceived to hold, as when Fox News affirmed Joseph R. Biden’s victory in Arizona during the 2020 U.S. presidential election.

“When people observe debunking, they perceive it as a human action and interpret it similarly to their understanding of human behavior,” explains Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and senior author of the study. “We adopted a straightforward, general model of how individuals interpret others’ actions, which is sufficient to analyze this intricate phenomenon.”

The outcomes of this research could play an important role as the U.S. readies itself for the upcoming presidential election scheduled for November 5, providing insights into the scenarios most likely to lead to acceptance of the election results.

Setayesh Radkani, a graduate student at MIT, is the lead author of the study, which is published in a special election-focused edition of PNAS Nexus. Marika Landau-Wells, who earned her PhD at MIT and now serves as an assistant professor of political science at UC Berkeley, is also a co-author of the paper.

Understanding motivation

In their investigations into election debunking, the MIT team employed an innovative perspective, extending Saxe’s extensive studies on “theory of mind”—how individuals consider the thoughts and motivations of others.

As part of her PhD research, Radkani has been crafting a computational model to capture the cognitive processes that occur when individuals witness others receiving punishment from an authority. Each individual’s interpretation of punitive actions can vary based on their pre-existing beliefs about the situation and the authority involved. Some might view the authority as justly punishing wrongdoing, while others might see it as overstepping boundaries to impose an unfair penalty.

Last year, after attending an MIT workshop on societal polarization, Saxe and Radkani conceived the idea of applying their model to examine how individuals respond to authorities attempting to influence their political beliefs. They brought Landau-Wells on board to assist with this initiative, and she proposed modifying the model for examining belief debunking concerning the legitimacy of election results.

The computational model designed by Radkani employs Bayesian inference, enabling it to continuously refine its predictions about people’s beliefs as they receive new information. This approach treats debunking as an action undertaken by someone for their own reasons: observers interpret the motives behind the authority’s claims, and that interpretation influences whether or not they alter their beliefs about the election outcome.
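The study’s actual model is more elaborate, but a minimal sketch of this idea is easy to write down: an observer who jointly infers the election outcome and the authority’s motive, updating both by Bayes’ rule each time the authority asserts that the election was legitimate. Everything below (the function name, the likelihood model, and all parameter values) is an illustrative assumption, not the authors’ code.

```python
import numpy as np

def observe_debunking(p_legit, p_accurate, bias_rate=0.9, noise=0.05):
    """One Bayesian update after the authority asserts 'the election
    was legitimate'. Returns the observer's updated pair
    (P(legitimate), P(authority is accuracy-motivated)).

    Likelihood model (illustrative): an accuracy-motivated authority
    reports the truth up to `noise`; a biased authority asserts
    legitimacy with probability `bias_rate` regardless of the truth.
    """
    # Joint prior over (election state, authority motive), assumed
    # independent: rows = legitimate/fraudulent, cols = accurate/biased.
    prior = np.array([
        [p_legit * p_accurate,       p_legit * (1 - p_accurate)],
        [(1 - p_legit) * p_accurate, (1 - p_legit) * (1 - p_accurate)],
    ])
    # P(authority says "legitimate" | each joint state).
    likelihood = np.array([
        [1 - noise, bias_rate],  # election really was legitimate
        [noise,     bias_rate],  # election really was fraudulent
    ])
    posterior = prior * likelihood   # Bayes' rule, unnormalized
    posterior /= posterior.sum()
    # Marginalize: row sum gives belief about the election,
    # column sum gives belief about the authority's motive.
    return posterior[0].sum(), posterior[:, 0].sum()
```

In this toy setup, a biased authority’s statement carries no information about the outcome, so a statement that looks bias-consistent shifts the observer’s inference about the motive more than the belief about the election itself, which is one way debunking can fall flat or even backfire.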

Additionally, the model does not presume that any belief is inherently wrong or that any group is acting irrationally.

“We only assume that there are two groups with differing opinions on a subject: one believes the election was fraudulent, and the other does not,” Radkani explains. “Aside from that, these groups are comparable. They share opinions regarding the authority’s motives and their levels of motivation concerning those motives.”

The researchers simulated more than 200 scenarios in which an authority attempted to debunk a belief held by one group about the legitimacy of an election’s outcome.

In every simulation, they varied how certain each group was of its original beliefs and how the groups perceived the authority’s motivations. In some scenarios, the groups viewed the authority as aiming for accuracy; in others, they did not. They also varied how biased each group believed the authority to be, and how strongly the groups held those beliefs.

Fostering consensus

In each scenario, the researchers utilized the model to predict how each group would react to a series of five statements made by an authority attempting to convince them of the election’s legitimacy. Most simulations revealed that beliefs remained polarized, with some scenarios resulting in even further polarization. This polarization could even extend to unrelated topics outside the original context of the election.

However, under specific conditions, debunking efforts proved effective, and beliefs converged toward an agreed-upon outcome, particularly when individuals started with a lower level of certainty regarding their original beliefs.
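Running the hypothetical sketch above for five consecutive statements, mirroring the simulation procedure, illustrates both outcomes; the starting values are invented for illustration:

```python
scenarios = [("moderately uncertain observer", 0.35),
             ("highly certain skeptic", 0.02)]
for label, belief in scenarios:
    trust = 0.8                    # both start out trusting the authority
    for _ in range(5):             # five successive debunking statements
        belief, trust = observe_debunking(belief, trust)
    print(f"{label}: P(legitimate)={belief:.2f}, P(accurate)={trust:.2f}")
# The uncertain observer converges toward accepting the result, while the
# certain skeptic instead concludes the authority is biased; once trust
# collapses, further statements carry almost no weight.
```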

“People who are very certain become resistant to change. Essentially, much of this authoritative debunking ends up being ineffective,” says Landau-Wells. “Yet, many individuals exist in a state of uncertainty. They harbor doubts but lack firm convictions. One takeaway from this paper is that we’re in a place where the model indicates it’s possible to influence people’s beliefs and guide them toward the truth.”

Additionally, belief convergence is fostered when individuals trust that the authority is unbiased and highly driven by a quest for accuracy. It becomes even more convincing when the authority makes a statement that contradicts assumptions about their bias—like Republican governors asserting that elections in their states were fair despite a Democratic victory.
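In the terms of the sketch above, such a statement is one the observer thinks a biased authority would be unlikely to make. Lowering `bias_rate`, so that the observer assumes a biased authority would cry fraud rather than affirm legitimacy, makes a single “legitimate” assertion highly diagnostic; again, the numbers are invented:

```python
# A doubter who assumes the authority, if biased, would rarely affirm
# legitimacy (bias_rate=0.1) updates sharply on one counter-bias statement.
belief, trust = observe_debunking(p_legit=0.2, p_accurate=0.4, bias_rate=0.1)
print(f"P(legitimate)={belief:.2f}, P(accurate)={trust:.2f}")
# Belief jumps from 0.20 to about 0.58, and trust in the authority rises
# rather than falling, unlike the bias-consistent case above.
```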

As the 2024 presidential election approaches, grassroots initiatives have emerged to train nonpartisan election observers who can endorse the legitimacy of the elections. According to the researchers, these types of organizations are well-positioned to influence those who harbor doubts about the election’s validity.

“These initiatives aim to train individuals to be independent, impartial, and devoted to the truth of the election outcome above all else. These are the kinds of entities you want. We hope they succeed in being perceived as unbiased and truthful, as in this realm of uncertainty, their voices can guide people toward an accurate conclusion,” says Landau-Wells.

This research was partially funded by the Patrick J. McGovern Foundation and the Guggenheim Foundation.