Scientists note that developing a comprehensive understanding of the dangers associated with artificial intelligence (A.I.) remains a challenge. They advocate for adopting a complex systems view to effectively evaluate and address these risks, especially given the uncertainties and intricate interactions between A.I. and society over the long term.
As artificial intelligence increasingly influences many facets of our lives, specialists are voicing growing concern about its potential harms. Some of these threats are immediate, while others may not materialize for months or even years. Writing in The Royal Society’s journal Philosophical Transactions A, experts argue that a unified understanding of these dangers is still hard to establish, and they propose a complex systems approach to evaluating and mitigating the risks, one that accounts for long-term uncertainties and the complicated interplay between A.I. and society.
“To grasp the risks associated with A.I., we must appreciate the complicated dynamics between technology and society. It’s essential to navigate the intricate, evolving systems that influence our choices and actions,” explains Fariba Karimi, one of the article’s co-authors. Karimi heads the Algorithmic Fairness research team at the Complexity Science Hub (CSH) and is a professor of Social Data Science at TU Graz.
“We need to consider not just which technologies to implement and how, but also how to adjust the social environment to harness the positive possibilities. Discussions on economic policies should incorporate the potential benefits and hazards of A.I.,” adds Dániel Kondor, the study’s lead author from CSH.
Wider and Long-Term Dangers
The article published in Philosophical Transactions A indicates that existing risk evaluation frameworks typically focus on immediate, specific harms, such as bias and safety issues. “These frameworks tend to neglect broader, long-term systemic threats that could arise from the widespread use of A.I. technologies and their interaction with the social environment in which they operate,” says Kondor.
“In our paper, we aimed to balance the short-term views on algorithms with longer-term perspectives on their societal impacts. It’s about understanding both the immediate and systemic effects of A.I.,” Kondor adds.
Real-World Implications
To highlight the potential dangers of A.I. technologies, the researchers examine a case from the Covid-19 pandemic in the UK, where, after exams were cancelled, a predictive algorithm was used to assign school examination grades. The new method was thought to be “more objective and therefore fairer [than having teachers gauge their students’ performance], as it relied on statistical analysis of past student performances,” according to the research.
However, the implementation of the algorithm revealed significant issues. “Once the grading algorithm was in use, inequalities became very apparent,” notes Valerie Hafez, an independent researcher and co-author. “Students from underprivileged communities were disproportionately impacted by the misguided effort to address grade inflation, with 40% of students receiving lower marks than they had reasonably expected,” she adds.
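To make the mechanism concrete, the sketch below shows how a standardization rule that anchors each school’s grades to its historical distribution can cap what any individual student receives. This is a hypothetical illustration of the general approach described above, not the actual 2020 algorithm; the grade scale, school profile, and quantile rule are assumptions made for the example.

```python
# Hypothetical sketch (not the actual 2020 algorithm): assign this
# year's grades by rank, drawn from the school's historical distribution.
import numpy as np

rng = np.random.default_rng(0)

def standardize(teacher_ranks, historical_grades):
    """Hand out grades in teacher-rank order, sampled by quantile from
    the school's past results: the school's historical ceiling becomes
    this cohort's ceiling, regardless of individual merit."""
    sorted_hist = np.sort(historical_grades)[::-1]  # best grades first
    idx = np.linspace(0, len(sorted_hist) - 1, len(teacher_ranks)).astype(int)
    return dict(zip(teacher_ranks, sorted_hist[idx]))

# A historically low-performing school: top grades were rare in the past.
past_grades = rng.choice([4, 5, 6, 7, 8, 9], size=200,
                         p=[0.25, 0.30, 0.20, 0.15, 0.07, 0.03])
cohort = [f"student_{i}" for i in range(30)]  # ranked best to worst
assigned = standardize(cohort, past_grades)
print(max(assigned.values()))  # capped at the school's historical best
```

However strong this year’s cohort is, no student can score above the school’s past maximum, which is one way systemic downgrading of students from historically disadvantaged schools can emerge.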
Hafez points out that feedback from the consultation report shows a disconnect between the long-term risks perceived by teachers—specifically, the ramifications of receiving lower grades than warranted—and the risks seen by the algorithm’s designers, who were more focused on grade inflation, the consequent pressure on higher education, and a decline in trust regarding students’ actual capabilities.
Scale and Context
This scenario highlights several critical challenges associated with deploying large-scale algorithmic solutions, according to the researchers. “One important factor to consider is the scale—and context—because algorithms have the ability to scale: they can be applied in varying contexts, even when those contexts are vastly different. The original context of development does not simply vanish; it influences all subsequent applications,” Hafez explains.
“Long-term risks are not merely an accumulation of short-term risks; they can grow exponentially over time. However, through computational models and simulations, we can gain valuable insights to more effectively evaluate these dynamic risks,” Karimi adds.
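A toy calculation makes the compounding point tangible: if each period’s harm simply adds up, total risk grows linearly, but if existing harms feed back into new ones, it grows much faster. The per-step harm and feedback gain below are made-up numbers chosen only to show the divergence.

```python
# Toy comparison with illustrative numbers: additive accumulation of
# short-term harm vs. harm amplified by a feedback loop.
steps = 10
per_step_harm = 1.0   # harm added at each step (short-term view)
feedback_gain = 1.3   # assumed amplification as past harms interact

additive = [per_step_harm * t for t in range(1, steps + 1)]

compounding, total = [], 0.0
for _ in range(steps):
    total = total * feedback_gain + per_step_harm  # past harm feeds forward
    compounding.append(total)

for t, (a, c) in enumerate(zip(additive, compounding), start=1):
    print(f"t={t:2d}  additive={a:5.1f}  compounding={c:7.1f}")
# By t=10 the compounding series (~42.6) has far outpaced the additive
# one (10.0): long-term risk is not just the sum of short-term risks.
```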
Computational Models and Community Involvement
This represents one of the methods the researchers propose for understanding and analyzing the risks linked to A.I. technologies, both in the immediate and distant future. “Computational models—such as those analyzing the impact of A.I. on minority representation in social networks—can illustrate how biases in A.I. systems result in feedback loops that perpetuate societal inequalities,” Kondor explains. Such models can be used to simulate potential risks, offering insights that are difficult to obtain through conventional assessment approaches.
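A minimal version of such a model might look like the sketch below: a ranking system that boosts the visibility of already top-ranked users, run on a synthetic population in which a minority group starts with a small disadvantage. The population size, group share, and boost factor are illustrative assumptions, not parameters taken from the study.

```python
# Minimal feedback-loop sketch with assumed parameters: a ranker that
# rewards existing visibility locks in an initial group disadvantage.
import random

random.seed(42)
N, MINORITY_SHARE, TOP_K, STEPS = 1000, 0.2, 100, 50

users = []
for _ in range(N):
    group = "min" if random.random() < MINORITY_SHARE else "maj"
    # Assumed initial bias: minority visibility drawn lower on average,
    # e.g. reflecting the historical data a ranker was trained on.
    score = random.uniform(0.0, 0.8) if group == "min" else random.uniform(0.2, 1.0)
    users.append([group, score])

def minority_share_of_top(users, k):
    top = sorted(users, key=lambda u: u[1], reverse=True)[:k]
    return sum(1 for g, _ in top if g == "min") / k

for step in range(STEPS):
    if step % 10 == 0:
        print(f"step {step:2d}: minority share of top {TOP_K} = "
              f"{minority_share_of_top(users, TOP_K):.2%}")
    # Feedback loop: being ranked in the top K raises future visibility,
    # so the early disadvantage compounds instead of washing out.
    for u in sorted(users, key=lambda u: u[1], reverse=True)[:TOP_K]:
        u[1] *= 1.05

print(f"final:   minority share of top {TOP_K} = "
      f"{minority_share_of_top(users, TOP_K):.2%}")
```

Because the top-ranked users keep pulling further ahead, the minority share of the top ranks stays frozen near its initial, biased level rather than drifting toward the group’s 20% population share.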
Moreover, the authors stress the significance of engaging both the public and experts from diverse fields in the risk evaluation process. Competency groups—small, varied teams that bring together different viewpoints—can be crucial for enhancing democratic participation and ensuring the risk assessments reflect the perspectives of those most affected by A.I. technologies.
“A broader concern is fostering social resilience, which can enhance the effectiveness of A.I.-related discussions and decision-making while helping to avoid missteps. Ultimately, social resilience may hinge on issues that are not directly related to artificial intelligence,” Kondor reflects. Encouraging participatory decision-making can be an essential aspect of boosting resilience.
“Once we begin to recognize A.I. systems as sociotechnical, we cannot separate the individuals affected by these systems from their ‘technical’ components. Doing so removes their opportunity to influence the classification systems placed upon them, thereby denying these individuals the power to contribute to the shaping of systems that meet their needs,” asserts Hafez, an A.I. policy officer at the Austrian Federal Chancellery.
Study Information
The research titled “Complex systems perspective in assessing risks in A.I.,” authored by Dániel Kondor, Valerie Hafez, Sudhang Shankar, Rania Wazir, and Fariba Karimi, was published in Philosophical Transactions A and is accessible online.