New guidelines aim to improve how datasets are used to develop Artificial Intelligence (AI) health technologies, and to minimize the risk of AI bias.
Patients are more likely to benefit from advances in medical artificial intelligence (AI) if these new, internationally recognized recommendations are implemented.
A recent publication in The Lancet Digital Health and NEJM AI introduces a new framework aimed at refining the application of datasets in the creation of AI health technologies and addressing the risk of inherent biases in these systems.
While groundbreaking medical AI technologies hold the potential to improve diagnosis and treatment, research has shown that some AI systems are biased, performing well for some people while failing others. This risks leaving certain groups behind, or even causing them harm, when these technologies are used.
The international initiative, ‘STANDING Together (STANdards for data Diversity, INclusivity and Generalisability),’ has issued recommendations developed through a research collaboration of over 350 experts across 58 nations. These guidelines aim to ensure that medical AI is both safe and effective for all individuals. The recommendations address several factors that can contribute to AI bias, including:
- Promoting the development of medical AI utilizing healthcare datasets that accurately reflect the entire population, particularly marginalized and underserved communities;
- Assisting those who publish healthcare datasets in identifying potential biases or shortcomings within the data;
- Empowering developers of medical AI technologies to evaluate the appropriateness of datasets for their specific applications;
- Establishing protocols for testing AI technologies for bias, that is, whether they perform differently across population groups.
Dr. Xiao Liu, Associate Professor of AI and Digital Health Technologies at the University of Birmingham and the study’s Chief Investigator, stated:
“Data is like a reflection of reality. When it is distorted, it can amplify societal biases. However, merely correcting data to address the issue is akin to wiping a mirror to erase a stain on your shirt.”
“To create meaningful change in health equity, we must focus on correcting the root cause rather than merely changing the reflection.”
The STANDING Together recommendations emphasize the importance of using diverse datasets that reflect the full range of people who will use medical AI. AI systems can perform poorly for populations that are not adequately represented in the underlying data, and minority groups are particularly likely to be underrepresented, leaving them more exposed to the harms of AI bias. The guidelines also set out how to identify those who may be harmed when medical AI systems are used, so that this risk can be reduced.
The STANDING Together initiative is spearheaded by researchers from University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, UK. Their research has included collaboration from over 30 organizations globally, encompassing universities, regulatory bodies from the UK, US, Canada, and Australia, patient advocacy groups, charities, and both small and large health tech companies. Funding for this initiative has been provided by The Health Foundation and the NHS AI Lab, with additional support from the National Institute for Health and Care Research (NIHR), the research partner of the NHS, public health, and social care.
Alongside the recommendations, a commentary authored by STANDING Together’s patient representatives in Nature Medicine stresses the crucial role of public engagement in shaping AI research in healthcare.
Sir Jeremy Farrar, Chief Scientist of the World Health Organization, remarked:
“Creating diverse, accessible, and representative datasets for the responsible advancement and testing of AI is a global necessity. The STANDING Together recommendations represent a significant progression towards achieving equity in health AI.”
Dominic Cushnan, Deputy Director for AI at NHS England, added:
“It’s vital that we maintain transparent and representative datasets to promote the ethical and equitable development and application of AI. The STANDING Together guidelines are exceptionally timely as we harness the promising potential of AI tools, and the NHS AI Lab is fully committed to putting their practices into action to lessen AI bias.”
The recommendations were released today (18th December 2024) and are accessible to the public through The Lancet Digital Health.
The guidelines are expected to be particularly useful to regulatory bodies, health and care policy organizations, funding agencies, ethical review committees, universities, and government departments.