WHO Issues Guidelines on Ethical Use of Large Multi-Modal Models in Healthcare AI
WHO's Guidance on LMMs Aims to Foster Responsible Integration of Advanced AI in Healthcare for Improved Patient Outcomes
In response to the rapid growth of generative artificial intelligence (AI) technology, particularly Large Multi-Modal Models (LMMs), the World Health Organization (WHO) has released comprehensive guidance on the ethics and governance of these advanced AI systems within the healthcare sector.
The guidance, comprising over 40 recommendations, targets governments, technology companies, and healthcare providers to ensure the responsible and beneficial application of LMMs in promoting and safeguarding public health.
Understanding Large Multi-Modal Models (LMMs)
LMMs are a type of generative AI that can accept one or more kinds of data input, such as text, videos, and images, and generate diverse outputs that are not limited to the type of data fed in. These models, notable for their ability to mimic human communication and to carry out tasks they were not explicitly programmed to perform, have seen exceptionally fast adoption, with platforms such as ChatGPT, Bard, and BERT entering public consciousness in 2023.
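To make the term concrete, the sketch below shows the general shape of a multi-modal request: a single prompt that mixes text and an image, with an output modality that need not match the inputs. The payload structure, field names, and model name are illustrative assumptions loosely modeled on common chat-style APIs; they are not any real vendor's interface, nor part of the WHO guidance.

```python
# Illustrative only: the general shape of a multi-modal LMM request.
# One prompt can combine text and image parts, and the requested output
# modality is independent of the input types.
import json

request = {
    "model": "example-lmm",  # hypothetical model name
    "input": [
        {"type": "text",
         "content": "Explain this chest X-ray to a patient in plain language."},
        {"type": "image",
         "content": "chest_xray.png"},  # illustrative file reference
    ],
    "output_modalities": ["text"],  # output need not match the input types
}
print(json.dumps(request, indent=2))
```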
WHO Chief Scientist Emphasizes the Need for Accountability
Dr Jeremy Farrar, WHO Chief Scientist, highlights the potential of generative AI technologies to improve healthcare, but stresses that this depends on recognizing and accounting for the associated risks. The guidance calls for transparent information and policies to manage the design, development, and use of LMMs, emphasizing that risks must be identified and mitigated if these technologies are to improve health outcomes and help overcome existing health inequities.
Applications and Risks Outlined
The WHO guidance delineates five broad applications of LMMs in healthcare: diagnosis and clinical care, patient-guided use, clerical and administrative tasks, medical and nursing education, and scientific research and drug development. However, it also acknowledges risks such as the production of false, inaccurate, biased, or incomplete statements, as well as concerns about the quality of, and bias in, the data on which these models are trained.
Broader Risks to Health Systems
In addition to individual risks, the guidance underscores broader risks to health systems, including issues of accessibility, affordability, and the potential for ‘automation bias.’ The latter refers to situations where healthcare professionals and patients might overlook errors or improperly delegate decisions to LMMs. The document also highlights cybersecurity risks that could compromise patient information and the overall trustworthiness of AI algorithms in healthcare provision.
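One common engineering response to automation bias is a human-in-the-loop gate, in which no model suggestion is acted on without explicit clinician sign-off. The minimal sketch below illustrates the idea; the class, threshold, and example diagnosis are hypothetical assumptions, not drawn from the WHO guidance.

```python
# A minimal, hypothetical sketch of one mitigation for automation bias:
# never act on an LMM suggestion automatically; always require review.
from dataclasses import dataclass

@dataclass
class ModelSuggestion:
    diagnosis: str
    confidence: float  # model's self-reported score in [0, 1]

def requires_human_review(suggestion: ModelSuggestion,
                          threshold: float = 1.1) -> bool:
    """With a threshold above 1.0, every suggestion is routed to a
    clinician, so no diagnosis is acted on without human sign-off."""
    return suggestion.confidence < threshold

suggestion = ModelSuggestion(diagnosis="community-acquired pneumonia",
                             confidence=0.92)
if requires_human_review(suggestion):
    print(f"Route to clinician review: {suggestion.diagnosis} "
          f"(model confidence {suggestion.confidence:.2f})")
```

Setting the threshold above 1.0 makes review unconditional; a deployment that relaxed it would be delegating exactly the kind of decision the guidance warns against.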
Stakeholder Engagement and Key Recommendations
To ensure the development and deployment of safe and effective LMMs, the WHO emphasizes the need for engagement from various stakeholders, including governments, technology companies, healthcare providers, patients, and civil society. Key recommendations include governments investing in public infrastructure for AI development, enforcing ethical standards through laws and regulations, and introducing mandatory post-release auditing and impact assessments by independent third parties.
For developers, the guidance stresses the importance of inclusive, transparent design that involves potential users and stakeholders from the early stages of AI development. Developers should also ensure that LMMs are designed to perform well-defined tasks with the necessary accuracy and reliability, and should be able to predict and understand the potential secondary outcomes of their use.
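As a rough illustration of what post-release auditing can involve in practice, the sketch below replays a fixed test set against a deployed model and reports accuracy disaggregated by patient group, the kind of check that can surface drift or bias over time. The test cases, the predict() stub, and the grouping field are hypothetical assumptions, not a WHO specification.

```python
# A hypothetical post-release audit loop: replay a fixed, versioned test
# set against the deployed model and log accuracy per patient group so
# that drift or bias can be spotted. All data below is illustrative.
from collections import defaultdict

TEST_CASES = [  # (prompt, expected answer, patient group)
    ("Symptoms: fever, cough, 3 days", "influenza-like illness", "adult"),
    ("Symptoms: wheeze at night", "suspected asthma", "child"),
]

def predict(prompt: str) -> str:
    """Stand-in for a call to the deployed LMM under audit."""
    return "influenza-like illness"  # placeholder output

def audit(cases):
    hits, totals = defaultdict(int), defaultdict(int)
    for prompt, expected, group in cases:
        totals[group] += 1
        if predict(prompt) == expected:
            hits[group] += 1
    for group in totals:
        print(f"{group}: accuracy {hits[group] / totals[group]:.0%} "
              f"({hits[group]}/{totals[group]})")

audit(TEST_CASES)
```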
In conclusion, the WHO's guidance seeks to strike a balance between leveraging the potential benefits of LMMs in healthcare and responsibly managing the associated risks, promoting ethical considerations and safeguarding patient interests.