WHO cautions against overestimating AI's impact on health
Artificial intelligence (AI) has the potential to improve the delivery of healthcare, but only if ethics and human rights are put at the heart of its design and use, according to the World Health Organisation (WHO).
Digital technologies and AI are already being deployed to improve the speed and accuracy of diagnosis and screening for diseases, assist with clinical care, support R&D, and guide public health policy, according to a just-published WHO document called Ethics and governance of artificial intelligence for health.
They also promise to empower patients to take greater control of their own health care, and bridge gaps in access to health services in some countries, says the report.
However, "like all technology, AI can be misused and cause harm," according to Dr Tedros Adhanom Ghebreyesus, WHO Director-General, who said the new report will "guide countries on how to maximise the benefits of AI, while minimising its risks and avoiding its pitfalls."
Chief among its concerns? Unethical collection and use of health data, biases that may be built into algorithms, and risks to patient safety, cybersecurity, and the environment.
It is also worried about "techno-optimism" – becoming over-reliant on technological solutions to complex problems, which could exacerbate unequal access to health care technologies associated with "geographies, gender, age or availability of devices."
"The unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control," warns the WHO.
AI systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings, it adds, warning that AI must not be deployed at the expense of core investments and strategies required to achieve universal health coverage.
The report lays out six guiding principles that should be adhered to in order to benefit from AI in health, and avoid the downsides.
Top of the list is protecting human autonomy – in other words, making sure humans remain in control of decisions, and ensuring that proper consent is given and privacy is protected.
AIs must satisfy regulatory requirements for safety, accuracy and efficacy – which means measures of quality control have to be established – and they must be deployed with clear and transparent information in order to allow public debate on the technology.
Inclusiveness and equitable access are key, as is the allocation of responsibility to those who must ensure AIs are used appropriately, and the development of a framework for investigation and redress if things go wrong.
Finally, AIs should be continuously assessed to make sure they respond adequately and appropriately to expectations and requirements, and they should be designed to minimise environmental consequences and increase energy efficiency.
"Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology's design, development, and deployment," according to the WHO.