Global regulators seek permanent working group on AI
Rapid advances in the use of artificial intelligence (AI) in healthcare and medicines development are at risk of outstripping current regulatory frameworks, according to a new report.
The International Coalition of Medicines Regulatory Authorities (ICMRA) has recommended that a new permanent working group be set up to keep tabs on the use of AI and make sure that regulations can accommodate new developments.
Key considerations are the transparency of algorithms used in AIs, and the risks that AI failures could pose to pharmaceutical development and ultimately patient health, says the group in a recently published report.
Along with a group to monitor AI developments across the board in medicines development – from preclinical and clinical development through to pharmacovigilance and clinical use – the ICMRA also says that regulators may need to apply a risk-based approach to assessing and regulating AI.
That will require a sufficient level of understandability and access to algorithms and underlying datasets, which could require changes to legal and regulatory frameworks, and close consultation with ethics committees and AI expert groups.
Sponsors and developers of AIs – including pharmaceutical companies – should set up governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk of a medicinal product.
That should include a committee to understand and manage the implications of higher-risk AI, for example when it can “learn, adapt or evolve autonomously.”
The document also says that regulatory guidelines for AIs should be developed in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.
In pharmacovigilance applications, for example, a balance must be struck between relying on AI to identify safety signals, and making sure there is still sufficient human oversight of signal detection and management, according to the ICMRA.
The new document has been published as other authorities have also been looking at the challenges of adapting to the rapid adoption and evolution of AI.
Earlier this year, the EU laid out proposals for a European approach to regulating AI across the board, including healthcare, in an effort to develop a legal framework for the technology that will address its potential benefits as well as “new risks or negative consequences for individuals or the society.”
The FDA, meanwhile, laid out an action plan in January for the regulation of AI and machine learning-based software, which includes publishing guidance on how changes to algorithms will be managed.