AI is coming and regulators are getting prepared

Digital
AI and digitisation

The UK’s MHRA recently published guidance outlining its approach to regulating the use of AI in drugs and medical devices, as well as how it can use the technology to improve its own regulatory efficiency. Ben Hargreaves looks at the key points raised in the document and how broadly the agency expects to see AI applied.

The pace of development in the AI space has accelerated in recent years. The potential for the technology to optimise basic functions, such as online searches, has seen billions invested into the area in a short time. The level of interest has also seen the technology come under increasing scrutiny, as governments and their regulatory bodies grapple with the challenge of how to regulate the area without stifling innovation.

The pharmaceutical industry is not immune to such challenges. The adoption of, and innovation in, AI has accelerated to the same degree as in other industries, leaving medicines regulators needing to adapt to a raft of new medicines and medical devices that use, or are developed with, the technology. One of the key advantages of AI is that it can ‘learn’ and adapt from the data it is fed, becoming more effective at its task as it processes information. However, for regulatory agencies built around fixed products, such as traditional small molecule medicines, this represents a new challenge.

Though the idea of an evolving product seems more applicable to digital therapeutics and to medical devices built around digital technology, it is also becoming true of advanced therapeutics. An example is MSD and Moderna’s potential cancer vaccine, which uses AI to target the antigens specific to each patient. With such products racing towards commercialisation, regulators are under pressure to adapt their guidelines to this new class of AI-powered treatments. A recent example emerged in the UK, after the government published a white paper outlining its guidance on the future of AI in the country and on its regulation.

Government interest

The white paper positions the UK as a leader in the development of AI, stating that the country ranks third in the world for AI R&D and is home to a third of Europe’s AI companies, twice as many as any other European country. Within the paper, this is placed in the context of the UK’s ambition to become an ‘AI superpower’, with the guidance forming part of an effort to ‘create the right environment’ for this through regulation.

The paper outlines five principles to guide and inform the use of AI across all sectors of the UK economy:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The UK government called on regulators to implement these principles within their frameworks, breaking each down into multiple objectives, including considering good cybersecurity practices and requiring information on how data is used.

Taking AI action

The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) published its response on implementing the AI white paper principles. In it, the agency noted three ways in which it will need to engage with the rise of AI technology. The first is its direct capacity to regulate AI used for a medical purpose, such as in a medical device that must meet existing device regulations. The second is the MHRA’s acknowledgement that, as a public service, it can also improve its own services by adopting the technology. Lastly, the agency notes that AI will feature in the general operations of the companies it regulates, and that it therefore needs a sufficient understanding of the technology to perform its role adequately.

In terms of the actions the agency will take to adapt to a new wave of AI-influenced products, the MHRA stated that it would launch a pilot programme in spring 2024. The project, called AI-Airlock, was announced last year and is described as a regulatory sandbox for AI as a medical device (AIaMD) products. Its objective is to identify the regulatory challenges posed by AIaMD and to work with stakeholders, including regulatory, governance and assurance organisations in healthcare, to mitigate any risks that are uncovered.

Additional actions the MHRA plans to undertake include the release of ‘clear guidance’ on cyber security, due for publication by spring 2025, and detailed guidance on applying human factors to AIaMD products, expected in the same timeframe.

Internal applications

The MHRA also elaborated on its plans for the use of AI to improve its own internal operations. One way it noted for AI to streamline its activities was on the initial assessment of the documents that are submitted as part of the application for marketing authorisation or approval. The agency stated that it is exploring the use of supervised machine learning to assist the human assessors of these documents. In addition, the agency mentioned using generative AI and large language models to enhance and extend the creation of actionable insights from real-world data, as well as using such methodologies within its vigilance systems.

“AI offers us the opportunity to improve the efficiency of the services we provide across all our regulatory functions from regulatory science, through enabling safe access for medicines and medical devices, to post-market surveillance and enforcement,” Laura Squire, chief quality and access officer at the MHRA, said. She continued: “While taking this opportunity, we must ensure there is risk proportionate regulation of AIaMD which takes into account the risks of these products without stifling the potential they have to transform healthcare. Increasingly, we expect AI to feature in how those we regulate undertake their activities and generate evidence and we therefore need to ensure we understand the impact of that in order to continue to regulate effectively.”

On its last stated objective, understanding the technology’s use in industry, the agency noted that since 2020 it has engaged with the pharmaceutical industry on its use of AI for vigilance purposes. The document also raised potential challenges, including the increased pace at which new medicines are developed using AI, changes to clinical trial design, and AI’s role as an enabler of personalised medicine.

The agency concluded that its membership of the International Medical Device Regulators Forum (IMDRF), and in particular its relationship with the US FDA and Health Canada, should enable it to keep on top of developments in the field. Both the FDA and the EMA have recently published papers outlining their current thinking on AI’s role in drug and medical device development. The pace of change is placing pressure on regulators to prepare their frameworks, with the expectation that AI will find a place at all stages of drug development and delivery – from finding new assets, to creating personalised therapies, to the very basics of readying a dossier for regulators.