Regulating the invisible hand: Inside the FDA's AI playbook
Last month, the United States Food and Drug Administration (FDA) released draft guidance on the use of AI in regulatory decision-making in the pharma industry. But what does this guidance entail, and how likely is it to have a major effect on drugmakers?
Artificial intelligence has always presented a prickly problem for regulatory agencies tasked with ensuring the safety of not only the products that end up in consumers’ hands, but the processes by which those products are created.
AI’s greatest strength is that it can work faster than humans by automating cumbersome tasks, whether that’s sifting through molecules for one with the right effects and safety profile or sorting through large data sets to find patients for a clinical trial. But when work is automated, it’s also obscured in the “black box” of an algorithm. And in the age of machine learning, not even the designer of that algorithm necessarily knows exactly how it works.
This makes regulators, who want to examine every step of the drug creation process for safety and accuracy, understandably nervous. But regulators also recognise that AI is here to stay, and its efficiency gains are too big to ignore.
“The FDA is committed to supporting innovative approaches for the development of medical products by providing an agile, risk-based framework that promotes innovation and ensures the agency’s robust scientific and regulatory standards are met,” Dr Robert M. Califf, who was FDA Commissioner at the time, said in a statement. “With the appropriate safeguards in place, artificial intelligence has transformative potential to advance clinical research and accelerate medical product development to improve patient care.”
The draft guidance that came out on 6th January this year seeks to require pharma companies to be transparent about how they use AI, at least within the scope of regulatory decision-making, and gives them a framework for evaluating models based on the level of risk they pose to patient safety.
What does the guidance say?
The 23-page guidance lays out a broad process for creating and executing a credibility assessment plan for an AI model – essentially the way that a company intends to ensure and prove that its model is, and remains, accurate in its output.
The guidance provides a couple of hypothetical examples, but it’s meant to be broadly applicable to the full range of potential AI use cases to “produce information or data to support regulatory decision-making regarding safety, effectiveness, or quality for drugs.” Notably, the guidance is not concerned with the use of AI in drug discovery, as that comes well before regulatory decision-making.
For those use cases that do fall within its scope, FDA provides a seven-step framework:
- Define the “Question of Interest”. What question is this model being employed to answer?
- Define the “Context of Use”. How will the model be used: to make decisions by itself or in conjunction with other sources of evidence?
- Assess the “AI Model Risk”. A function of both the potential fallout of a model failure and the level of involvement the AI has in ultimate decision-making (a simple illustration of this two-factor idea follows this list).
- Develop a plan to establish model credibility. Describe to the FDA the model, the data used to train it, and the training process, as well as the plan for evaluating the model.
- Execute that plan.
- Document the results of the credibility assessment. Let FDA know how the model performed.
- Determine the adequacy of the model for the stated context of use. Taking that information into account, determine whether the model needs to be further developed, needs to be used in a lower-risk context, or can be used as planned.
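The guidance describes model risk qualitatively rather than as a formula, but the two-factor idea in step 3 can be sketched in a few lines. The tiers, scoring, and function below are illustrative assumptions only – the FDA does not prescribe this matrix or any particular scoring scheme:

```python
# Hypothetical sketch: the draft guidance treats AI model risk as a combination of
# "model influence" (how much the AI output drives the decision) and "decision
# consequence" (the potential fallout if the model is wrong). The tiers below are
# illustrative only -- the FDA does not prescribe this exact matrix or scoring.

LEVELS = ("low", "medium", "high")

def model_risk(model_influence: str, decision_consequence: str) -> str:
    """Return an illustrative risk tier from two qualitative inputs."""
    if model_influence not in LEVELS or decision_consequence not in LEVELS:
        raise ValueError(f"expected one of {LEVELS}")
    # Risk rises with whichever factor is higher: a model that acts alone on a
    # high-consequence decision sits in the highest tier.
    score = max(LEVELS.index(model_influence), LEVELS.index(decision_consequence))
    return LEVELS[score]

# Example: AI output is the main evidence source for a moderately consequential decision.
print(model_risk("high", "medium"))  # -> "high"
```

The point of the exercise is simply that a model whose output carries the decision on its own, or whose failure would directly affect patient safety, lands in a higher tier and therefore faces more stringent credibility requirements.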
Step 4 is by far the most fleshed-out part of the guidance, requiring sponsors to describe their AI models in detail, give a rationale for choosing a specific modelling approach, and describe the data sets used to train and tune the model. It also includes instructions to avoid overfitting and bias, and to make sure there is little to no overlap between training data and testing data.
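The guidance does not say how sponsors should check for that overlap. As a minimal sketch of one way to screen for leakage between data splits, assuming each record carries a unique identifier (the `patient_id` column here is hypothetical):

```python
import pandas as pd

# Illustrative leakage check, not part of the FDA guidance itself. Assumes each
# record carries a unique identifier (here a hypothetical "patient_id" column).
def check_split_overlap(train: pd.DataFrame, test: pd.DataFrame,
                        key: str = "patient_id") -> set:
    """Return identifiers that appear in both the training and test sets."""
    overlap = set(train[key]) & set(test[key])
    if overlap:
        print(f"Warning: {len(overlap)} records appear in both splits")
    return overlap

# Usage sketch:
# train_df = pd.read_csv("training_data.csv")
# test_df = pd.read_csv("test_data.csv")
# leaked = check_split_overlap(train_df, test_df)
# assert not leaked, "resolve training/test overlap before the credibility assessment"
```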
“These guidelines aim to foster the development of AI models that are transparent, accurate, and safe, enhancing regulatory decision-making,” Vivek Ahuja, SVP Compliance at EVERSANA, told pharmaphorum in an email.
In addition, a special section of the guidance lays out considerations for a risk-based approach to lifecycle maintenance of AI models – making sure the outputs don’t drift over time and become less accurate or reliable. This section states that, depending on the risk level, companies may be required to redo the credibility assessment in response to changes in inputs or use cases, or simply after a certain amount of time has passed.
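The guidance likewise leaves the choice of drift metric to the sponsor. One common (though by no means mandated) approach is the population stability index, which compares a feature’s distribution at training time with its distribution in recent data; the threshold mentioned below is a rule of thumb, not an FDA requirement:

```python
import numpy as np

# Illustrative drift check, not prescribed by the guidance: the population
# stability index (PSI) compares the distribution of a feature at training time
# with its distribution in recent production data.
def population_stability_index(baseline: np.ndarray, recent: np.ndarray,
                               bins: int = 10) -> float:
    """Compute PSI between a baseline (training-time) sample and a recent sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Floor the proportions to avoid division by zero / log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Common rule of thumb (arbitrary, not from the FDA): PSI above ~0.2 suggests
# meaningful drift and might trigger a re-run of the credibility assessment.
```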
Finally, the guidance lists the different parts of FDA that sponsors should engage with depending on the nature of their AI model.
What will be the real impact of these guidelines?
Of course, what was released in January is just a draft guidance and the FDA is seeking comments – it has already received around 14, all of which are publicly viewable. Furthermore, the Trump administration has made clear it plans to make big staffing cuts at the FDA, which could affect or even shutter this process. Early cuts have already affected workers focused on AI, albeit on the medical device side rather than the drug side.
But, assuming that the regulator survives, and this guidance lives on in a similar form, what’s the real takeaway for pharma companies?
Broadly speaking, it probably won’t be too onerous for pharma companies, which are already used to extensive documentation of processes and results in other areas.
“A lot of the things that the guidance is asking for are probably things that most firms in the industry are already thinking about in terms of having transparency and explainability around the systems they're developing. Even though more information might be required for a company that's implementing something with AI, that additional information is not more onerous than they might have to do with conventional non-AI approaches,” Nikhil Pradhan, senior counsel at Foley & Lardner, told pharmaphorum.
Furthermore, some of the most impactful uses of AI in drug development, namely the use of AI in target selection and drug discovery, don’t even fall under the guidance’s purview. And on the plus side, pharma companies that move quickly to embrace this guidance could benefit competitively.
“The pharma sector will face initial challenges related to technology onboarding costs upon implementing the guidance, but companies that develop robust, credible AI models in alignment with FDA consultations may gain an early mover advantage,” Ahuja said.
There is one impact that could be meaningful, however, and that’s around intellectual property, Pradhan said. Companies have two major options for protecting proprietary models and algorithms: patenting them or keeping them a trade secret. This guidance could essentially take away the second option for any model to which the guidance applies.
“If a company has an AI model that's used for drug development and it's predicting or classifying who should get certain treatments or things like that and, maybe historically, you could just treat it like a black box,” he said. “But now, if they have to tell the FDA about it, then it might be worth putting more resources into their patent protection.”
Michael Young, co-founder of Lindus Health, a next-gen CRO that uses a lot of AI, says the guidance is a welcome affirmation that the agency is taking AI seriously.
“The FDA’s latest guidance further cements its leadership in setting standards for AI in life sciences. By establishing a risk-based framework, the agency ensures that AI models used in regulatory decision-making are transparent, credible, and rigorously validated. The emphasis on context of use underscores the need for AI applications to be assessed based on their specific impact, with higher-risk implementations requiring more stringent validation. Encouraging early engagement with the FDA provides companies with a clear pathway to align AI strategies with regulatory expectations. Additionally, the guidance highlights the importance of ongoing monitoring and lifecycle management, recognising that AI models continuously evolve,” he told pharmaphorum in an email.
Broadly, the industry seems to support this guidance, which is not surprising considering it was created with industry input, based on feedback from two 2023 discussion papers and an FDA-sponsored expert workshop in December 2022.
Hopefully the clarity provided by this guidance will encourage more innovative uses of AI to improve drug development.
About the author
Jonah Comstock is a veteran health tech and digital health reporter. In addition to covering the industry for nearly a decade through articles and podcasts, he is also an oft-seen face at digital health events and on digital health Twitter.