AI is reshaping drug discovery: now it must prove it can be trusted

In an era marked by mounting drug shortages and a historic shift away from mandatory animal testing, the pharmaceutical landscape is undergoing rapid transformation. Hybrid AI, technology that combines mechanistic modelling with machine learning, is emerging as a powerful approach to accelerate drug development. But as innovation pushes the boundaries of speed and scale, a vital question remains: can we use transparency as a way to prove safety and build public trust?

Trust hinges on transparency. In drug development, that means showing how models are trained, which data are used, and how predictions of safety and efficacy are generated. It doesn’t require publishing every line of code, but it does require clarity, so regulators can evaluate reliability, clinicians can understand assumptions, and patients can believe the results aren’t a black box.

The potential of hybrid AI

The promise of hybrid AI is significant. These systems process vast datasets, from clinical trial records to genomic profiles, to identify new drug candidates, simulate therapeutic responses, and personalise treatments for specific patient populations. This capability is particularly critical as the US faces shortages of life-saving medications, from antibiotics to paediatric formulations. Accelerated development timelines could be key to addressing these gaps.

However, this urgency brings complex scientific and ethical challenges. Animal studies and other preclinical models have long served as conservative gatekeepers in the drug approval process. Now, with the FDA beginning to phase out mandatory animal testing, in silico simulations and hybrid AI are stepping in to take their place. This shift places new weight on algorithmic decision-making in public health: a responsibility that requires not just accuracy, but clarity.

Many AI models, particularly deep learning systems, can yield impressive predictions, but often with limited interpretability. In high-stakes areas like oncology or cardiovascular disease, this “black-box” problem is more than inconvenient; it can be dangerous. Regulators, physicians, and patients must be able to understand the reasoning behind AI-driven recommendations. In this new phase of drug development, explainability isn’t optional; it’s essential.

Prioritising transparency alongside performance

To meet this challenge, a number of platforms are adopting hybrid approaches that prioritise transparency alongside performance. For example, some systems combine mechanistic models with machine learning to simulate drug behaviour in virtual patients, aiming to model pharmacokinetics (PK), pharmacodynamics (PD), toxicity, and efficacy with a high degree of biological relevance. Crucially, they also make visible how predictions are generated, highlighting influential variables, modelling assumptions, and uncertainty margins. These efforts represent an important step toward building trust in AI-assisted drug development.
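
To make this concrete, here is a minimal sketch, in Python, of how a mechanistic one-compartment pharmacokinetic model might be paired with a small data-driven correction across virtual patients. The covariates, parameter values, and random-forest correction are illustrative assumptions for this example only, not a description of any specific platform’s implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def concentration(t, dose, ka, ke, vd):
        """One-compartment oral PK: plasma concentration (mg/L) at time t (hours)."""
        return (dose * ka) / (vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    rng = np.random.default_rng(0)

    # Synthetic virtual-patient covariates: age (years), weight (kg), renal function score.
    covariates = rng.uniform([20.0, 50.0, 0.4], [80.0, 110.0, 1.0], size=(200, 3))

    # Mechanistic prior: clearance (L/h) assumed to scale with weight and renal function.
    prior_clearance = 5.0 * (covariates[:, 1] / 70.0) * covariates[:, 2]

    # "Observed" clearance adds unexplained variability the data-driven part must capture.
    observed_clearance = prior_clearance * rng.lognormal(0.0, 0.2, size=200)

    # Learn only the residual between observation and the mechanistic prior.
    residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
    residual_model.fit(covariates, observed_clearance - prior_clearance)

    # Predict for one new virtual patient; spread across trees gives an uncertainty margin.
    patient = np.array([[55.0, 82.0, 0.7]])
    prior = 5.0 * (patient[0, 1] / 70.0) * patient[0, 2]
    tree_preds = np.array([tree.predict(patient)[0] for tree in residual_model.estimators_])
    clearance = prior + tree_preds.mean()
    print(f"Predicted clearance: {clearance:.2f} L/h (+/- {tree_preds.std():.2f} across trees)")

    # The mechanistic model turns that parameter into a concentration-time profile.
    vd = 40.0                                  # assumed volume of distribution (L)
    times = np.array([1.0, 4.0, 12.0, 24.0])   # hours after a 100 mg oral dose
    conc = concentration(times, dose=100.0, ka=1.2, ke=clearance / vd, vd=vd)
    print("Plasma concentration (mg/L):", dict(zip(times.tolist(), np.round(conc, 3).tolist())))

    # Feature importances make the learned correction auditable, not opaque.
    for name, weight in zip(["age", "weight", "renal_score"], residual_model.feature_importances_):
        print(f"  {name}: importance {weight:.2f}")

Because the mechanistic backbone is explicit and the learned correction reports both its spread across trees and its most influential covariates, the resulting prediction stays inspectable rather than disappearing into a black box.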

We’ve already seen the consequences of moving too fast without adequate safeguards. In 2024, several AI-assisted therapies received emergency use authorisation to address supply shortages, but post-market complications in vulnerable populations revealed serious flaws in the models’ predictive reliability. The issue wasn’t just imperfect data; it was a lack of transparent validation that hindered course correction. The takeaway was clear: high accuracy isn’t enough; we need high clarity.

The FDA has started to address this concern. In April 2025, new draft guidance encouraged developers to document how AI models are trained, validated, and interpreted. Still, voluntary guidelines can only go so far. If hybrid AI is to play a long-term role in replacing legacy models, animal or otherwise, transparency must become a standard, not a suggestion.

Some platforms are already building toward this future. Alongside simulation accuracy, they are emphasising model validation, regulatory engagement, and auditability. Features such as explainability reports, independent validation workflows, and stakeholder engagement mechanisms are being developed to align with regulatory needs and ethical expectations. These practices help ensure AI-assisted insights are not only fast, but defensible and trustworthy.
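
To illustrate what such auditability could look like in practice, the short sketch below records the kind of information a per-prediction explainability report might capture: model version, training data sources, uncertainty bounds, influential variables, and stated assumptions. The schema and field names are hypothetical, not a regulatory standard or any vendor’s actual format.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ExplainabilityReport:
        model_version: str                         # exact model build behind the prediction
        training_data_sources: list[str]           # datasets the model was trained on
        prediction: float                          # e.g. predicted clearance or toxicity score
        uncertainty_interval: tuple[float, float]  # bounds reported alongside the point estimate
        influential_variables: dict[str, float]    # covariate -> contribution weight
        modelling_assumptions: list[str]           # stated mechanistic assumptions
        validated_by: str = "pending"              # independent validation sign-off

    report = ExplainabilityReport(
        model_version="hybrid-pk-0.3.1",
        training_data_sources=["internal preclinical PK panel", "public genomic cohort"],
        prediction=4.1,
        uncertainty_interval=(3.4, 4.8),
        influential_variables={"weight": 0.51, "renal_score": 0.37, "age": 0.12},
        modelling_assumptions=["one-compartment kinetics", "complete oral absorption"],
    )

    # Serialising every report makes predictions reviewable and auditable after the fact.
    print(json.dumps(asdict(report), indent=2))

Keeping such a record alongside every prediction gives regulators and independent reviewers something concrete to audit long after the model has run.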

Responsible hybrid AI

This is about more than regulatory checkboxes. Public trust is fragile. While the COVID-era vaccine rollout was a scientific triumph, it also highlighted deep-seated scepticism about speed and safety. When paired with current headlines about contamination-related shortages, recalls, and fragile supply chains, it becomes clear that transparency is not a barrier to innovation; it enables it.

So, what does responsible hybrid AI look like?

  • Interpretability by design: Models that combine data-driven insights with mechanistic frameworks enable predictions that are not only accurate, but explainable.
  • Regulatory-ready validation: Documented and traceable workflows support better understanding and faster, more confident decision-making by regulators.
  • Inclusive collaboration: Engaging clinicians, ethicists, and patient advocates ensures AI development reflects real-world complexity and diverse perspectives.

Transparency may slow down certain processes, but it accelerates acceptance, adoption, and impact in the long run. Patients are more likely to embrace AI-powered treatments if they understand how they work. Regulators can move faster when models are auditable. And clinicians adopt new tools with more confidence when the science behind them is visible and verifiable.

As drug development evolves to meet urgent global demands, explainability must be a foundational principle. Speed and safety are not at odds, but realising both requires systems designed for accountability from the start.

About the author

Dr Jo Varshney is the founder and CEO of VeriSIM Life and a recognised pioneer in AI-driven drug development. Born into a pharmacological family in India, she developed a lifelong passion for science and technology that led her to earn a DVM and PhD in Comparative Oncology and Genomics from the University of Minnesota, along with advanced training in Comparative Pathology and Computational Sciences at Penn State and UCSF. She founded VeriSIM Life to close the critical gap between preclinical research and clinical success. Dr Varshney’s leadership has earned wide recognition, including being named among the Top 100 Women in AI and one of the Most Influential Women in Business by the San Francisco Business Times. A frequent keynote speaker and thought leader, she is committed to advancing diversity, equity, and inclusion while inspiring the next generation of scientists and entrepreneurs.
