Why security-first design is the fix for stalling AI pilots

As artificial intelligence continues to reshape the pharmaceutical industry, it presents both opportunities and challenges. The ability to streamline clinical trials, personalise treatments, and accelerate drug development is groundbreaking. However, the security challenges associated with integrating AI into highly regulated and sensitive healthcare ecosystems are also significant.
A recent report found that 64% of pharmaceutical executives are hesitant to integrate AI into drug development because of security concerns. That hesitation is understandable, especially as AI budgets are outpacing IT spending at 60% of organisations, underscoring a misalignment between AI ambitions and IT readiness.
A security-first approach
For AI to be transformative in the pharmaceutical industry, it must be designed with a security-first approach. This means embedding data protection and compliance into the architecture of AI systems from the earliest stages of development, rather than as an afterthought once the model is ready to deploy.
In an industry that deals with large volumes of sensitive patient and clinical data, the stakes are too high to take a reactive approach to security. A single flaw, whether in data collection, model training, or interface deployment, can undermine public trust, violate privacy laws, and compromise entire programmes. In drug development, where years of R&D and billions of dollars are on the line, the consequences of a security disruption can extend far beyond financial loss.
Building secure AI isn’t just about ticking regulatory boxes. It’s about anticipating evolving threats, ensuring data integrity, and creating transparent systems that stakeholders can trust.
Too often, companies approach AI with the mindset of “test now, secure later”. This results in tools that fail to scale because they can’t meet the rigorous data privacy, security, and compliance demands of real-world deployment.
We see this play out in the numbers: only 30% of healthcare AI pilot tools ever reach full production. Many models perform well in testing but fail when introduced to enterprise systems. Successfully transitioning to production isn’t just about the technology; it is also about managing the challenges and risks around compliance, security, and operational scale.
AI models require access to vast datasets, which are often stored in siloed systems with differing access controls, data formats, and consent requirements, making integration and interoperability difficult. On top of this, companies must navigate global data privacy regulations and emerging AI-specific laws, which requires both legal expertise and a deep understanding of the underlying technology. Infrastructure readiness is another significant hurdle: many legacy IT systems were not designed for the compute, bandwidth, and real-time analytics that modern AI tools demand.
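To make the consent problem concrete, here is a minimal sketch of consent-aware data access in Python. The Record structure, its field names, and the fetch_for_training helper are hypothetical illustrations, not a reference to any particular system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Record:
    """One row pulled from a siloed clinical system (hypothetical shape)."""
    patient_id: str
    source_system: str
    consent_scopes: frozenset   # purposes the patient agreed to
    consent_expiry: date

def fetch_for_training(records, purpose, today):
    """Yield only records whose consent covers this purpose and has not lapsed."""
    for rec in records:
        if purpose in rec.consent_scopes and rec.consent_expiry >= today:
            yield rec

records = [
    Record("p-001", "ehr_eu", frozenset({"trial_analytics"}), date(2030, 1, 1)),
    Record("p-002", "lab_us", frozenset({"billing"}), date(2030, 1, 1)),
]
# Only p-001 passes the gate: its consent covers the training purpose.
eligible = list(fetch_for_training(records, "trial_analytics", date.today()))
```

The point of the gate is that records from every silo pass through the same consent check before they ever reach a training pipeline, rather than each integration re-implementing its own rules.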
A fundamental rethinking of AI deployment
Overcoming these barriers to implementation requires more than quick fixes. It calls for a fundamental rethinking of how we build and deploy AI in healthcare, starting with cross-functional collaboration. Security and privacy teams must be involved from day one, working alongside data scientists, engineers, and compliance leads to design systems that are robust, transparent, and resilient.
Every user, device, and dataset must be continuously verified and validated, in line with zero-trust principles. AI solutions should embrace privacy by design, integrating data minimisation, anonymisation, and consent management from the start rather than as afterthoughts. Explainability is also crucial, particularly in drug development, where AI outputs must be communicated clearly enough to meet regulatory requirements and build trust among clinicians, regulators, and patients. Finally, scalable compliance frameworks are essential to keep pace with evolving regulations, allowing organisations to navigate the requirements of multiple jurisdictions and regulatory bodies efficiently.
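As one illustration of privacy by design, the sketch below shows data minimisation and pseudonymisation applied before a record leaves its source. The field names, the allow-list, and the minimise helper are assumptions for the example, not a prescribed implementation:

```python
import hashlib
import hmac

# Data minimisation: only the fields the model genuinely needs survive.
ALLOWED_FIELDS = {"age_band", "diagnosis_code", "lab_result"}

def pseudonymise_id(patient_id: str, secret_key: bytes) -> str:
    """Replace the raw identifier with a keyed hash (HMAC-SHA256) so records
    can still be linked across datasets without exposing the real patient ID."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, secret_key: bytes) -> dict:
    """Strip everything outside the allow-list and swap in a pseudonym."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["subject_ref"] = pseudonymise_id(record["patient_id"], secret_key)
    return out

raw = {"patient_id": "p-001", "name": "Jane Doe",
       "age_band": "40-49", "diagnosis_code": "E11", "lab_result": 6.8}
clean = minimise(raw, secret_key=b"rotate-me-and-store-in-a-vault")
# clean keeps age_band, diagnosis_code, lab_result and subject_ref; name is gone.
```

Using a keyed hash rather than a plain one matters: without the secret key, an attacker cannot rebuild the pseudonyms from a list of known patient IDs.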
Security should not be seen as a barrier to innovation, but rather as the foundation that allows innovation to scale responsibly. As the pharmaceutical industry grapples with AI integration, we must resist the urge to race toward functionality without first building trustworthiness.
If we hope to bring AI’s full potential into the heart of drug development, security can no longer be a siloed function. It must be an integral part of AI strategy, product development, and organisational culture.
The opportunity ahead is enormous. But so is the responsibility. By prioritising security from the beginning, we not only protect sensitive data and meet compliance obligations, but we also lay the groundwork for meaningful, scalable, and ethical innovation in one of the world’s most critical industries.
About the author
Siwar El Assad is a cybersecurity leader dedicated to leveraging technology for societal impact. Currently serving as the chief information security officer (CISO) at Quant Health, she drives cybersecurity initiatives that align with organisational goals and global regulatory standards. She also serves on the advisory board of a security start-up, where she plays a critical role in shaping next-generation data protection solutions that balance robust security with seamless user experience. Her career began at 18, when she landed a role at Check Point after cracking a series of online cybersecurity challenges the company had labelled nearly unsolvable.