Implementing generative AI: A guide for commercial life sciences teams
More than 70% of generative artificial intelligence (GenAI) experiments fail to make it to production.1 Is this number higher than you thought? Let’s explore why.
With bold claims flooding the market, it can be extremely challenging for business leaders to differentiate between technological development and readiness for implementation.
In an industry saturated with articles extolling the transformative potential of generative AI, we take a different approach: examining where it works, where it doesn’t, and how to tell the difference.
Understanding the GenAI implementation gap
The distinction between experiment and production lies at the core of implementation challenges. Experiments demonstrate capabilities in a controlled environment. Production systems, however, must perform consistently across diverse data sets, users, languages, and technologies – considerations frequently overlooked during experimental phases.
Quality and consistency present formidable obstacles. Creating a single high-quality output is straightforward, but ensuring consistent quality at scale is more difficult. Approaches that are effective in controlled environments often fail in production due to real-world variables.
Another gap emerges when evaluating use cases. While it’s tempting to prioritise the problems that seem most interesting, this is inadvisable if they rarely come up. Instead, what delivers the greatest long-term value is focusing on high-frequency tasks. Specifically, repetitive, manual tasks can offer high-impact opportunities. Generative AI excels at automating text analysis and, increasingly, at generating text data – including code, schemas, or payloads that are needed repeatedly.
Additionally, implementations fall short when they don’t integrate into existing workflows. Adoption rates decline when tools require switching applications, regardless of the technology’s inherent capabilities.
Prioritisation framework for success
Successful implementation requires a structured evaluation of potential use cases. A prioritisation matrix rating each project for impact and effort should be complemented by a risk assessment. Categories such as business impact should reflect not only revenue generation and cost savings but also measures like customer satisfaction and user adoption.
In our experience, cost, efficiency, and speed improvements represent the most significant benefits for life sciences teams.
When evaluating costs, consider assessing four key factors:
- Data and product engineering complexity: interfaces, workflows, data sources.
- Change management requirements: scaling with user numbers and training needs.
- Computing demands: number of users, query length, model choice.
- Personal data handling: privacy and security measures.
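The prioritisation matrix described above can be sketched as a small scoring model. Everything here is illustrative: the scoring formula, the 1–5 scales, and the example use cases are assumptions for demonstration, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int  # 1-5: revenue, cost savings, satisfaction, adoption
    effort: int  # 1-5: engineering, change management, compute, privacy
    risk: int    # 1-5: consequence of errors, data sensitivity

    @property
    def priority(self) -> float:
        # Favour high impact relative to combined effort and risk.
        return self.impact / (self.effort + self.risk)

# Hypothetical candidates, for illustration only.
candidates = [
    UseCase("Regulatory filing drafting", impact=4, effort=4, risk=5),
    UseCase("Field-report summarisation", impact=4, effort=2, risk=2),
    UseCase("Novel-insight discovery", impact=2, effort=5, risk=4),
]

for uc in sorted(candidates, key=lambda u: u.priority, reverse=True):
    print(f"{uc.name}: {uc.priority:.2f}")
```

Under this toy scoring, the high-frequency, low-risk summarisation task ranks first, mirroring the article’s guidance to favour repetitive tasks over the most intellectually interesting ones.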
To minimise risks, ensure outputs undergo human review for consequential decisions. Create high-quality user experiences by integrating generative AI services into existing platforms, rather than deploying standalone tools. Finally, prioritise non-expert users first, as experts typically have a lower tolerance for limitations in new technologies.2
High-impact use cases
Our experience indicates the most successful generative AI implementations target three primary areas:
- Repetitive, manual tasks: Processes occurring with high frequency usually justify an automation investment. Consider both company-specific workflows and industry-wide repetitive tasks. While an individual pharmaceutical company may handle only a few regulatory filings, specialised providers can leverage industry-wide scale to develop tools that automate portions of these processes.
- Large volumes of text: Generative AI delivers transformative capabilities for text analysis compared with older natural language processing techniques. This is especially valuable for hard-to-mine text repositories: most organisations have historically centralised numerical data while leaving text data scattered, a gap that generative AI can readily fill.
- Code generation: Generative AI significantly accelerates coding tasks, including the development of advanced analytics. The technology has demonstrated remarkable capabilities in code assistance, making it increasingly valuable for life sciences applications that require custom analytics.
Generative AI rarely uncovers novel insights and patterns in data sets well known to human analysts. Instead, it excels at making information more accessible and processing it faster. Being realistic about this helps organisations avoid pursuing use cases that are destined to disappoint.
Best practices for implementation
Implementation success hinges on user experience, including convenience, business value, and reliability. When tools fit into existing workflows and provide immediate value, adoption follows more easily.
Setting proper expectations is crucial, particularly regarding capabilities and limitations. Users familiar with web search interfaces often expect unlimited scope, but implementations remain bounded by the organisation’s available data. Establishing this context builds appropriate trust.
Validation processes must be robust and ongoing. Unlike traditional programs with predictable error patterns, generative AI errors often manifest unpredictably. This necessitates comprehensive review processes across outputs and contexts.
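As a sketch of what ongoing review might look like in practice, the checks and thresholds below are illustrative assumptions: automated rules screen each output, and anything flagged is routed to a human reviewer.

```python
def review_flags(output: str, source_terms: set[str]) -> list[str]:
    """Return reasons an AI-generated output needs human review.

    The specific checks are illustrative; production systems would
    use richer, domain-specific validation.
    """
    flags = []
    if not output.strip():
        flags.append("empty output")
    if len(output) > 2000:
        flags.append("unusually long output")
    # Grounding check: at least one expected source term should appear.
    if source_terms and not any(t.lower() in output.lower() for t in source_terms):
        flags.append("no overlap with source material")
    return flags

# Illustrative usage with a hypothetical summary.
summary = "Adverse events declined 12% after the Q3 label update."
print(review_flags(summary, {"adverse events", "label"}))  # prints []
print(review_flags("", {"adverse events"}))
```

Because generative AI errors surface unpredictably, checks like these catch only the obvious failures; they narrow, rather than replace, the set of outputs requiring human judgement.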
As the field matures, integrated solutions will increasingly outperform standalone applications. Generative AI capabilities will become embedded features within established platforms, rather than separate tools that require additional user attention.
The next wave: Agentic AI
Agentic AI – empowered to perform tasks autonomously – is the next evolution of AI technology. It extends traditional robotic process automation, which is fragile when implemented through rule systems that don’t account for unforeseen scenarios.
Generative AI models can now resolve steps within decision trees that previously derailed automation, unlocking potential for end-to-end process management. This capability is especially powerful when embedded within platforms that provide context, data access, and execution environments.
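A minimal sketch of this idea: deterministic rules handle the routine steps, while a model call resolves the ambiguous branch that previously derailed the automation. The `classify_intent` function here is a hypothetical keyword stub standing in for a real generative AI call, and the routing destinations are invented for illustration.

```python
def classify_intent(free_text: str) -> str:
    """Stub for a generative AI call that resolves an ambiguous step.

    A real implementation would call a model; this stand-in uses
    keywords purely so the sketch runs end to end.
    """
    text = free_text.lower()
    if "side effect" in text or "reaction" in text:
        return "safety_report"
    return "general_inquiry"

def route_request(request: dict) -> str:
    """Decision tree with one model-resolved branch."""
    # Routine, rule-friendly steps stay deterministic.
    if request.get("channel") == "fax":
        return "manual_queue"
    # Ambiguous free text is the step rules alone could not handle.
    intent = classify_intent(request["body"])
    return "pharmacovigilance" if intent == "safety_report" else "medical_info"

print(route_request({"channel": "email", "body": "Patient reported a skin reaction"}))
# prints pharmacovigilance
```

The design point is that the model resolves a single bounded decision inside an otherwise deterministic workflow, rather than managing the whole process unsupervised.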
Measured expectations remain essential. Despite its capabilities, agentic AI won’t solve every use case or generate insights from underlying data. The technology supports expert-driven workflows, rather than replacing strategic thinking and domain knowledge.
Future developments and technology evolution
Generative AI technology is evolving through three distinct phases:
- Initially, companies create custom solutions where no industry standards exist – as with early personal computers built from manufacturer-specific components.
- This gives way to modular approaches with interchangeable parts and industry-defined standards.
- Finally, components converge into integrated offerings that reduce the complexity for end users and trade customisation for convenience.
As part of this evolution, small language models (SLMs) deliver “good enough” quality with substantial improvements in speed and reduced operating costs. Integration frameworks streamline development, while industry standards drive interoperability and compliance.
The future value proposition centres on platform-integrated AI. Deep integration with existing tools allows generative AI to enhance workflows without requiring users to learn entirely new interfaces. This is the key to both adoption and sustained impact.
Balancing innovation with reality
Successful generative AI implementation in life sciences demands careful prioritisation and setting realistic expectations. By targeting applications that combine high business impact with manageable cost and risk, organisations can increase the success rate of their experiments.
For commercial teams, generative AI’s promise remains compelling when implementation focuses on appropriate use cases. Through strategic integration into business processes, companies can drive innovation, enhance patient outcomes, and achieve operational excellence, but only by acknowledging the conditions where the technology works and where it doesn’t.
In a market saturated with articles highlighting the potential of generative AI, it’s crucial to also understand its limitations. This clarity can deliver more scalable and valuable results than experiments that are exciting, but unlikely to reach production.
References:
1. Now decides next: Generating a new future. Deloitte’s state of generative AI in the enterprise: Quarter 4 report. 2025 Jan.
2. Rink C, O’Reilly W. Where to deploy generative AI solutions in life sciences. IQVIA White Paper. 2025 Mar 19.
About the Authors
Charles Rink is senior principal of information management & analytics technology at IQVIA. He has over 20 years of experience in strategy consulting, technology, and analytics for the life sciences industry globally. Based in London, he brings extensive experience advising clients in commercial strategy, analytics automation, and omnichannel operations. He holds a BSc degree in Molecular Biochemistry and Biophysics and Economics from Yale University.
William O’Reilly is director of product offering development at IQVIA. He leads the generative AI and machine learning product capabilities across orchestrated analytics, including the IQVIA AI Assistant. With a diverse background as a clinical doctor, healthcare data analysis consultant, and data product manager, O’Reilly leverages his varied expertise to bring a unique perspective and create innovative solutions in life sciences technology products and services.
