Building an AI-ready workforce in life sciences

Pharma companies are investing millions in developing and integrating AI platforms, yet, despite their promise, many remain underused simply because the people expected to work with them lack the confidence to start. For those in the sector, this is more than a missed opportunity: it signals a shift in the skills and roles that will define the future of work.

Eularis co-founder and CEO Dr Andree Bates has seen the pattern repeat itself. “Everyone […] is familiar with ChatGPT and things like that, but they’re not necessarily using it effectively. They’re using it for correcting emails or brainstorming […] But you can use it for so much more in your workflow.”

Bridging this gap between casual use and confident application is already shaping the kinds of professionals that organisations need most.

For individuals, this gap also represents opportunity. New roles are taking shape around hybrid skills – data science paired with microbiology, psychology with cybersecurity, law with ethics. Understanding how these combinations are emerging is the first step to navigating the new normal.

Literacy as the new baseline

The most valuable skills in the age of AI are not advanced coding or complex mathematics. What matters most, for the majority of professionals in life sciences, is fluency: knowing how to use AI effectively, understanding its limitations, and spotting when its outputs need to be challenged. That fluency is the baseline for anyone hoping to work alongside these systems with confidence.

Bates calls this “AI literacy” – a baseline capability that should sit alongside digital or data literacy. She also stresses the importance of data fluency. As algorithms become more tightly integrated into clinical, regulatory, and commercial tasks, experts need to be able to assess the quality of the inputs. That means recognising bias, interpreting results, and flagging gaps in evidence.

Practical interaction is another core skill. Prompting well is not about tricking a system, Bates explains, but about communicating clearly. “Imagine you’ve got a genius-level intern who’s never done it before. How would you explain it to them? They’re going to learn really fast. But you’ve got to explain it step by step,” she says.

Through her work training industry professionals, she has seen how much difference this clarity can make. “They were blown away that I gave [the AI tool] like 10 instructions […] and it then did each sequentially and figured out how to do it all.”
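A hypothetical illustration of the kind of structured request Bates describes (the tasks here are invented for the example): rather than asking a tool to "summarise this trial report", a step-by-step prompt might read, "First, list the primary and secondary endpoints. Second, summarise the efficacy results for each arm. Third, flag any safety signals. Finally, draft a one-paragraph plain-language summary for a medical affairs audience." Each instruction is explicit, ordered, and small enough for the system to work through in sequence.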

Spotting the roles of the future

Alongside baseline literacy, AI is reshaping the more specialised ends of the workforce. Some roles that sound novel are simply evolutions of existing ones. Algorithm auditors, for example, are essentially data scientists tasked with monitoring model performance and preventing degradation. Others, however, reflect the new intersections AI has created between disciplines.

Bates points to empathy trainers as one example. “Because the AI without empathy training can be a little hostile, perhaps […] abrupt,” she explains. Training models to communicate in a way that feels human requires people with backgrounds in psychology or linguistics, paired with data science. Similarly, ethics specialists are emerging who combine legal or philosophical training with technical fluency to guide decisions on fairness, bias, and transparency.

Security roles are also diversifying. “I think we’re going to need security specialists that have psychology backgrounds […] The biggest flaw in security is not the tech side, it’s the human side,” Bates says. With deepfake scams and AI-driven fraud growing more sophisticated, pharma companies will need expertise that bridges technology and human behaviour. “It’s like this arms race – the bad actors are constantly upping their AI. So [defenders have] got to constantly up their AI.”

At the more technical frontier, demand is growing for what Bates calls “double PhDs.” Data scientists with deep expertise in genetics, microbiology, or systems biology are increasingly necessary to ensure algorithms can handle highly specific research problems. “One isn’t enough,” she says, reflecting on recent hires who hold doctorates in both data science and a biological field.

These emerging roles highlight a broader truth: future-ready teams will not be built solely on technical brilliance, but on unusual combinations of expertise that allow AI to be applied responsibly and effectively across life sciences.

Keeping skills alive in a fast-moving field

AI does not stand still, and neither can the people working with it. Strategies, tools, and best practices that feel cutting-edge today will look outdated within a year or two. Yet, for all the talk about AI readiness, too many organisations treat training as an afterthought. Technology is rolled out across departments with a short briefing or an FAQ document, and employees are left to figure it out themselves. The result is predictable: low adoption, misplaced scepticism, and wasted investment.

Bates recalls working with a top ten pharma company where staff had access to an AI platform designed to support multiple functions. “Everyone I interviewed […] it’s never used,” she says. “They don’t, they’re scared of even going into it and, you know, there’s a lot of great stuff in there. But I said, have you been trained on it? And they said, no one’s trained us on it. They just released it with some information.”

Even when people do try, outcomes often disappoint — but not always for the reasons they expect. “They all said to me, oh, but we’ve tried AI, it just doesn’t come up with anything as good as us. And I was like, probably not, but let’s have a look,” Bates explains. When she reviewed their prompts, the issue wasn’t the tool, but the way it had been instructed. Once she showed the group how to prompt more effectively, the results improved dramatically.

“It just demonstrated to them that you need to understand how to interact with the AI tools, even the basic tools,” she says. “If you write the prompts in a particular way, you will get much better output than if you don’t.”

The most successful companies are treating workforce readiness as a continuous process, rather than a one-off initiative. Some are establishing internal AI academies that bring together technical training for data scientists with broader literacy programmes for commercial and scientific staff. Others are experimenting with mentorship models, pairing domain experts with AI specialists so that each can learn from the other. The aim is not only to build confidence, but to create a culture where experimenting with new tools is rewarded, not resisted.

“Even your strategy has to evolve as you go,” Bates says. Regular exposure to what’s new is vital. “We do mentoring of AI, giving updates every month […] update sessions of what the latest things [are].” These check-ins help employees avoid feeling left behind while keeping the organisation aligned on opportunities and risks.

Crucially, skills development is being linked to performance and career progression. Companies that integrate AI competence into promotion criteria and reward structures signal that this is not optional – it is part of what it means to succeed.

Preparing people, not just platforms

Any major shift in the workplace is likely to stir anxiety around job security. For Bates, though, the conversation about AI-enabled working is better framed around new expectations than job losses. That framing helps professionals understand how to blend technical fluency with domain expertise, and encourages organisations to treat skills development as a core investment rather than a side project.

For companies, the priority is clear. As Bates explains, training cannot be a one-off exercise or a generic webinar. It needs to be continuous, practical, and tied to performance. AI academies, mentorship programmes, and regular updates keep teams aligned with rapid advances. Change management must be built in from the start, ensuring employees see value in new tools, rather than fearing them.

For individuals, the opportunity lies in getting ahead of the curve. Building AI literacy, experimenting with prompt-writing, and strengthening data fluency are low-barrier ways to become more confident. Ethics and communication skills matter just as much, shaping how effectively AI can be used in regulated, high-stakes environments.

As Bates puts it, success comes when AI skill development is seen as a core competency, not a training initiative. Those who treat adaptability as part of their professional identity will not only keep pace with change – they will shape what the future of life sciences work looks like.

About the interviewee

Dr Andrée Bates is an artificial intelligence thought leader and the founder and CEO of Eularis, a firm that has been tackling pharma industry challenges with artificial intelligence since 2003.

Part of her work focuses on the initial strategic planning of AI across healthcare companies’ value chains, ensuring that pharma leaders can capture its value and use it to solve their unique growth challenges effectively. She is also a guest lecturer on six university MBA programmes in Health Innovation and AI.

About the author

Eloise McLennan is the editor for pharmaphorum’s Deep Dive magazine. She has been a journalist and editor in the healthcare field for more than five years and has worked at several leading publications in the UK.
