AI's promise, biotech's peril: How unequal access to computing power threatens medical breakthroughs
The biotech industry stands at a fork in the road. One way leads to a future in which AI-driven breakthroughs remain the exclusive domain of pharmaceutical giants with billion-dollar budgets. The other, to a landscape where breakthroughs regularly emerge from start-ups, academic labs, and research teams from San Francisco to Singapore, and everywhere in between. The path we choose will determine not just market winners, but the pace and breadth of medical innovation for decades to come.
As we enter what some are calling "Biotech 3.0" – an era defined by artificial intelligence (AI) and computational biology – the stakes couldn't be higher. Companies that harness AI effectively will compress drug development from a decade to years, identify novel targets for untreatable diseases, and deliver personalised therapies at scale. Those that don't will be stuck in the slow lane, soon to be left behind or rendered irrelevant. Sadly, they will take their potentially transformative science – and their potential cures – with them.
Access all areas – or access denied?
AI infrastructure – the hardware and software components required to build, train, deploy, and maintain AI models and applications – has become biopharma's ace up the sleeve. But this isn’t a transformation that only the sector’s superpowers can win. The same computational power that enables AlphaFold to predict protein structures, or allows Converge Bio to analyse 36 million cells for patient-level insights, is now theoretically within reach for anyone. Access, however, remains frustratingly unequal.
While companies like Insilico Medicine have been able to identify novel drug targets and bring them to IND-enabling studies in under 18 months for less than $2.6 million – a process that traditionally takes years and tens of millions – most smaller biotechs still struggle to access the basic computational resources needed to analyse their data effectively.
Other tantalising examples of success abound: mid-sized player Recursion Pharmaceuticals has built an AI-driven platform that screens millions of biological perturbations, attracting partnerships with Bayer and Roche. Absci is using generative AI to design antibodies from scratch, bypassing years of trial and error. And Atomwise is now delivering its virtual screening capabilities, which once required expensive supercomputing clusters, to over 250 collaborators, including many small and academic groups, via cloud-based AI.
These should not be isolated victories, but glimpses of what becomes possible when computational barriers fall. Yet, for every Recursion or Absci, many more promising biotechs remain locked out of the AI revolution by a combination of cost, complexity, and lack of access to state-of-the-art infrastructure.
The hidden barriers favouring giants
Beyond raw computational power, several systemic barriers continue to favour larger organisations, threatening to concentrate innovation in fewer hands.
Data monopolies remain perhaps the most insidious challenge. Large pharma companies sit atop decades of experimental data, electronic health records, and annotated omics datasets – a treasure trove that smaller companies can't match. When AI models are only as good as their training data, this advantage becomes nearly insurmountable.
Infrastructure complexity presents another hurdle. While cloud services have lowered entry costs for basic applications, advanced use cases – massive molecular simulations, multi-modal model training, or integrated lab automation – still require expensive, specialised set-ups that smaller firms struggle to build and maintain.
Regulatory uncertainty adds another layer of complexity. Navigating frameworks for AI-based drug development, from synthetic data usage to model-based regulatory submissions, requires specialised legal and compliance expertise that large companies employ in-house, but smaller biotechs can rarely afford.
The talent war further tilts the playing field. Cross-functional teams that deeply understand both life sciences and machine learning remain scarce and expensive. Big pharma can afford to hire teams of biologists, data scientists, and ML engineers; start-ups often rely on external consultants or skeleton crews.
A fork in the road
The consequences of failing to address these disparities extend far beyond business competition. If smaller biotechs cannot adopt AI effectively, they face strategic irrelevance in a field increasingly driven by computational discovery. They'll struggle to compete on speed, cost, and data insights. Investment will flow elsewhere. Most critically, they risk scientific obsolescence as the gap between AI-enabled and traditional discovery widens into a chasm.
This concentration of capability doesn't just threaten business models – it threatens lives. History shows that breakthrough therapies often emerge from unexpected sources. Limiting AI access to a handful of large players would limit the diversity of approaches, targets, and solutions. In a world grappling with antibiotic resistance, rare diseases affecting small populations, the complexities of ageing, and the threat of new pandemics, we cannot afford such constraints.
The path forward: Infrastructure as a public good
The ideal way forward would systematically dismantle the barriers that put smaller players at a disadvantage, and treat AI infrastructure not as a luxury, but as essential research equipment – as fundamental as microscopes or mass spectrometers.
First, we must solve the computational roadblock. Life sciences need purpose-built cloud platforms offering scalable, affordable access to graphics processing units (GPUs) – the specialised chips originally designed for gaming that now power AI calculations at speeds impossible with traditional processors. The market is likely to solve this part of the problem; my company, Nebius, is one of several aiming to do this.
We and others understand that these platforms must provide not just raw computing power, but the complete infrastructure biotech teams need: automated workflows for drug discovery, pre-trained models for genomics analysis, and cloud-native tools that handle the complex data processing required for everything from protein folding simulations to patient response predictions. When a start-up can access this full stack of AI infrastructure without building their own data centre or managing complex technical systems, David can finally compete with Goliath.
Second, regulatory innovation must keep pace with technological innovation. Initiatives like the US FDA's Model-Informed Drug Development pilots point the way forward. We need modular, accessible compliance frameworks that smaller players can navigate without armies of lawyers. Regulatory sandboxes for AI applications, clear guidance on synthetic data usage, and streamlined pathways for computational evidence would do much to level the playing field. But equally important is enabling the global flow of ideas, models, and datasets across borders while respecting local rules. Healthcare AI's greatest bottleneck isn't just compliance; it's fragmentation. Building neutral, high-performance environments where teams can run complex, multi-country research without reinventing the wheel each time will be critical to making healthcare innovation a truly global enterprise.
Finally, we need new models of collaboration. Big pharma increasingly relies on smaller biotechs as specialised consultants for AI projects, recognising their deep expertise in specific scientific domains. This points toward a better future model: collaborative ecosystems that distribute both resources and expertise. The biotech-pharma partnership model is already evolving beyond traditional licensing deals. J&J's JLABS supports over 750 companies with 'no strings attached' laboratory access, while Pfizer's Breakthrough Growth Initiative commits $500 million to clinical-stage biotechs. Novartis has made available clinical trial data covering more than two million patients through its Biome platform. These models show how partnerships can provide infrastructure and data access without requiring acquisition or equity stakes – enabling smaller biotechs to maintain independence while accessing enterprise-scale resources.
A call to action
The message for policymakers, investors, and industry leaders is clear: ensuring equitable AI access isn't just about fairness – it's about maximising our collective shot at breakthrough therapies. Every barrier we maintain between a promising research team and the computational resources they need is a potential cure delayed or lost.
For investors, this means recognising that the highest returns may come not from backing companies with the biggest computational resources, but from enabling those with the best ideas to access such resources. Family offices and impact investors, in particular, have an opportunity to reshape the landscape by prioritising infrastructure access alongside traditional drug development.
For policymakers, this means extending existing infrastructure-sharing models to computational resources. Just as the NIH's Molecular Libraries Program provides researchers access to chemical compounds they couldn't afford individually, and NSF BioFoundries offers free access to specialised biotech facilities, we need similar programs for AI compute infrastructure. The UK's £1 billion AI Opportunities Action Plan and Germany's National High Performance Computing Alliance show governments beginning to apply this same logic to computational resources – but the scale must match the AI revolution's demands.
For industry leaders, the path forward is clear: expand resource-sharing programmes that have already proven successful. JLABS' model has achieved an 88% company survival rate while generating $58 billion in deals. The Quebec Consortium for Drug Discovery leverages $645 million across nine pharma companies and 120+ institutions. These aren't acts of charity – they're strategic investments that have launched 100+ start-ups, created novel therapies, and strengthened the ecosystem.
The clock is ticking
Choices made now about AI accessibility will echo for decades. We can build a future where innovation flourishes wherever talent meets opportunity – or we can sleepwalk into a situation where limited access to these vital tools stymies progress, and outdated regulatory structures hold back the entire ecosystem.
Making AI accessible is about fulfilling our industry's promise to develop the best treatments for those who need them – regardless of which logo appears on the laboratory door.
About the author
Dr Ilya Burkov leads global healthcare and life sciences growth at Nebius, where he helps biotech teams leverage AI to accelerate drug discovery. With over 15 years in healthcare and life sciences – including hands-on experience at Addenbrooke's Hospital in Cambridge, UK, and a strong background at AWS – he has previously worked at several healthcare start-ups across the UK, Europe, and Asia.
