2026: When clinical intelligence moves from promise to practice

In 2012, I sat in a hospital parking lot while my mother recounted a nightmare. My father had just undergone open heart surgery to replace a bicuspid aortic valve – a "technically successful" operation. That success was immediately eclipsed by a stroke on the operating table and the three-year recovery that followed.

Eight years later, in 2020, a tooth cleaning resulted in a staph infection that left him septic. Our family was flying blind again, jumping for joy over a clean echocardiogram because it was the only data point we had that week that offered any hope.

I think about my father's journey every time I show up for work. For two decades, the healthcare and life sciences industries have promised that "digital transformation" would help prevent chronic disease and catch illness before it strikes. It hasn't – not yet, at least.

Looking back, I see now that we've been building the foundation. We've spent twenty years achieving administrative digitisation: billing codes, cloud computing, electronic health records. We built the nervous system. Now, in 2026, we're finally ready to give it a brain.

We are about to enter a new era of clinical intelligence. The era of "digital" as a separate activity in healthcare – something you do at a screen – will begin to dissolve in 2026. The infrastructure is finally built. Now, the intelligence will begin to flow through it.

In the next twelve months, I expect meaningful acceleration across three AI-driven shifts that will define this next era. When I imagine the future of healthcare, I see it in terms of what it would have meant for my father and our family. I'm cautiously optimistic, but I want to be specific about what I expect in 2026, because each shift raises different questions: about authority and liability, about infrastructure and privacy, and about scarcity and value.

Shift 1: From automation to authority

For the last three years, the world has obsessed over generative AI that can write and create. By the end of 2026, we'll see agents granted genuine authority to act across the healthcare ecosystem: approving routine requests, routing patients, allocating resources, scheduling interventions.

This will mark a major shift from automation to authority, with AI making business decisions autonomously – taking action in operational workflows without waiting for human approval.

In 2012, after my father's stroke, the healthcare system struggled to support us operationally. Discharge planning was chaotic. Rehabilitation facility placement took days of phone calls. Insurance pre-authorisations for post-acute care created gaps in his recovery timeline. Home health coordination fell through the cracks.

An autonomous agent managing his post-operative care coordination could have changed everything – not by observing his clinical status, but by acting on it. The moment his surgical team documented "stroke complication", an agent could have automatically triggered a care coordination protocol: pre-authorising rehabilitation services, scheduling neurology follow-up, alerting the case manager, and initiating family communication – all before anyone had to make a phone call.

But the risks are already here too. A landmark 2019 study found that a widely used algorithm for allocating care resources systematically undervalued Black patients' healthcare needs – because it used cost as a proxy for need, and Black patients historically received less care even when equally sick. More recently, AI tools used for prior authorisation have been accused of denying claims at rates 16 times higher than those of human reviewers. In 2026, as agents gain greater authority to act – not just flag – these problems will intensify.

The regulatory response is already taking shape: six states have passed laws limiting insurers' use of AI to deny coverage. When the inevitable "hard lesson" moments hit in 2026, lawmakers and regulators will be forced to define clearer boundaries between human judgement and machine authority in healthcare operations.

Shift 2: The rise of ambient observation

In 2020, my father walked into his doctor's office weeks before his sepsis diagnosis, feeling "a little off." He couldn't articulate what was wrong. Neither could anyone else. He was sent home.

Imagine if that visit had gone differently. If the exam table had detected subtle changes in his weight distribution and posture. If ambient sensors had picked up an elevated respiratory rate from his breathing patterns. If his Apple Watch data, showing a week of declining heart rate variability, had been pulled into the same system. No single signal would have been conclusive. But together? The convergence might have triggered an alert: "Recommend priority triage – converging indicators suggest early systemic inflammation." He would have seen the doctor first, and the doctor would have known what to focus on – not because he complained the loudest, but because the building knew.

This isn't science fiction. An October 2025 systematic review in BMC Infectious Diseases found that machine learning and deep learning models can predict sepsis earlier than traditional diagnostic methods, though performance depends on data quality. The question for 2026 won't be whether the algorithms work. It will be whether we build the sensor networks to feed them.

If the first shift in clinical intelligence is about AI acting on business decisions, this second shift is about AI watching the world and recommending clinical interventions based on its predictions. This is the sensing and thinking layer – the infrastructure that observes, predicts, and surfaces signals that humans and agents can act upon.

The infrastructure is already taking shape. In 2024, 71% of hospitals reported using predictive AI integrated into their electronic health records, up from 66% in 2023. Of those, 92% use it to forecast health trajectories for inpatients.

2026 will be the year we begin building out the physical sensor infrastructure to feed these models with continuous, real-world data. Millions of patients already wear clinical-grade sensors on their wrists and fingers. What will change this year won't be the wearable – it will be the environment. Ambient sensors will begin deploying at scale throughout the healthcare ecosystem: in doctors' offices, hospital rooms, and assisted living facilities.

The price of this safety net? Privacy. Will we consent to ambient monitoring of our bodies in public spaces? History suggests yes. We traded our browsing habits for free search. Our social graphs for free networking. Our location data for free maps. When the trade becomes "your biometric data for earlier cancer detection," the calculus won't even be close. The privacy debates of the 2020s will seem quaint when the alternative is "we could have caught your father's infection before he went septic." The "by entering this office you consent to AI monitoring for clinical purposes" form will become the new HIPAA disclosure – signed without reading at every doctor's visit, because the alternative is unthinkable. We're inviting HAL 9000 into the exam room and, this time, the mission is us.

Shift 3: The human premium

Most of healthcare is routine. Strep throat diagnosis. Prescription refills. Stable chronic disease check-ins. Straightforward dermatology screenings. AI will increasingly handle it – and for these high-volume, low-complexity consultations, it may be just as good as a clinician. Cheaper and quicker, too.

In 2026, I expect we'll see health systems begin piloting AI-primary consultations: chatbot triage, algorithm-driven diagnosis, and telehealth physician supervision for routine cases. We've already watched this happen in mental healthcare – the BetterHelp-ification of therapy delivers care at scale, speed, and reasonable cost. This year, that model will begin rippling through primary care, dermatology, and routine specialties.

But not every moment in healthcare is routine. And not every moment is about information.

After my father's stroke, we had a care team that showed up as humans, not just providers. They helped us navigate impossible decisions – not by giving us more data, but by bearing the weight of uncertainty alongside us. That made all the difference.

That's the human premium: not information synthesis, but presence, advocacy, and the willingness to share the burden of decisions that have no perfect answer.

Together, these shifts form a three-lane framework of clinical intelligence – and it isn't about AI versus humans. It's about matching capability to complexity. Agentic autonomy and ambient observation will handle the operational and informational layers so that humans can focus on what actually requires human presence. AI taking on routine care won't diminish the human premium; it will clarify it. It will free human attention for the moments where being human is the point.

What this means for life sciences

The three lanes above describe a fundamental re-segmentation of healthcare. But for pharmaceutical and life sciences companies, the question is sharper: Where do we fit?

Agentic AI in market access and patient services: Payers are already deploying AI to deny claims at scale. Life sciences companies will need to respond in kind, building their own agentic capabilities in market access and patient services to counteract algorithmic denials, advocate for patients, and ensure access to therapy. The companies that build intelligent, responsive patient support systems – agents that can navigate payer algorithms, flag wrongful denials, and escalate appropriately – will protect both patients and revenue. But governance matters: the same discriminatory risks that plague payer AI will apply to any agent a pharma company deploys. The companies that succeed won't be the ones who deploy agents fastest – they’ll be the ones who build the oversight frameworks that prevent their own "hard lesson" moment.

The infrastructure for real-world evidence: We've been promising decentralised clinical trials for a decade. Readers are right to be sceptical. What will change in 2026 won't be the promise; it will be the infrastructure catching up. Wearables are finally ubiquitous. The regulatory frameworks (FDA's 2024 guidance on digital endpoints) are in place. And the AI to synthesise continuous passive data into clinically meaningful endpoints has finally matured. The "snapshot problem" – capturing patient data only at scheduled site visits – will begin giving way to continuous, objective measurement. Pharmacovigilance will transform from episodic reporting to real-time signal detection. The companies that build ambient data partnerships with health systems now will have an insurmountable advantage in evidence generation by 2028.

The strategic value of the field force: Physicians, like their patients, are drowning in AI-generated summaries and ambient data streams. They won't need someone to recite a detail aid; they'll need a human partner who can work alongside AI to synthesise and translate data into clinical meaning. Medical Science Liaisons (MSLs) and sales representatives will become more essential, not less, in the age of clinical intelligence – but only if they are equipped with predictive insights and freed from the administrative burden that currently consumes their days.

The companies that win in 2026 won't be the ones with the most AI pilots. They'll be the ones who understand which lane each function belongs in, and invest accordingly.

Into practice

In 2026, we'll begin to see the disappearance of AI as a separate thing in healthcare. We'll stop "using AI" the way we once "went online". Intelligence will become increasingly ambient, embedded, invisible – woven into the exam room, the EHR, the conversation.

The three lanes represent the beginning of a fundamental shift toward clinical intelligence in developed healthcare economies. Agents will handle the decisions that can be systematised. Ambient systems will watch what we cannot. And humans will do what only humans can do: connect, empathise, and exercise judgement in situations that defy algorithms.

I think about my father: the stress of post-stroke discharge planning, the missed staph diagnosis, but also the neurologist who showed up as a human when we needed them most. These shifts could eliminate the chaos, catch the sickness before it spreads, and make that human presence the rule, not the exception. That’s the future of healthcare I want for every family.

About the author

Tom Barry is an AI product owner at Novartis, where he delivers AI for biopharma commercialisation, including field force optimisation, predictive analytics, and customer engagement. He brings more than a decade of healthcare and life sciences experience, including five years leading enterprise digital transformation at Takeda. Barry is also co-host and executive producer of The Reality Gap, a podcast about the disconnect between professional expectations and workplace reality. All views expressed are his own.
