'Trust gap' is holding back AI in healthcare


The potential for artificial intelligence to help tackle issues like staff shortages, skyrocketing costs, and systemic inefficiencies in healthcare systems is being held back by patient scepticism and physician concerns about bias and liability.

That is the top-line message from the latest edition of the Future Health Report (PDF) published by Dutch tech giant Philips, which examines how AI could empower healthcare professionals (HCPs) to "deliver better care for more people."

The report – based on surveys of HCPs and patients across 16 countries – paints a picture of health systems creaking under the strain of rising demand: in more than half the countries surveyed, patients can wait almost two months or more for specialist care, and the average global wait time to see a specialist now stands at 70 days.

Meanwhile, nearly a third of patients reported their health deteriorated because they couldn't see a doctor or specialist in time, and more than one in four ended up in hospital as a result of delays.

At the same time, HCP respondents reported that slow adoption of AI is contributing to missed early interventions (46%), worsening burnout (46%), and deepening patient backlogs (42%).

"AI has emerged as a powerful accelerator – and perhaps our most compelling opportunity – to meet rising healthcare demands as populations age," according to the report, which suggests it could not only transform administrative tasks but also diagnose diseases more precisely, reduce avoidable hospital readmissions, and improve patient outcomes.

However, trust gaps among clinicians and patients are threatening to slow the adoption and impact of AI. Among HCPs, two-thirds said they were optimistic about its potential to improve patient outcomes, but fewer than half (48%) of patients felt it would improve healthcare overall.

All told, 77% of patients said they were comfortable with AI being used in their treatment, and 83% in diagnosis – with trust significantly higher for back-office uses such as admin and scheduling.

Meanwhile, a high proportion of doctors (85%) said they were unsure about the legal liability that may follow from using AI to assist healthcare decisions. Data bias was cited as another major worry, amid concern that it could deepen healthcare disparities if left unaddressed.

"To realise the full potential of AI, regulatory frameworks must evolve to balance rapid innovation with robust safeguards to ensure patient safety and foster trust among clinicians," said Shez Partovi, Philips' chief innovation officer.

"By 2030, AI could transform healthcare by automating administrative tasks, potentially doubling patient capacity as AI agents assist, learn, and adapt alongside clinicians," he added.

"To that end, we must design AI with people at the centre – built in collaboration with clinicians, focused on safety, fairness, and representation – to earn trust and deliver real impact in patient care."