Boosting trust in AI: The essential foundation for breakthroughs in healthcare

The use of artificial intelligence (AI) in healthcare is growing rapidly and is already providing patients with better care and support. AI-boosted software is being utilised in drug development, symptom and illness prevention, and even early diagnostics, such as predicting which patients are at risk of becoming frequent users of emergency services. However, patients, healthcare professionals, and the general public still lack faith in the technology’s accuracy and reliability.
AI has the potential to revolutionise healthcare, but not without industry experts addressing specific privacy and security concerns, as well as those ingrained in medical industry knowledge and practices. Mistrust in new technology is deeply rooted in concerns about the management, ownership, and privacy of patients’ data, as well as the cybersecurity threats presented by AI. This extends beyond AI to the systemic inequalities that still need addressing within healthcare.
Without trust as a foundation, the healthcare industry cannot fully explore the potential benefits of AI-boosted technology, which ultimately means patients miss out on improved treatments and may suffer wider consequences. Education must come from experts who understand the technology and its impact on all stakeholders across the healthcare industry.
The potential to address medical knowledge gaps
Since the boom of AI, its usage has grown exponentially and spread to all areas of work, effectively embedding itself in our daily lives. For healthcare, companies now have much larger datasets and sources that are not only more accessible but also relevant to clinical practice. Combining this data with AI applications opens up a world of innovation, rapidly improving healthcare and research.
Organisations are already investing in AI across several use cases, such as diagnostics, drug development, and even genomics and precision medicine. Its potential to offer more personalised treatments is groundbreaking. Yet concerns over how data is used create industry-wide barriers to AI adoption across healthcare.
While AI can accelerate the rate at which existing research gaps are bridged, there are also risks associated with its implementation. AI experts and technology consultants can build greater trust in medical practices for underserved demographics and datasets, for example, but this is only effective if they also acknowledge the inherent risks beyond the technology itself, as well as industry-wide limitations in medical research.
Data management, ownership, and privacy
As an industry, healthcare collects and stores a significant volume of sensitive patient data that only continues to grow. This is a major source of patient mistrust; people are questioning where their data is saved, and for how long. Who has access to it, and what is it actually being used for? Previous data controversies, such as those seen with Care.data or the General Practice Data for Planning and Research scheme, have shown how a lack of public confidence can significantly inhibit innovation in healthcare.
Elsewhere, the digital rights group the Electronic Frontier Foundation (EFF) warned that women using period-tracking apps must ensure they know how their data is being used in the wake of the Roe v Wade ruling, since some apps shared data with third parties. Where data ends up is a key concern for public confidence, and since AI-boosted healthcare tools rely on datasets for analysis, addressing that concern is a necessity at these early stages of implementation.
Understanding and engaging in a conversation about AI and the future of care is vital. The management of patient data can be supported by AI models that prevent private and sensitive data from being exposed to the wrong people, and organisations must remain transparent about how they handle data and where AI and other tools play a part in its management and access.
AI as an aid in cybersecurity
To build patients’ trust in new AI-boosted technologies that rely on patient datasets, acknowledging the legitimate reasons for mistrust is key. Data security remains a pressing concern, with many hesitant to simply hand over their personal data, and for good reason. Healthcare is a highly targeted industry, too, with 92% of healthcare organisations experiencing a cyberattack in 2024.
The digitalisation of the NHS only broadens the potential attack surface for cyber-attacks and data breaches, so organisations must invest in robust cybersecurity measures to uphold patient and staff trust. Healthcare data breaches are a major cause of distress for patients and industry professionals alike, but AI may also hold the answer to safeguarding patient data even more robustly.
AI’s ability to analyse databases and recognise patterns is invaluable in cybersecurity applications. It can identify unusual activity that could indicate a breach, adapt and learn from new threats, and even prevent attacks by analysing potential weaknesses and recommending measures to address them. Positioning AI as an ally to data protection, rather than a cybersecurity threat, is crucial to inspiring confidence in the technology and its security.
Building trust in the future of healthcare
The first step is understanding that communication and transparency from experts are critical enablers of widespread confidence in AI. Trust must be a factor when designing and regulating AI tools, when training professionals in their use, and when engaging the public on the resulting trade-offs.
Learning from and addressing the key causes of mistrust that prevail in healthcare is imperative, especially when it comes to historically underserved demographics; it is a necessary step before trust in newer technologies can even be considered. Industry practices must actively focus on identifying and removing biases from AI datasets; otherwise, AI cannot be presented as a solution to overcoming medical knowledge gaps. Once these foundational steps are taken, however, its potential for a positive impact on the healthcare industry is huge.
About the author
Tyler Fletcher is the executive vice president of healthcare data, analytics and AI, M&A, and consulting at GlobalData. Before joining GlobalData, Fletcher spent nearly 10 years at Decision Resources Group, part of Clarivate, and is currently a senior advisor at TEAMFund.