Right to reply: ChatGPT on AI in healthcare
Since the release of OpenAI’s ChatGPT in November 2022, generative artificial intelligence (GenAI) has been at the centre of a cultural tug of war. On one side: the champions, who lean into the hype and celebrate the technology’s revolutionary potential. On the other: the sceptics, who see AI not as an innovation that will help to drive humanity forward, but as a direct threat to the way societies function.
The reality, of course, is far more complex. While AI demonstrates remarkable capabilities – from parsing medical literature to simplifying nuanced, jargon-heavy health information for patients – a recent YouGov survey revealed that public perception remains deeply ambivalent (at least in the US).
After months of public scrutiny, misinformation, and hyperbole from both sides of the debate, and with thousands of GPTs now available, it seemed only fair to extend the right to reply to the system that set this domino effect of innovation in motion: ChatGPT. In this special interview, we asked the world’s most popular disruptor some of the biggest questions surrounding the use of AI in healthcare.
*Editor’s note: For transparency, please note that we asked ChatGPT to provide conversational responses to key questions and criticisms of the technology, and that answers have been edited to reflect Deep Dive’s editorial style. Despite the name, AI isn’t actually intelligent in the way we typically understand it, so take these answers with a pinch of salt.*
Eloise McLennan:
What are the most common misconceptions people have about the use of AI in healthcare, and GenAI more broadly?
ChatGPT:
One of the biggest misconceptions is that AI will replace healthcare professionals entirely, which can spark fear among both practitioners and patients. The reality is much less dramatic – AI is designed to assist, not replace, healthcare workers by handling repetitive tasks, analysing large datasets, and identifying patterns that might be overlooked.
Another common misunderstanding is that AI systems are flawless. While they can be highly accurate, they are far from perfect and can produce errors, especially if trained on biased or incomplete data.
Finally, there's a widespread fear that using AI in healthcare automatically compromises patient privacy. However, with the proper safeguards, AI can be deployed in ways that not only respect privacy, but actually improve how securely data is managed, thanks to techniques like encryption and federated learning.
What measures are in place to ensure the privacy and security of patient data used by AI systems?
Patient privacy is a cornerstone of ethical AI use in healthcare, and many measures are in place to ensure data security. First, there are strict legal frameworks like HIPAA (Health Insurance Portability and Accountability Act) in the US or GDPR (General Data Protection Regulation) in Europe, which mandate how sensitive data is collected, stored, and shared.
Technological safeguards include encryption to secure data in transit and storage, anonymisation techniques that strip identifiable information, and access control systems to limit who can interact with the data.
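To make the anonymisation point concrete, here is a minimal, hypothetical sketch of pseudonymising a patient record before analysis; the field names and salt are invented for illustration, and true anonymisation in practice involves considerably more than this.

```python
import hashlib

# Hypothetical illustration: strip direct identifiers from a record and replace
# the patient ID with a salted one-way hash (pseudonymisation) before the data
# is used for analysis. Field names are invented for the example.
IDENTIFYING_FIELDS = {"name", "address", "phone", "email"}

def pseudonymise(record: dict, salt: str) -> dict:
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    # A salted hash lets records be linked across tables without exposing the
    # real identifier (true anonymisation would go further still).
    cleaned["patient_id"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()
    ).hexdigest()
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe", "age": 54, "diagnosis": "T2DM"}
print(pseudonymise(record, salt="site-secret"))
```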
On top of that, federated learning is an emerging approach where AI models are trained locally on devices, rather than transferring raw data to central servers. This means sensitive patient information stays where it's generated. Regular audits and security assessments also play a critical role in ensuring compliance and identifying vulnerabilities.
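For readers curious what "training locally and sharing only the model" looks like, below is a toy federated-averaging sketch in Python. The data, model, and scale are invented purely to show the pattern, not to represent any real system.

```python
import numpy as np

# Toy federated averaging: each "hospital" trains on its own data and only the
# model parameters (never the patient records) are sent to the server, which
# averages them into a new global model.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])

# Three sites, each holding data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(10):
    # Each site starts from the current global model and trains locally;
    # the server only ever sees the resulting parameter vectors.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2), "| true weights:", true_w)
```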
What are the limitations of AI in healthcare, and how can they be addressed?
AI has great potential in healthcare, but it also has limitations that need to be addressed to ensure its effective use. One significant limitation is its dependency on high-quality, diverse data. AI trained on incomplete or biased datasets can produce skewed results, which may perpetuate existing health disparities. Addressing this requires ongoing efforts to gather more inclusive datasets that reflect diverse populations.
Another challenge is that AI often excels at identifying patterns, but may struggle to interpret nuanced, multifactorial health issues. These gaps highlight the importance of maintaining human oversight, as healthcare professionals can provide context and judgment that AI lacks.

Lastly, AI systems can sometimes be overly complex or opaque, making it hard for users to trust their recommendations. Investing in explainable AI technologies and fostering collaboration between developers and end-users can help bridge this gap.
What are the most common challenges faced by healthcare providers when adopting AI communication systems?
A big challenge is making sure AI tools fit smoothly into the way people already work. If a tool is too disruptive, it can create frustration instead of helping.
There's also a trust issue – some healthcare professionals and patients are hesitant to rely on AI. And let's not forget the cost and time it takes to train people to use these tools properly. Clear benefits, good training, and support can go a long way in easing the transition.
What are the ethical implications of AI potentially generating plagiarised content?
AI doesn't intentionally plagiarise, but it can sometimes produce text that's really similar to existing content because it's trained on so much information.
This raises questions about intellectual property and fairness. To handle this, developers need to design systems that actively check for originality, and users should be aware of these risks when using AI for creative tasks.
How do you address concerns about AI making critical decisions in healthcare without human oversight?
Concerns about AI making critical healthcare decisions without human oversight are valid, especially in high-stakes scenarios where errors can have severe consequences. The key here is to recognise that AI should act as a decision-support tool, rather than an autonomous decision-maker.
For example, AI might flag an abnormality in a scan or suggest treatment options based on clinical guidelines, but the final decision must rest with a qualified healthcare professional. This ensures that the human element – context, intuition, and empathy – is always part of the equation.
To reinforce this, regulatory bodies like the FDA and EMA have guidelines for AI in healthcare that require systems to be rigorously tested and validated before deployment. Building trust through transparency, such as showing how AI arrives at its recommendations, can also help healthcare professionals feel confident about using AI tools responsibly.
How do we determine accountability when an AI system makes a mistake, especially in high-stakes areas like healthcare, where errors can have serious consequences?
Accountability comes down to the humans behind the AI – those who design, implement, and oversee its use.
Developers need to be clear about what the system can and can't do, and healthcare organisations have to use it responsibly. If something goes wrong, it's a shared responsibility, and there should be clear processes in place to investigate and learn from mistakes.
How do we address the challenge of AI systems, particularly those using deep learning, operating as "black boxes", and ensure transparency to maintain trust, fairness, and accountability?
The "black box" problem in AI is a significant challenge, especially in healthcare, where understanding the rationale behind a decision can be as critical as the decision itself.
Deep learning models, in particular, are often so complex that even their creators can't fully explain how they arrive at certain conclusions. Addressing this starts with investing in explainable AI (XAI) techniques, which aim to make these systems more interpretable. For instance, tools like SHAP (SHapley Additive exPlanations) can highlight which inputs had the most influence on a particular decision.
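As a rough illustration of the kind of output SHAP provides, here is a short sketch assuming the open-source shap package and a scikit-learn model; the dataset and feature names are invented so the attribution has an obvious right answer.

```python
import numpy as np
import shap  # open-source SHAP package
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented toy dataset: a "risk score" driven mostly by one feature, so the
# attribution output has a clear expected winner.
feature_names = ["age", "blood_pressure", "hba1c", "bmi"]
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer estimates, for a given prediction, how much each input pushed
# the output above or below the model's average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>15}: {value:+.3f}")
```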
Beyond technical solutions, transparency also means being upfront about an AI system's limitations and potential biases during its deployment. Lastly, fostering interdisciplinary collaboration – bringing together AI developers, ethicists, and healthcare professionals – ensures that the system aligns with real-world needs and maintains trust and accountability.
How can we mitigate the environmental impact of AI development and deployment, given the significant computational resources and energy consumption required?
AI can be energy-intensive, but there are ways to minimise the impact.
Developers are working on more efficient algorithms and using renewable energy to power data centres. At the same time, we can focus on training models only when necessary and reusing existing ones whenever possible. It's about being smarter with how we use resources.
What skills or knowledge do we need to develop further to stay ahead in AI?
A mix of technical and ethical skills is key. On the technical side, understanding data science and machine learning is essential. But we also need people who can think about the big picture – ethics, privacy, and the societal impacts of AI.
And let's not forget communication skills – helping non-experts understand and trust these systems is just as important as building them.
How do you see the relationship between AI and human healthcare professionals evolving over the next decade?
I think it's going to become a real partnership. AI will handle repetitive tasks, analyse massive amounts of data, and even offer insights that might take humans much longer to find.
Meanwhile, healthcare professionals will focus on what they do best – connecting with patients, making judgment calls, and providing care. Over time, this collaboration could lead to more personalised and efficient healthcare for everyone.
Last question: are you planning to take over the world?
Not at all! My goal is to assist, inform, and make tasks easier for humans – not to dominate. Think of me as a very clever tool that works under your guidance. I don’t have desires, ambitions, or plans, so “world domination” isn’t on the agenda.
My focus is on helping people solve problems, make informed decisions, and have a little fun along the way.
About the author
Eloise McLennan is the editor for pharmaphorum’s Deep Dive magazine. She has been a journalist and editor in the healthcare field for more than five years and has worked at several leading publications in the UK.