AI is having a productive time in pharma and healthcare
Two years ago the use of AI in pharma and healthcare looked to be quickly heading for what Gartner’s Hype Cycle model would term the Plateau of Productivity. After the COVID-powered digital transformations of both pharma and healthcare, there can be little doubt that artificial intelligence is already having a productive time across our sector.
It could even be, as consultants GlobalData predicted back in January, this year’s most disruptive technology across the pharmaceutical industry, though if this promise is to be realised there is much more work to be done on its ethical implications.
FDA approvals
Since 2019, FDA approvals of AI-based healthcare products have included a wearable monitor, two diagnostic alerts for collapsed lungs and cardiac ultrasound software.
And just last week the US regulator cleared the first AI device to help identify colorectal lesions during the millions of colonoscopies that are carried out each year in the country.
Cosmo Pharma’s GI Genius bundles hardware and software that overlays the images from an endoscope camera with green marker squares – accompanied by an alert sound – when the AI detects a potentially abnormal area of the rectum or colon.
The clinician conducting the colonoscopy can then choose to carry out further assessment, and remains the judge of whether a lesion is concerning and what action to take.
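For readers curious about the mechanics, the sketch below shows in broad strokes how a computer-aided detection (CADe) overlay of this kind can work: a detector flags suspect regions in each video frame, the system draws a green marker square around them and an alert sounds. This is a minimal, hypothetical illustration in Python using OpenCV, not Cosmo Pharma’s GI Genius implementation; the detect_lesions function is a stand-in for a trained model.

```python
# Illustrative sketch of a CADe-style overlay -- NOT the GI Genius implementation.
# A hypothetical detector flags regions in each frame; green squares are drawn
# over the live feed and an alert is emitted when anything is flagged.

import cv2  # pip install opencv-python


def detect_lesions(frame):
    """Hypothetical stand-in for a trained detection model.
    Returns a list of (x, y, w, h) boxes for potentially abnormal regions."""
    return []  # placeholder -- no real model in this sketch


def annotate_frame(frame, boxes):
    """Draw a green marker square around each flagged region."""
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame


def run_overlay(video_source=0):
    """Read frames from a video source (a webcam stands in for the endoscope
    feed), run detection, and display the annotated frames."""
    capture = cv2.VideoCapture(video_source)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        boxes = detect_lesions(frame)
        if boxes:
            print("\a", end="")  # terminal bell as a stand-in for the alert sound
        cv2.imshow("CADe overlay (sketch)", annotate_frame(frame, boxes))
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    capture.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_overlay()
```

In a real device the detection model, alert behaviour and display pipeline would of course be engineered and validated to medical-device standards; the point here is simply that the system highlights regions for the clinician rather than making the call itself.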
The market value of these kinds of disruptive healthcare technologies is striking, with Cosmo Pharma chief executive Alessandro Della Chà suggesting that the worldwide opportunity for AI in colonoscopy alone could be worth “at least $1.1 billion”.
NHS use continues to grow
There are similarly high-level moves towards the adoption of artificial intelligence across the health service in the UK, where NHS innovation arm NHSX receives annual investment of more than £1 billion to lead the digital transformation of health and social care.
Sitting within NHSX is the new NHS AI Lab, launched in 2020 after trials that built on existing work by the NHS’ Digital Pathology and Imaging Artificial Intelligence Centres of Excellence.
The AI Lab aims to support the development and scaling of promising AI health and care solutions, identify where the use of artificial intelligence could be most practical and provide guidance and evidence of good practice to industry and commissioners.
Ensuring good practice, in health as elsewhere, will be key, and the AI Lab last month launched an AI Ethics Initiative. It’s an explicit acknowledgement of the risks of the technology exacerbating existing health inequalities as the adoption of AI across health and care accelerates.
Dr Indra Joshi, director of AI at NHSX, and Brhmie Balaram, NHSX’s head of AI research and ethics, noted that such inequalities aren’t inevitable and that, if they are proactively addressed, the potential of artificial intelligence in health and care can be realised.
They said: “We are inspired by the philosophy of 'Nothing about us without us', which comes from the disability rights movement, and conveys the need to directly involve patients and the public in the process of adopting AI within health and care.
“Our intention is to be patient-centred, inclusive, and impactful as we strive to integrate ethics into the AI life cycle, and we will support projects that can demonstrate these values.”
Ethics: An ongoing dialogue
NHSX is far from the only body considering the ethical implications of AI, and just this week the European Commission (EC) released its final proposal for an Artificial Intelligence Act.
This would create the first legal framework on artificial intelligence, with the EC hoping it would position the region to play a leading role in global use of the technology, and health is one of the “high-impact sectors” where it says action is needed.
It also highlights healthcare as one of the key areas where artificial intelligence could provide competitive business advantages and improved social and environmental outcomes by “improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations”.
The EC’s proposals are likely to raise as many questions as they answer, at least for now. But, just as the practical possibilities of AI in health are impossible to ignore, so the ethical implications of the technology for all stakeholders have to be addressed.
About the author
Dominic Tyer is a journalist and editor specialising in the pharmaceutical and healthcare industries. He is currently pharmaphorum’s interim managing editor and is also creative and editorial director at the company’s specialist healthcare content consultancy pharmaphorum connect. Connect with Dominic on LinkedIn, Twitter or Instagram.