Complex rules for AI in healthcare could put UK patient safety at risk


The UK is investing heavily in artificial intelligence for healthcare, but faces several ‘pain points’ in its quest to make greater use of the technology, not least the rules currently in place.

That was one of the conclusions of a wide-ranging new report from NHSX looking at how best to support safe, data-driven AI innovation in healthcare.

One of the areas in need of development is the current governance framework for AI technologies, whose complexity is “perhaps limiting innovation and potentially risking patient safety”.

The report, Artificial Intelligence: How to get it right, also has some preliminary results from a State of the Nation survey of digital health developers that was carried out earlier this year.

This found that most of the participating companies were unaware of what commercial arrangements they had in place to gain access to patient data, and around half of developers did not seek ethical approval at the beginning of the development process.

“This is in part due to a lack of awareness: almost a third of respondents said they were either not developing in line with the Code of Conduct or were not sure. The main reason given for this was ‘I was unaware that it existed’,” the report said.

Furthermore, half of the developers surveyed said they were not intending to have their tools certified as a medical device by seeking a CE Mark, with most saying it would not be applicable to their technology.

“This may be the result of a general misunderstanding as it is unclear in many cases whether or not ‘algorithms’ count as medical devices. This lack of certainty may even increase with new guidance coming into force in May 2020 and May 2022. A greater degree of clarity is required regarding the regulatory requirements for ‘real AI’,” the report said.

Introducing the report, health secretary Matt Hancock and innovation minister Baroness Blackwood said: “While the opportunities of AI are immense, so too are the challenges. Much of the NHS is locked into ageing technology that struggles to install the latest update, never mind the latest AI tools, so we need a strong focus on fixing the basic infrastructure.

“As a society we need to agree the rules of the game. If we want people to trust this tech, then ethics, transparency and the founding values of the NHS have got to run through our AI policy like letters through a stick of rock.”

The new report sets out the policy work aimed at ensuring this happens, and underpins the UK’s forthcoming £250 million NHS AI Lab. Announced in August, the Lab will use the technology to improve areas such as the detection of diseases and to automate administrative tasks, freeing up staff to care for patients.