Considering current legal risks from the use of AI in the life sciences sector

AI, the law, and the AI Act

The use of artificial intelligence (AI) in the life sciences sector is increasing, and its applications may be transformative.

AI is currently used in drug discovery, diagnostics, clinical care, health systems management and planning, public health surveillance, and disease control and prevention, as well as in general business applications such as HR and accounting.

AI has already been used to predict the structures of almost all known proteins, which may significantly advance the understanding of diseases; drugs have been successfully repurposed using AI; and there have been promising results in trials of AI cancer screening tools.

Key risks and considerations

With great opportunity comes great responsibility. Businesses implementing AI should be aware of its risks, particularly in the life sciences sector, where there may be catastrophic and wide-ranging implications if AI systems malfunction or, worse, ‘go rogue’.

One of the main concerns is the potential safety risk if an AI system issues incorrect recommendations, together with the lack of clarity as to where accountability should rest in that event. Privacy and intellectual property law also raise significant questions in relation to the collection and use of data to feed AI systems.

The potential discriminatory effects of AI systems working with unrepresentative or ‘biased’ data are another important consideration, as are the potential for AI to accentuate global disparities in health outcomes and, finally, the evident concern that AI could bypass robust safety procedures.

Regulation of AI

The above concerns are reflected in plans for increased regulation of AI, although there is no consistent global approach. The UK Government is currently taking a principles-based approach, rather than (at least for now) passing specific laws to regulate the development and use of AI. This is broadly consistent with the US position. This contrasts with the approach of other jurisdictions, which favour specific AI regulation. The impending EU AI Act is arguably the most significant item of AI regulation currently on the table.

EU AI Act

The final text of the EU AI Act (the “Act”) is expected to be officially published in April 2024. The Act will enter into force 20 days after its publication, with most of its provisions applying two years later, although certain measures will apply at earlier or later dates. For example, certain ‘unacceptable risk’ AI practices will be banned from six months after the Act enters into force.

Like the GDPR, the Act has a wide territorial application that could result in it becoming a global ‘gold standard’: it covers, for example, AI systems placed on the EU market, as well as systems whose outputs are used in the EU.

Key implications of the Act for the life sciences sector include, firstly, that “AI systems and models specifically developed and put into service for the sole purpose of scientific research and development” are excluded from its scope.

Secondly, it is important to note that some uses of AI in the life sciences sector will be classed as ‘high-risk’, meaning enhanced requirements will apply to developers and users of such systems. For example, AI systems intended to be used as safety components of medical devices or in vitro diagnostic medical devices, or that are themselves such devices, will be classified as ‘high-risk’, with some exceptions. Failure to comply with obligations in respect of high-risk systems can result in fines of up to EUR 15 million or 3% of annual worldwide turnover, whichever is higher.

Finally, where a product is subject to both the Act and existing product safety legislation (as is the case with medical devices, for example), both must be complied with. Some flexibility may exist, for example in how conformity assessment procedures are carried out.

Guiding principles

The use of AI systems may become increasingly ‘mission-critical’ to businesses. Awareness and mitigation of the risks that the use of AI systems presents, including evolving regulatory risks, may become correspondingly important. The following questions provide a starting point for businesses evaluating their use of AI:

What AI is being used?

Do we understand what AI is being used in the business? Many existing technologies may fall within a broad definition of AI, such as the one set out in the Act. Do we understand how the AI ‘works’, and can this be explained to people using it or who might be affected by its decisions? If not, there may, for example, be risks that an event which gives rise to liability goes undetected for a significant period, or that the system is mistakenly used in a way which gives rise to liability.

Why is the AI being used?

Does the use of AI provide defined benefits, and has the AI undergone testing to a level where there is confidence that it will deliver those benefits?

Can the use of AI be justified?

Are the potential benefits of AI outweighed by the harms that could result from its use? In particular, it may be necessary to consider in some detail the privacy and data protection implications of using AI systems, which are under increasing regulatory scrutiny. Conducting due diligence on how personal data has been obtained and what de-identification measures have been applied may become important in AI procurement. Strong privacy safeguards, including cybersecurity measures, may also need to be considered when using AI, especially where this involves feeding ‘special category’ medical personal data into AI systems.

Are we aware of the risks, and what steps can we take to mitigate these?

As well as the risk that the use of AI systems could give rise to claims from individuals affected by their outputs, or to data protection issues, the use of AI systems might also create legal risks of breaching, for example, IP law or equality legislation. Exposure under AI-specific regulation, such as the Act, may also need to be taken into account. Commercial risks associated with over-reliance on a system may also need to be managed.

How will we ensure responsible use on an ongoing basis?

How will we document appropriate use of the AI system, and what policies and training will be implemented to ensure responsible use?

Conclusion

To summarise, the use of AI has clear benefits, but it should be approached with caution. Given the ever-evolving nature of the technology, a business’s internal processes and compliance reviews will need to remain dynamic to ensure compliance with all relevant legislation and regulation.

About the authors

Gustaf Duhs is a partner and head of the competition and regulatory practice at UK law firm Stevens & Bolton LLP.

Jeremy Kelly is an associate and regulatory specialist at UK law firm Stevens & Bolton LLP.

Jessica Gregson is a trainee at UK law firm Stevens & Bolton LLP.