10 legal risks to consider when implementing AI in pharma supply chains
Life sciences companies around the globe are grappling with whether and how to deploy generative artificial intelligence (GenAI) tools, including to streamline company processes.
Documenting and visualising the flow of materials, information and money throughout supply chains can help improve efficiency and profitability, enhance decision making, and improve customer satisfaction.
However, implementing a new GenAI tool requires careful planning and consideration from a commercial, technical and legal perspective. In this article we consider ten key regulatory and legal issues that companies should consider when doing so.
Security
The large amounts of sensitive commercial data in a supply chain (customer data, financial information, IP, etc.), combined with the possibility of causing significant disruption, make supply chains an enticing target for cyberattacks.
Contracts with AI system providers should include robust cybersecurity measures such as encryption and access controls, and companies should build in regular security audits, in order to protect sensitive data and avoid system disruption.

Many GenAI vendors will seek broad rights to use the prompts that their customers enter into GenAI tools, including for use in improving the GenAI tool, training future GenAI tools, or providing services to other customers.
Depending on the nature of the GenAI tool, the prompts entered into GenAI tools may contain company trade secrets, protected health information, or other sensitive data, so it may be appropriate to restrict the GenAI from storing and using those prompts for any purpose other than to provide the necessary service to the customer.
Even where a GenAI vendor is given rights to use customer prompts for other purposes, the confidentiality of those prompts should be maintained so that portions of those prompts are not incorporated into future outputs for other customers.
Data protection
Most companies are sensitive to the risks of failing to comply with data protection rules in their day-to-day business, but ensuring that AI tools process personal data lawfully poses challenges that need to be assessed and documented, including the security considerations discussed above.
Given the uncertainties and risks associated with AI, EU and UK regulators are likely to require full data protection impact assessments if AI systems are used to process personal data. Affected individuals should be informed how their data is being used and be able to exercise the rights given to them by the data protection legislation.
Companies should carefully consider the types of personal data processed by their AI systems and implement appropriate data protection measures to ensure compliance with applicable data protection laws. If third parties are involved in the processing of personal data (e.g., GenAI vendors), such processing must be governed by detailed data protection agreement provisions and related warranties, representations and indemnification.
Impact of bias and errors
AI systems make decisions based on statistics and probability, and so can be wrong or display bias even when trained on accurate and representative data.
In a supply chain context, incorrect decisions can pose a variety of legal risks: simple breach of contract if products are delivered late; regulatory issues if a company is found to have breached its obligation to ensure appropriate and continued supplies of a medicinal product or to comply with national safety stock requirements; and competition issues if an AI system discriminates between customers or facilitates price fixing.
Companies should ensure that they clean the data sets that will be used to train the GenAI system, implement robust testing and monitoring procedures to identify and mitigate biases, and develop contingency plans to address potential errors or incorrect decisions.
Product liability
With the EU’s Product Liability Directive being extended to algorithms and other software (and a potential AI Liability Directive under debate), AI developers may face product liability claims if their AI tools are found to be defective. They will likely seek contractual protections from their customers against these risks, but customers should also ensure that the AI companies hold appropriate insurance cover for direct claims against them and confirm their compliance with relevant regulatory requirements.
In turn, customers should carefully assess the potential liability risks associated with the AI systems they are implementing, consider purchasing appropriate insurance coverage, and assess whether use of the AI system may impact regulatory compliance.
Promotional materials
Ensuring that marketing materials comply with the strict rules around promotion of medicinal products is a resource-intensive process for many life sciences companies, and one that is ripe for streamlining using AI.
However, companies will remain liable for any unsupported or misleading claims included in AI-generated content, or claims not spotted in AI-reviewed materials. It will still be necessary to ensure that materials go through the relevant certification processes and that there is proper (and accurate) substantiation of the claims made.
Therefore, careful consideration should be given to the design and implementation of AI systems for this purpose, including appropriate validation and human oversight.
Contractual protections
GenAI vendors will typically seek to include contractual protections in order to limit their liability to users of their systems. Conversely, users will want to ensure that the vendor they are engaging has sufficient ‘skin in the game’ to incentivise good performance, while not taking on so much exposure across multiple users as to pose a solvency risk.
As GenAI systems may have multiple developers, all of whom may modify or fine-tune the system from time to time, establishing who in the AI ecosystem should be liable for particular issues, and to what extent, is not always straightforward. Similarly, users of an AI system should consider whether and to what extent they should limit their own contractual liability to customers and other downstream users as a result of any failures in the AI system.
Carefully drafted contracts are essential to manage risks and allocate responsibilities between the parties involved in the development and use of AI systems. Given the number of lawsuits that have been filed challenging the training and development of GenAI tools, it is important for customers to ensure that GenAI vendors provide robust representations and warranties regarding the accuracy and reliability of their GenAI tools and the vendors’ right and ability to provide their GenAI tools. Those representations and warranties should be backed up by adequate indemnities and other remedies in the event of a breach.
Customers should ensure that any indemnities or other remedies offered by GenAI vendors are not capped by limitations of liability (or are subject to a reasonable cap given the magnitude of the potential exposure) and are not subject to so many exclusions as to be ineffectual in addressing the most likely scenarios. It is also important to ensure that the GenAI vendors are aware of the use of the GenAI tools within the medicinal product supply chain and it is clear between the parties where responsibility for regulatory compliance falls.
Regardless of the remedies offered by the GenAI vendor, life sciences companies should develop robust backup plans in the event any GenAI tool becomes unavailable, whether because of a third-party lawsuit, a change in the law, a widespread technological outage (such as the one caused by the recent CrowdStrike update), or other force majeure.
Contractual certainty
GenAI tools may be offered under different terms – e.g., “free”, “consumer”, or “enterprise” terms – with varying levels of protection for the customer. These terms often incorporate online terms and policies that vary over time. Companies should ensure that they are provided with copies of these terms and any subsequent iterations of them and ensure that their negotiated terms control in the event of any inconsistencies.
Some GenAI tools are built on third-party foundation models not controlled by the vendor. These models are subject to their own terms and conditions, so it is critical that life sciences companies understand: whether a GenAI tool being considered for deployment was built on a third-party foundation model; if so, what terms and conditions govern use of that foundation model; whether the GenAI tool terms being offered are consistent with the foundation model terms and conditions; and whether the GenAI vendor has developed and is deploying the GenAI tool in compliance with those terms and conditions.
Intellectual property
GenAI systems may generate new intellectual property, including patentable innovations and copyright in AI-generated content.
While there is some debate over whether such output is patentable or copyrightable under current standards, life sciences companies should still be clear as to what rights they and the GenAI vendor, respectively, have to use that output, and include clear contractual provisions addressing the ownership, prosecution, and enforcement of intellectual property generated by GenAI systems.
At a minimum, customers should ensure that they have sufficient rights to use the GenAI output for any intended uses, whether or not they “own” the outputs.
Evolving regulatory regime
Companies in the life sciences sector are accustomed to operating in a heavily regulated environment, but the speed of development of AI tools, and the corresponding attempts to regulate them, are unprecedented. In the life sciences space, AI regulation applies over and above the regulation already in place across the supply chain.
Given the potential for significant fines arising from non-compliance (the new EU AI Act, for example, sets out fines of up to 7% of global turnover, although this level of fine is unlikely to apply to use of GenAI in supply chains), as well as the risk of reputational damage, companies should monitor the regulatory environment and require their GenAI vendors to: confirm compliance with the relevant regime; inform them of any audits or investigations; provide any necessary information and assistance to enable them to respond to any audits or investigations; and, where possible, allow them to coordinate the GenAI vendor’s responses.
Transparency
Should users of AI tools tell their customers and stakeholders about their use of AI? Beyond data protection transparency and disclosure requirements (e.g., around automated decision-making), and broader ethical and regulatory expectations of transparency, publicly listed companies should consider whether the risks associated with their use of AI represent material risks that need to be disclosed to investors.
In-house and external legal teams play a key role in mitigating these various risks and challenges by, for example:
- Reviewing how GenAI is used within the supply chain, who the providers are and what the company's role is, and how the tools may be regulated;
- Developing and updating thorough GenAI policies to control how and when GenAI tools are used in the business;
- Training relevant personnel on the risks associated with using GenAI tools;
- Conducting thorough due diligence and risk assessments of proposed GenAI vendors and tools before deploying/using them;
- Updating model form contracts and processes used by procurement functions, as well as data privacy and security arrangements, to reflect the above risks; and
- Staying up to date on the latest regulations and guidelines in this fast-moving area.
About the author
Ewan Townsend, partner at Arnold & Porter
Ewan Townsend's practice focusses on the commercial transactions involved in the development, exploitation and commercialisation of medicinal products, representing many of the world's most sophisticated biotechnology and pharmaceutical companies.
His experience includes structuring, drafting and negotiating a wide range of commercial agreements including licence and collaboration agreements, manufacturing, distribution and supply agreements, clinical trial agreements, services agreements and co-promotion/co-development arrangements.
This article appears in Deep Dive: Patients & Partnerships 2024.