Coming soon: New EU Artificial Intelligence Act

Everyone is talking about AI these days – and regulators are taking notice. The Artificial Intelligence Act (AI Act), recently adopted by the European Parliament, marks a significant regulatory step in the oversight of AI technologies within the European Union (EU). Some of the AI Act’s compliance dates are set for as early as August 2024, and the Act is planned to be fully applicable by August 2026.

This landmark legislation offers a comprehensive framework for AI development and deployment, helping to ensure ethical use, safety, and transparency.

The EU AI Act’s implications extend across various economic sectors, including clinical research, where AI is increasingly utilised for tasks like medical image analysis, natural language processing for endpoint analysis, and generating and analysing data for synthetic control arms. According to the National Institutes of Health, AI is often used in oncology and is most often applied to patient recruitment.

How will the EU’s AI Act impact the implementation of software and systems used in clinical research? Here is what pharmaceutical companies and clinical research organisations (CROs) need to know to be prepared to fully – and safely – leverage this powerful technology, whether operating in the EU or globally.

An overview of the AI Act

The new Act categorises AI applications into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The last two, limited and minimal risk systems (e.g., AI in benign gaming apps and language generators), face fewer regulations, but must still meet standards to ensure ethical use.

Unacceptable risk AI systems are banned outright, while high-risk systems must comply with stringent requirements, including transparency, data governance, registration with the central competent authorities, and human oversight.

Key requirements for “high risk” AI systems

Many of the AI-based systems used in modern clinical trials will likely be considered high risk under the AI Act; examples include drug discovery software, study feasibility solutions, patient recruitment tools, and more. Below is a summary of the key requirements for “high risk” AI systems in clinical trials (for a complete list, refer to the full AI Act).

  • Transparency and explainability: AI systems must be transparent, meaning their decision-making processes should be explainable to healthcare professionals and patients, so that AI-driven determinations can be understood and trusted (Article 13).
  • Data governance: High-risk AI systems must implement robust data governance measures, including data quality management, ensuring the data used for training and operating these systems is accurate, representative, and bias-free (Article 10).
  • Human oversight: The AI Act mandates human oversight as integral to the deployment of high-risk AI systems. In clinical settings, this requires healthcare professionals’ involvement, ensuring AI recommendations are reviewed and validated by human experts (Article 14).
  • Accuracy and reliability: The Act requires rigorous validation and documentation processes to prove AI models can accurately and consistently simulate control group outcomes, endpoint analysis, and more (Article 15).
  • Ethical considerations: AI must consider ethical implications, particularly regarding data privacy and consent. This requirement is especially germane to participant recruitment. The AI Act emphasises that AI systems should be designed and used in ways that respect fundamental rights and values (Article 38).
  • Continuous monitoring: AI systems used in clinical trials must be continuously monitored to ensure they remain accurate and effective over time. This includes ongoing assessment and recalibration of AI models as new data becomes available (Article 61); a minimal monitoring sketch follows this list.
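
To make the continuous-monitoring requirement concrete, the sketch below shows one way a team might track a deployed model’s rolling accuracy on newly adjudicated cases and flag it for review when performance drifts. Everything here – the ModelMonitor class, the window size, and the 0.90 threshold – is a hypothetical illustration, not anything prescribed by the Act.

    # A minimal monitoring sketch; all names and thresholds are hypothetical.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class ModelMonitor:
        """Tracks a deployed model's accuracy on newly adjudicated cases."""
        alert_threshold: float                        # minimum acceptable rolling accuracy
        window: int = 100                             # number of recent cases to evaluate
        outcomes: list = field(default_factory=list)  # 1 = correct, 0 = incorrect

        def record(self, prediction, ground_truth) -> None:
            self.outcomes.append(1 if prediction == ground_truth else 0)

        def needs_recalibration(self) -> bool:
            recent = self.outcomes[-self.window:]
            if len(recent) < self.window:             # not enough data to judge yet
                return False
            return mean(recent) < self.alert_threshold

    # Hypothetical feed of (model prediction, human-adjudicated truth) pairs
    adjudicated_cases = [("positive", "positive"), ("negative", "positive")]
    monitor = ModelMonitor(alert_threshold=0.90, window=2)
    for prediction, truth in adjudicated_cases:
        monitor.record(prediction, truth)
    if monitor.needs_recalibration():
        print("Rolling accuracy below threshold: trigger human review")

Note that the ground-truth labels in such a scheme would come from the human review that the Act’s oversight requirement already mandates, which is one way these requirements reinforce each other.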

Potential impact on clinical research

Industry organisations are increasingly leveraging AI in various ways to improve study performance and streamline drug development. Here is how the AI Act potentially impacts the adoption and utilisation of these emerging tools.

1. Analysing medical images and medical histories

One of the most transformative clinical research applications of AI is in medical image/history analysis. AI algorithms can process vast amounts of imaging and medical chart history data to detect anomalies, identify disease markers, and assist in diagnosis and endpoint identification with remarkable accuracy and speed.

Under the AI Act, medical image and history analysis systems are considered high risk, due to their potential impact on patient health and safety. This categorisation also considers their impact on endpoint adjudication analysis, which ultimately drives regulatory approval determinations.

2. Developing synthetic control arms

The use of AI to generate data for synthetic control arms in clinical trials is another area poised for significant impact, and one likely considered high risk. Synthetic control arms (SCAs) use historical clinical trials, healthcare records, and real-world evidence to simulate a control group, reducing the need for placebo groups and accelerating trial processes. Many argue they are a safe and efficient way to leap forward with real-world evidence.
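
To illustrate the idea, the toy sketch below builds a control cohort by greedily matching each trial participant to the nearest unused historical patient on two baseline covariates. The covariates, values, and greedy nearest-neighbour approach are invented for illustration; real SCAs involve far richer statistical modelling and validation.

    # A toy sketch of one common SCA-style technique: nearest-neighbour
    # matching on baseline covariates. All data here is invented.
    import math

    # Hypothetical baseline covariates: (age, baseline biomarker level)
    trial_arm = [(62, 4.1), (55, 3.8), (70, 5.0)]
    historical_pool = [(61, 4.0), (48, 2.9), (69, 5.2), (56, 3.7), (75, 6.1)]

    def distance(a, b):
        """Euclidean distance across covariates (assumes comparable scales)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def match_controls(treated, pool):
        """Greedily pick the closest unused historical patient for each subject."""
        remaining = list(pool)
        controls = []
        for patient in treated:
            best = min(remaining, key=lambda c: distance(patient, c))
            remaining.remove(best)
            controls.append(best)
        return controls

    synthetic_control = match_controls(trial_arm, historical_pool)
    print(synthetic_control)  # [(61, 4.0), (56, 3.7), (69, 5.2)]

Even in this toy form, the regulatory questions are visible: the matching rule embeds assumptions about which covariates matter, and those assumptions flow directly into the simulated control outcomes.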

Regulatory agencies are pushing for the use of real-world evidence to accelerate approvals and reduce clinical trial complexity and cost. What happens, though, when AI technology ingests large datasets of real-world data and extrapolates what a hypothetical control arm of hypothetical patients would look like, yielding an aggregated, massive dataset (i.e., a synthetic control arm)? While this SCA is based on real data, the challenge lies in how to trust the AI’s assumptions.

Regulators must consider how to verify the data provenance and the determinations and assumptions the AI made to generate the control data, as well as the implications those assumptions have on the result – drug or device approval. The AI Act helps put guardrails around this while encouraging SCA innovation.

3. Identifying patients faster

AI is also revolutionising the identification of patients for clinical trials, an increasingly challenging process that is crucial to clinical research success. Many of today’s trials analyse biomarkers – they are required in half of all oncology studies today and in 16% of all others. And, while biomarkers make it easier to demonstrate that a drug hit its target, they make it tougher to find participants who meet narrow criteria, and they require more data collection before and during the trial.

AI algorithms can quickly analyse vast datasets, including electronic health records (EHRs) and genomic data, to identify suitable study candidates with greater precision and efficiency. Under the AI Act, patient identification systems are likely considered high risk due to their potential impact on patient health and privacy, so precautions must still be taken.
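
As a simple illustration of the mechanics, the sketch below applies rule-based inclusion criteria to structured EHR records. The field names, diagnoses, and criteria are hypothetical; real systems would pair this kind of structured filtering with NLP over clinical notes, with every match routed to a clinician for review rather than auto-enrolled.

    # A minimal sketch of rule-based eligibility pre-screening over
    # structured EHR data. Field names and criteria are hypothetical.
    records = [
        {"id": "P-001", "age": 64, "diagnosis": "NSCLC", "egfr_mutation": True},
        {"id": "P-002", "age": 41, "diagnosis": "NSCLC", "egfr_mutation": False},
        {"id": "P-003", "age": 58, "diagnosis": "SCLC",  "egfr_mutation": True},
    ]

    # Hypothetical inclusion criteria for a biomarker-driven oncology study
    criteria = [
        lambda r: r["age"] >= 18,
        lambda r: r["diagnosis"] == "NSCLC",
        lambda r: r["egfr_mutation"],        # biomarker-positive only
    ]

    candidates = [r["id"] for r in records if all(c(r) for c in criteria)]
    print(candidates)  # ['P-001'] -- flagged for clinician review, not enrolment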

The regulation heard (and felt) around the world

Similar to the EU General Data Protection Regulation (GDPR), the EU AI Act extends enforcement beyond the European Economic Area. It has potentially significant implications for any company doing business within the EU, particularly those marketing AI-driven clinical research products and services there. Non-EU companies must comply with the AI Act, too, if their AI systems are used in the EU market.

To prepare, non-EU companies should familiarise themselves with the Act and consider establishing an EU representative who can act as a liaison with EU regulatory bodies and oversee compliance.

The adoption of the AI Act by the European Parliament represents a pivotal moment in the regulation of AI technologies, particularly in high-stakes fields like clinical research. The Act’s emphasis on transparency, data governance, and human oversight aims to ensure safe and ethical use of AI, ultimately fostering greater trust and reliability in AI-driven clinical research. It’s likely this is just the beginning of AI regulation, so even companies not involved in EU business should take notice, as it may foreshadow future domestic policies.

Inside the EU or anywhere else in the world, now is the time to take proactive steps to understand the new regulations, so organisations can continue to leverage the transformative potential of AI while upholding the highest standards of safety, ethics, and efficacy.

5 steps to EU AI Act compliance

  1. Conduct an inventory and compliance assessment: List all current AI-enhanced or AI-supported systems and determine each system’s risk classification under the AI Act (a minimal inventory sketch follows this list). This audit should identify areas where existing systems may need upgrades or modifications to meet new regulatory requirements.
  2. Implement data governance protocols: Establish or enhance data governance frameworks to ensure the quality, representativeness, and security of data used in AI systems. This includes setting up processes for regular data audits and updates.
  3. Enhance transparency and explainability: Develop mechanisms to ensure AI systems are transparent and their decisions explainable. This may involve integrating user-friendly interfaces allowing healthcare professionals to understand and interpret AI outputs.
  4. Strengthen human oversight: Ensure AI systems are designed with robust human oversight mechanisms. This includes training healthcare professionals and researchers on how to effectively supervise and validate AI decisions.
  5. Ethical and legal training: Provide training for staff on the ethical and legal implications of using AI in clinical research. This helps ensure all team members know their responsibilities and the importance of AI Act compliance.
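
As a starting point for step 1, the sketch below shows one way to represent an AI system inventory using the Act’s four risk tiers and to flag high-risk entries for a gap assessment. The systems listed, the field names, and the review logic are hypothetical examples, not legal guidance.

    # A minimal inventory-and-classification sketch for step 1 above.
    # The tiers mirror the Act's four categories; everything else is invented.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystem:
        name: str
        purpose: str
        tier: RiskTier
        needs_review: bool = False

    inventory = [
        AISystem("recruit-match", "patient recruitment", RiskTier.HIGH),
        AISystem("site-chatbot", "FAQ assistant for site staff", RiskTier.LIMITED),
    ]

    # Flag every high-risk system for a gap assessment against the Act
    for system in inventory:
        system.needs_review = system.tier is RiskTier.HIGH
        print(f"{system.name}: {system.tier.value} (review={system.needs_review})")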
James Riddle