What US healthcare companies and medical researchers need to know about Italy's new AI law
Italy-US collaboration in life sciences is accelerating, supported by initiatives such as the Italy-US Tech Business & Investment Matching Initiative, which promotes cross-border innovation in sectors including life sciences and AI. Therefore, Italian regulatory developments are increasingly relevant for US pharma, biotech, and medtech companies operating in Italy or partnering with Italian institutions.
So, it was significant when, in September 2025, Italy became the first EU Member State to enact comprehensive AI legislation, Law No. 132/2025 (“Italy’s AI Law”).
Although this law must be interpreted consistently with the pre-existing EU AI Act [1] – and states that it does not create new obligations – it introduces immediately applicable national principles, transparency rules, and sector-specific requirements with direct effects on medical practices and the healthcare sector. It also requires decrees and regulatory guidance by October 2026.
This means US healthcare companies and medical researchers face immediate and phased-in compliance obligations under both Italian and EU law.
This article outlines how Italy’s pioneering AI Law and the EU AI Act impact US healthcare companies and medical researchers operating in Italy.
Patient rights and transparency requirements
Italy's AI Law mandates that patients be informed when AI systems are used in their care.[2] This has immediate implications for US healthcare AI products sold in Italy. Companies must build notification mechanisms, integrate them into clinical workflows, and supply Italian healthcare providers with compliant documentation.
Unlike US practice, where AI disclosure is often embedded in general consent forms or omitted, Italian law requires specific, clear notification. US companies should view this as both a compliance obligation and a competitive opportunity: products offering turnkey compliance solutions will be more attractive to Italian hospitals and physicians.
Non-discrimination and algorithmic fairness
Italy's AI Law explicitly prohibits AI-based discrimination in healthcare access and requires bias testing and validation for healthcare AI systems.[3] These provisions address concerns about algorithmic bias in diagnosis, treatment allocation, and insurance decisions.
For US companies, this presents challenges and opportunities. The law requires proactive bias auditing, rather than reactive complaint-based enforcement. Medical device companies should not wait for the relevant EU AI Act provisions to come into force in August 2026, but should begin systematic testing across demographic subgroups, validate AI performance for diverse patient populations, and document fairness metrics before deployment in Italy.
Compliance becomes more complex when the EU AI Act applies. Unless the proposed amendments to the Medical Devices Regulation (Regulation (EU) 2017/745 – the "MDR") take effect sooner, from August 2026 the vast majority of medical device software embedding AI systems must be certified by a notified body under the MDR. Because such software is currently classified as high-risk under the EU AI Act, notified bodies will assess its compliance with the requirements of both the AI Act and the MDR. This reflects the convergence of device safety standards with AI-specific fairness and transparency obligations.
US companies should therefore implement comprehensive bias testing methodologies, ensure representative training datasets, conduct subgroup analysis during validation, and establish ongoing monitoring systems to detect bias in deployed AI systems.
Human oversight and medical decision-making
Italy’s AI Law prohibits fully automated medical decisions, requiring that physicians retain ultimate decision-making authority. This requirement operates within the broader regulatory framework, which mandates the validation of software having medical purposes through a CE certification process overseen by notified bodies.
Healthcare professionals may rely on the outputs of CE-certified AI-based medical device software, provided they exercise appropriate clinical judgement. Under the MDR, manufacturers must carry product liability insurance to cover potential patient harm, establishing a risk allocation framework between manufacturers and healthcare providers.
Potential game-changer for US researchers: Medical research provisions and synthetic data
Italy's AI Law designates certain health-related AI research by public or private non-profit entities, or by private entities in collaboration with public or private non-profit entities, as being of "relevant public interest" under GDPR Article 9(2)(g).[4] This permits the use of personal health data to train AI systems and the secondary use of de-identified health data without consent, subject to notification to the Italian Data Protection Authority (DPA) with a 30-day standstill period during which the DPA can object (Art. 8(5)).
For US academic medical centres, research hospitals, and non-profit research institutions, these provisions could be transformative. Italy has historically required consent for most medical research, limiting large-scale health data analytics. The new pathway potentially opens access to rich Italian health datasets for AI research, comparative effectiveness studies, and algorithm validation.
Italy’s AI Law also represents the first legislation to address “synthetic data”, explicitly permitting the processing of personal data, including special categories of health data under GDPR Article 9, for anonymisation, pseudonymisation, and synthetisation (the creation of synthetic data).[5]
The law permits this processing in the medical research contexts described above, and for other research purposes, following provision of information to the data subject under GDPR. The National Agency for Regional Health Services (AGENAS), after consulting the DPA and considering international standards and technological developments, is tasked with establishing and updating guidelines for the procedures for anonymising personal data.
These synthetic data provisions offer researchers a powerful tool to transform real patient data into privacy-protective datasets that replicate statistical properties without containing actual personal information.
Practical questions remain. US entities must structure collaborations appropriately with Italian public or non-profit partners, navigate the DPA notification process, and establish compliant data transfer mechanisms. The utility of these provisions will depend on DPA interpretation, implementing decrees, and the forthcoming AGENAS guidelines.
Criminal liability risks for US companies and their employees
Italy's AI Law establishes new criminal offenses, creating meaningful risks for US executives and employees. The law criminalises the distribution of deepfakes, with penalties of one to five years' imprisonment. In healthcare, this raises concerns about falsified medical images and fake medical communications (manipulated images, videos, or voices).
The law also imposes criminal penalties for using AI systems to unlawfully extract copyrighted online content, potentially affecting AI developers who train models on medical literature or imaging databases without proper authorisation. US companies must conduct thorough due diligence on training data sources and document their legal basis for data use.
These offenses currently apply only to natural persons – meaning US directors, officers, and employees can face personal criminal liability in Italy. US companies should assess criminal risk exposure, implement robust compliance programs, and establish protocols to protect personnel operating in Italy.
Practical compliance roadmap
US healthcare organisations should take immediate action by (1) reviewing all AI systems used in or serving Italy; (2) conducting gap analyses focused on patient notification, human oversight, and non-discrimination; and (3) implementing patient notification protocols and physician oversight mechanisms for Italian operations. Medical researchers should identify Italian partners and prepare for DPA notification processes.
All US parties deploying AI in connection with Italy should carefully monitor implementing decisions from the Ministry of Health, relevant agencies, and guidance from the Italian DPA.
Italy's AI Law creates a complex, but actionable, landscape for US healthcare companies and researchers. Obligations are immediately enforceable, even as implementing decrees are expected over the year following its entry into force.
As the first EU Member State implementation, Italy provides a preview of Europe's AI regulatory future. US organisations that master Italian requirements – particularly around algorithmic fairness and bias testing – will be better positioned for broader European compliance and competitive advantage.
References
[1] Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence.
[2] Italy’s AI Law, Article 7(3).
[3] Id., Articles 7(2) and (6).
[4] Id., Article 8(3).
[5] Id., Article 8(3).
About the authors
Jeremy Maltby is a partner at Portolano Cavallo, where he advises US, Italian, and international clients on complex regulatory, compliance, and dispute matters, with a strong cross-border focus on Italy. His practice includes white-collar investigations, corporate compliance, and technology-related legal risk, including AI, cybersecurity, and data privacy. Maltby has held various senior legal roles in the US Department of Justice and the White House Counsel’s Office, and previously practiced for 20 years at O’Melveny & Myers, where he served as managing partner of its Washington, DC office. Maltby works extensively with clients in life sciences and healthcare, helping companies navigate high-stakes governance and enforcement issues arising from new AI rules in Italy and the EU. Maltby received his AB, magna cum laude, in History and Literature from Harvard College, before earning his JD from Columbia Law School, where he was a Chancellor James Kent Scholar and an articles editor of the Columbia Law Review. Before entering private practice, Maltby served as a law clerk to Justice David H. Souter on the US Supreme Court.
Laura Liguori has been a partner at Portolano Cavallo since 2007, where she is one of Italy’s leading advisers on data protection, digital regulation, and emerging technology. For more than 25 years, Liguori has counselled Italian and international clients on cybersecurity, data protection, internet and e-commerce law, and AI, with deep experience in both compliance strategy and contentious matters. Her work is especially focused on digital media technology and life sciences, including clinical trial and data governance issues. She graduated cum laude from Luiss Guido Carli University in Rome in 1996, with a dissertation on the first Italian law on data protection and privacy. Liguori is the immediate past president of the ITechLaw Association and vice president of the Women&Tech Association, and is consistently ranked by top legal directories, including Legal 500 and Chambers, for her leadership in data protection and technology law.
Elisa Stefanini is a partner at Portolano Cavallo and co-head of its Life Sciences-Healthcare and Public Law teams. Consistently recognised by directories such as Chambers and Legal 500 for her life sciences regulatory practice, Stefanini advises on pharmaceuticals and medical devices, with particular experience in clinical trials, market access, and digital projects in the healthcare sector. She earned her law degree cum laude from Università Commerciale Luigi Bocconi in 2004, before completing a postgraduate diploma at the Academy of European Law in 2006 and receiving a PhD in Constitutional Law from the University of Milan in 2008. She is an officer of the International Bar Association’s Healthcare & Life Sciences Law Committee and vice president of the Healthcare & Life Sciences Commission of the International Association of Young Lawyers (AIJA).
