Harnessing AI to make Medicare more efficient, fair, and patient-centred
Earlier this year, the Centers for Medicare & Medicaid Services (CMS) introduced the “Wasteful and Inappropriate Service Reduction” (WISeR) model, a set of requirements, taking effect on 1st January 2026, designed to ensure timely and appropriate Medicare payment for select items and services in six states (New Jersey, Ohio, Oklahoma, Texas, Arizona, and Washington).
As part of the WISeR requirements, CMS selected technology vendors to implement enhanced technological models that scrutinise claims involving Medicare recipients, with options for prior authorisation or post-service/pre-payment review. The programme set out to save money by targeting “a specific subset of items and services that may have little to no clinical benefit for certain patients and that historically have had a higher risk of waste, fraud, and abuse.”
Objections were quickly raised. On 7th November, a group of six representatives introduced legislation that would roll back the WISeR model, claiming their bill will “protect seniors from new red tape.” In a press release explaining his opposition to the WISeR model, Rep. Ami Bera, MD, writes that decisions involving patients’ health “should be made by doctors, not by algorithms designed to cut costs.”
Supporting, never replacing, human clinical judgement
Dr Bera is not alone in framing the WISeR debate around the use of technology, such as artificial intelligence (AI) and machine learning (ML) tools, in utilisation management decisions such as prior authorisation. Yet there is a risk that the debate over the particulars of one legislative effort will be extrapolated to every AI or ML use case being developed and deployed within the healthcare industry.
Taking a broader view, ML and other advanced analytics should be used in Medicare – but only to support, never replace, human clinical judgement. ML-enabled utilisation management (whether prior or concurrent authorisation) should be employed with two objectives:
- Improving patient outcomes and access to medically necessary care
- Reducing fraud, waste, and abuse in a transparent, accountable way
ML applied to utilisation management, especially in Original Medicare, should be designed to:
- Target clearly defined areas of low-value or potentially harmful care
- Maintain and strengthen beneficiary protections and appeal rights
- Reduce administrative burden on providers and suppliers
- Adhere to recognised AI risk frameworks and industry standards, such as the NIST AI Risk Management Framework (AI RMF 1.0)
The healthcare industry – payers and providers alike – is monitoring the current policy debate. CMS has stated that WISeR will use enhanced technologies, including ML, to streamline review of a narrow set of services associated with fraud, waste, and potential patient harm, while requiring human clinician review for non-affirmations and preserving appeal rights.
The need for a balanced, evidence-driven approach
Policymakers, provider groups, and advocates have raised concerns that expanded prior authorisation – especially when linked to private vendors and AI tools – could increase red tape, delay needed care, and replicate problematic patterns seen in Medicare Advantage, where many denials are overturned on appeal and physicians report substantial administrative burden and adverse events linked to prior authorisation.
Any balanced, evidence-driven approach must rest on the following pillars:
- Proceed with carefully scoped, transparent pilots of ML-enabled utilisation management in Original Medicare, focused on clearly documented low-value services
- Embed the safeguards described above (“human in the loop” review, strong appeals, robust transparency, and ongoing fairness monitoring) into the model design pre-deployment – see the sketch after this list
- Commit to modifying or discontinuing any model if evaluations show unacceptable impacts on access to medically necessary care, health equity, or provider participation, even if nominal programme savings are achieved
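
To make the fairness-monitoring pillar concrete, the sketch below shows one way a programme evaluator might track non-affirmation rates by beneficiary subgroup. The data, the subgroup field, and the 1.25x disparity threshold are illustrative assumptions for this example, not CMS requirements:

```python
# A minimal sketch of ongoing subgroup fairness monitoring, using
# hypothetical review outcomes; the subgroup labels, threshold, and
# data are invented for illustration.
import pandas as pd

# Illustrative log of ML-assisted review outcomes.
reviews = pd.DataFrame({
    "subgroup": ["rural", "rural", "urban", "urban", "urban", "rural"],
    "non_affirmed": [1, 0, 0, 0, 1, 1],  # 1 = claim not affirmed
})

rates = reviews.groupby("subgroup")["non_affirmed"].mean()
overall = reviews["non_affirmed"].mean()

# Flag any subgroup whose non-affirmation rate drifts well above the
# overall rate; 1.25x is an arbitrary illustrative threshold.
flagged = rates[rates > 1.25 * overall]
print(rates.round(2))
if not flagged.empty:
    print("Review for potential disparity:", list(flagged.index))
```

Surfacing this kind of metric on a recurring schedule is what turns “fairness monitoring” from a design-document promise into an operational trigger for the modification-or-discontinuation commitment above.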
Transparency is a central theme that carries through each pillar. In addition to “human in the loop” review, there should be requirements for model cards and fact sheets that, at a minimum, include the model’s intended use, training data provenance, performance, subgroup metrics, limitations, and update cycles. Explainability tools, such as SHAP values and globally interpretable outputs, should be built into the clinician user interface. Pre-deployment testing in safe environments and ongoing monitoring for performance drift should be standard requirements.
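
As an illustration of what clinician-facing explainability could look like, the sketch below uses the open-source shap library against a hypothetical gradient-boosted claims-review model. The features, data, and model are invented for the example and do not describe any vendor’s or CMS system:

```python
# A minimal sketch of SHAP-based explanations for a hypothetical
# claims-review classifier; all feature names and data are illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative training data: three made-up claim features.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "prior_denials_12mo": rng.poisson(1.0, 500),
    "units_billed": rng.integers(1, 20, 500),
    "days_since_last_service": rng.integers(0, 365, 500),
})
y = rng.integers(0, 2, 500)  # 1 = flagged for human clinician review

model = GradientBoostingClassifier().fit(X, y)

# Local explanation for a single claim: per-feature contributions a
# reviewing clinician could see alongside the model's recommendation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")

# Global view: mean absolute SHAP value per feature, the kind of
# summary a model card's performance and limitations sections might cite.
global_importance = np.abs(explainer.shap_values(X)).mean(axis=0)
print(dict(zip(X.columns, global_importance.round(3))))
```

Per-claim contributions of this kind give the reviewing clinician a stated reason for each recommendation, while the global importances feed the model card and fact sheet described above.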
The WISeR model includes among its goals “increasing transparency on existing Medicare coverage policy.” But its commitment to run for six performance years, from 1st January 2026 to 31st December 2031, must include room for recourse, because no one can perfectly foresee how AI/ML systems will behave over time.
About the author
Gregg Killoren is General Counsel at Xsolis, specialising in regulatory strategy, corporate compliance, and legal risk management across healthcare and technology. He brings deep expertise in navigating complex legal frameworks and supporting high-growth, mission-driven organisations.
