Is AI the missing link in fixing MLR – or the reason it breaks?


Somewhere in a shared drive at a mid-size pharma company, there is a folder labelled “Q3 Messaging FINAL v3 (2).” Inside it are three Word documents with tracked changes from four people, none of whom still work in the department. One of those documents contains the approved efficacy language for the company’s most recent product launch. The other two contain earlier drafts that were never formally retired. Nobody is entirely sure which is current.

This is not an edge case. This is the industry’s default operating model for managing approved messaging. And it is the real reason that medical, legal, and regulatory (MLR) review cycles take weeks instead of days.

The conversation around AI and MLR has focused almost entirely on making the review stage faster: scanning drafts for compliance issues, flagging off-label language, and automating editorial checks. These are valuable capabilities. Moderna recently became the first company to go live with AI-powered pre-review agents in Veeva PromoMats, and a focus group of leaders from 10 biopharmas projected that 38% of the MLR process will be AI-driven by 2028. The trajectory is clear.

But there is a structural problem that faster review cannot solve. Drafts do not arrive at MLR non-compliant because reviewers are slow; they arrive non-compliant because the inputs to the review process are ungoverned. Approved messaging lives in shared drives, email chains, and the institutional memory of senior team members. Content creators – whether human or AI – draft against sources they cannot verify as current, complete, or authorised. And the review team spends its expertise catching problems that should have been prevented at authorship, not discovered at review.

This distinction matters more in 2026 than it ever has, for two reasons.

FDA is watching more closely – and using AI to do it

The enforcement landscape shifted dramatically in 2025. FDA issued over 200 enforcement letters challenging pharmaceutical advertising and promotion – the highest annual total in nearly 25 years. Of these, 74 were directed specifically at pharmaceutical and biologic manufacturers, and the vast majority were issued after a single date: 9th September 2025, when the Office of Prescription Drug Promotion (OPDP) launched an unprecedented wave of action.

The violations were familiar: omission or minimisation of risk information, unsubstantiated efficacy claims, misleading ad presentations. But two elements were new. First, the FDA explicitly announced that it used AI and other tech-enabled tools to proactively surveil and review drug advertisements. The regulator is now using the same technology that content teams are adopting – but pointed in the other direction.

Second, OPDP itself is in flux. Its Policy Division was eliminated in April 2025, senior leaders departed, and the capacity to issue new guidance has been constrained. The result is an environment where enforcement is intensifying even as advisory clarity is declining. Companies face more unpredictable regulatory scrutiny with less advance guidance on expectations.

In this environment, the question for communications and compliance teams is not whether they can review materials faster. It is whether they can demonstrate, with an auditable trail, that every external-facing claim traces to an authorised source. That is a governance problem, not a review speed problem.

Generative AI multiplies the problem before review tools can catch it

The promise of generative AI in pharma content is compelling: draft materials faster, personalise across channels, and scale production without proportionally scaling headcount. But generative AI introduces a specific kind of compliance risk that traditional review processes were not designed to detect.

When a human copywriter paraphrases an approved claim, the drift is typically small and follows predictable patterns that experienced reviewers recognise. When a large language model paraphrases the same claim, it optimises for fluency and coherence – not fidelity to approved language. The result is semantic drift: subtle restatements that change the meaning of a claim without obviously contradicting it. A clinical benefit is overstated by a single adjective; a comparative statement that was carefully bounded becomes an absolute claim.

This is not a hypothetical risk. McKinsey reports that some pharma companies have reduced regulatory submission timelines by 50-65% through AI-enabled automation and workflow redesign. But as content volumes increase and generation accelerates, the number of materials entering MLR review is growing faster than review capacity. The industry’s own 2025 benchmarks show that promotional material production rose 29% year-over-year in the US alone, while 77% of approved content is rarely or never used by field teams. Volume is rising; quality of inputs is not keeping pace.

An MIT report cited across the industry found that 95% of generative AI pilots at companies are failing. A pharmaphorum editorial argued persuasively that 2026 is less a breakthrough year for AI in pharma and more of a reckoning – one where governance, trust, and data provenance become the currency that matters.

The emerging consensus among compliance leaders is correct: AI must be used judiciously and in partnership with human review. But “human oversight” is only effective if the humans have something reliable to check drafts against. And that requires governing the approved messaging itself, as well as the materials derived from it.

The infrastructure gap: nobody governs the claim itself

Consider the technology stack that a typical pharma communications team uses today. A digital asset management (DAM) system stores and distributes brand assets – logos, templates, images. A sales enablement platform distributes content and tracks engagement. An AI writing assistant enforces style guides and grammar. A regulatory workflow tool like Veeva Vault PromoMats routes materials through MLR review.

None of these systems manages the approved claim as a structured, governable asset. The DAM manages files. The sales enablement tool manages content performance. The writing assistant manages style. The regulatory workflow manages the review process. But the foundational unit of regulated communication – the approved statement, paired with its evidence basis, its audience restrictions, its explicitly forbidden variations, and its full provenance chain – exists nowhere in the stack as a first-class entity.

This is the structural gap. And it explains why faster review tools, while valuable, cannot solve the compliance bottleneck by themselves. If the approved language that content creators are working from is scattered, unversioned, and unverifiable, then every draft that enters review carries inherited uncertainty. The reviewer is not just checking the draft; they are reconstructing the provenance of every claim from memory, shared drives, and email threads. This is skilled, expensive labour being applied to a problem that should not exist.

What a governed messaging infrastructure looks like

The emerging concept – one that several leading organisations are beginning to pilot – is to treat approved messaging as structured infrastructure, not as a collection of documents. Think of it as version control for language, analogous to what GitHub did for code.

In a governed messaging model, each approved claim is a structured object. It carries its authorised text, its forbidden variations (the specific misstatements that content creators and AI systems are most likely to produce), its evidence sources linked to clinical data or labelling, its audience and jurisdiction restrictions, and a complete audit trail of who created it, who approved it, and what changed at each version. When a content creator – human or AI – drafts a new asset, the system can verify in real time whether the draft language falls within the approved boundaries or has drifted into prohibited territory.
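To make the idea concrete, here is a minimal sketch of what such a claim object might look like, assuming a Python implementation. Every field name and example value is an illustrative assumption, not a real schema, product, or label.

```python
# A minimal sketch of an approved claim as a structured, governable object.
# All fields and example values are illustrative assumptions, not a real
# label, product, or vendor schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedClaim:
    claim_id: str                          # stable identifier across versions
    version: int                           # incremented on each approved change
    authorised_text: str                   # the exact approved wording
    forbidden_variations: tuple[str, ...]  # misstatements creators and AI tend to produce
    evidence_sources: tuple[str, ...]      # links to clinical data or labelling
    audiences: tuple[str, ...]             # e.g. "HCP", "patient"
    jurisdictions: tuple[str, ...]         # e.g. "US", "EU"
    created_by: str                        # provenance: author
    approved_by: str                       # provenance: approver
    change_note: str = ""                  # what changed at this version


claim = ApprovedClaim(
    claim_id="EFF-001",
    version=3,
    authorised_text="Product X reduced symptom frequency versus placebo in Trial Y.",
    forbidden_variations=(
        "Product X eliminates symptoms",            # absolute claim
        "Product X is superior to all treatments",  # unbounded comparison
    ),
    evidence_sources=("trial-y-csr-section-4.2",),
    audiences=("HCP",),
    jurisdictions=("US",),
    created_by="med.affairs@example.com",
    approved_by="mlr.chair@example.com",
    change_note="Tightened comparator wording after label update.",
)
```

The point is not these particular fields but that each claim carries its boundaries and its provenance with it, machine-readably, wherever it is used.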

This verification layer needs to be deterministic, not probabilistic. In a regulated environment, “the AI said it was fine” is not an acceptable answer to an FDA inspector. Industry analysts are explicitly calling for deterministic algorithms that produce the same output for the same input, noting that non-deterministic systems cannot explain their reasoning and are unacceptable where reproducibility and traceability are non-negotiable. The scanning logic must be auditable, repeatable, and explainable – augmenting human reviewers with evidence, not replacing their judgment with confidence scores.
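As a hedged illustration of what “deterministic” means here, the sketch below extends the hypothetical `ApprovedClaim` object above: the same draft and the same library always produce the same findings, and every finding cites the rule that triggered it. The matching logic is deliberately crude; a real system would match far more sophisticatedly, but reproducibility is the property being demonstrated.

```python
# Deterministic scanning sketch: identical inputs always yield identical,
# auditable findings. Illustrative only; not any vendor's actual engine.
import re


def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so matching is reproducible."""
    return re.sub(r"\s+", " ", text.lower()).strip()


def scan_draft(draft: str, library: list[ApprovedClaim]) -> list[dict]:
    """Return one finding per forbidden variation detected in the draft."""
    draft_norm = normalise(draft)
    findings = []
    for claim in library:
        for variation in claim.forbidden_variations:
            if normalise(variation) in draft_norm:
                findings.append({
                    "claim_id": claim.claim_id,
                    "claim_version": claim.version,
                    "rule": f"forbidden variation: {variation!r}",
                })
    return findings


# Same input, same output, every time - and each finding names its rule.
print(scan_draft("New data show Product X eliminates symptoms.", [claim]))
```

An inspector can replay that scan and get the same result, which is exactly what a confidence score from a non-deterministic model cannot offer.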

The practical impact is straightforward. When drafts are created from a governed messaging library and pre-scanned against deterministic rules before entering review, MLR cycle times drop, not because reviewers work faster, but because they receive cleaner inputs. The reviewer’s role shifts from “find the problems” to “validate the system’s findings.”

One major platform vendor has reported that clients see up to 50% fewer content reviews and 86% fewer submission errors when claims are pre-identified and managed as structured assets. These gains come from governing the message upstream, not from accelerating the review downstream.

From gatekeepers to architects: the MLR team’s new role

The most forward-thinking organisations are already shifting MLR from a downstream checkpoint to an upstream design function. Rather than reviewing every finished asset, MLR professionals help define the boundaries – the approved claims, the forbidden variations, the evidence thresholds – that govern what can be created in the first place.

This is not a reduction in MLR’s authority. It is an expansion. When MLR architects the governance framework, their expertise is encoded into the system itself. Every draft, whether created by a junior copywriter or a generative AI agent, is constrained by rules that MLR defined. The reviewer then focuses their time on the highest-risk, most nuanced content – the edge cases where human judgment is irreplaceable – rather than catching routine errors that a governed system would have prevented.

PwC’s recent analysis of AI-powered pharma content operations recommends exactly this model: risk-based content tiering where low-risk checks are handled systematically while medium- and high-risk content receives dedicated human scrutiny. But this only works if the “rules” being applied are grounded in a governed messaging library, not in individual reviewers’ memories of what was approved three quarters ago.
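A sketch of how that tiering might be wired to the deterministic pre-scan follows; the tier names and routing rules are assumptions for illustration, not PwC’s published model.

```python
# Illustrative risk-based routing built on the pre-scan above. Tier names,
# conditions, and routes are assumptions, not a published framework.
def route_for_review(findings: list[dict],
                     introduces_new_claim: bool,
                     touches_safety_information: bool) -> str:
    """Map a pre-scanned draft to a review path by risk tier."""
    if introduces_new_claim or touches_safety_information:
        return "full MLR review"        # high risk: dedicated human scrutiny
    if findings:
        return "targeted human review"  # medium risk: validate the scan's findings
    return "expedited approval"         # low risk: library-sourced, clean scan
```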

The next frontier: governing what AI says about you

There is an additional dimension that most MLR discussions have not yet addressed. Approved messaging governance is not only relevant to content that your organisation creates. It is increasingly relevant to content that AI systems create about your organisation.

When a surgeon asks ChatGPT about a surgical robotics platform, or a patient queries Perplexity about medication interactions, those AI systems generate responses by paraphrasing, compressing, and redistributing published information. The organisation has no visibility into whether these AI-generated representations are faithful to approved labelling, whether clinical claims are accurately stated, or whether required safety information is included.

A governed messaging library – the same infrastructure that prevents internal content drift – becomes the baseline against which AI-mediated representations can be monitored and measured. Organisations that build this internal governance capability will be the first to extend it outward, tracking how the broader AI ecosystem represents their products and taking corrective action when representations drift from approved language.
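In principle, the same deterministic scan sketched earlier can be pointed outward. The fragment below is purely illustrative and assumes the AI-generated answer has already been captured by some means; how it is collected is out of scope.

```python
# Pointing the internal scanner outward: audit a captured AI-generated answer
# against the same governed library. Purely illustrative.
def audit_external_answer(answer_text: str,
                          library: list[ApprovedClaim]) -> dict:
    """Measure drift in third-party AI output using the governed claims."""
    findings = scan_draft(answer_text, library)
    return {
        "drift_detected": bool(findings),
        "findings": findings,  # each finding cites the claim and rule it violated
    }
```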

This is not a distant future. It is a question board members and CEOs are asking now: “When someone asks an AI about our product, what does it say? Is it accurate?” The organisations that can answer with data rather than a shrug will have a meaningful advantage.

The bottom line

AI will not replace MLR review, nor should it. But AI is forcing a reckoning with a problem the industry has tolerated for too long: the absence of governed messaging infrastructure. Faster review tools are necessary but insufficient. The companies that will navigate the next era of regulatory scrutiny most effectively will be those that govern the approved claim itself – as a structured, versionable, scannable asset with full provenance – and use that governance to ensure cleaner inputs to MLR, not just faster processing of messy ones.

The question for every pharma communications leader is not “how do we speed up review?” It is: “Do we have a single, auditable answer to the question ‘who authorised this claim?’” If the honest answer is “not reliably” (and for most organisations, it is), then the work starts upstream of review, not at review.

The technology to build this infrastructure exists today. The methodology is well understood. The regulatory environment is demanding it. What remains is the organisational decision to treat approved messaging as what it has always been – regulated infrastructure – and to manage it accordingly.

About the author

Abhi Basu

Abhi Basu is the founder of Linguistic Engineering, a discipline and platform for governed messaging in regulated industries. With 20 years of experience in healthcare communications, including leadership roles at Johnson & Johnson, Takeda, and leading health communications agencies, he writes about the intersection of AI, communications governance, and narrative fidelity. His work has been recognised across the pharma and MedTech sectors for reframing how organisations manage language in the age of AI.
