Making real world evidence fit for purpose

Although real-world evidence (RWE) is a hot topic in pharma right now, many still question whether it is acceptable for use in regulatory decision making. A new report argues that while RWE can be transformative for drug assessment, the rules for what kinds of RWE are actually useful to developers and regulators have yet to be defined.

The report, Trial designs using real-world data: The changing landscape of the regulatory approval process, calls for the development of hybrid trial designs that combine the best parts of traditional randomised controlled trials (RCTs) and observational study designs to produce RWE that can be used for regulatory approval.

At the same time, the authors say that it is important to engage with regulatory authorities early to obtain alignment on objectives, inform the study design, and ensure the design and the data are fit for purpose before starting trials.

“We see a growing need to contextualise clinical trial data, particularly for niche conditions where it is not feasible and sometimes not ethical to randomise patients,” says Nancy Dreyer, chief scientific officer at IQVIA and co-author of the report, explaining the impetus behind the study.

“Without randomised treatment assignment, the choices for single-arm trials are to claim some benefit through indirect changes – like changes in a surrogate endpoint that are likely to drive clinical benefits based on what is known about disease pathology – or to compare the experience of patients in the single-arm trial to historical data for similar patients. We see growing interest in using contemporaneous data to provide benchmarks for single-arm trials and by payers to quantify the value of new medical products.”

It might seem that a simple solution would be to use comparator data from the vast stores of placebo data collected across therapeutic areas, but Dreyer says that it is not always so easy.

“It’s an appealing idea, since those trials have systematically collected clinical data on endpoints. But you can’t draw meaningful inferences from human data without considering the inclusion and exclusion criteria for each trial, since those criteria dictate the characteristics you see.

“Most clinical data warehouses don’t link the final protocols to the clinical data, which means you can’t tell why these patients were selected for study. So how useful are data collected from an esoteric patient population that can’t be well characterised? It’s never been helpful to make comparisons between apples and oranges, and that hasn’t changed!”

The report explores how hybrid study designs that include features of RCTs and studies using real-world data (RWD) can combine the advantages of both to generate RWE that is fit for regulatory purposes, with the aim of advancing understanding of how RWE compares with information from RCTs.

Unsurprisingly, the authors found clear evidence that medicine is not practised the same way across real-world settings as it is when clinicians follow a common study protocol.

“It’s a strong reminder that, if we want to use external comparators from real-world settings, we need to make sure that our single-arm trials include at least some outcomes that are actually relevant and recorded in everyday clinical practice,” Dreyer says. “This is not to suggest abandoning surrogate endpoints, but rather to include in these early-stage trials some clinically relevant outcomes that are measurable in real-world data.”

Ensuring reliability

Nevertheless, Dreyer notes that there is still a lot of “well-justified” caution about using RWE, and little knowledge about how it will be judged or what data sources will satisfy regulatory standards.

“Currently there are no standards to determine when RWE is ‘regulatory grade’ or what would be required if a user wanted to pursue that route. For example, what documentation is needed? Do the endpoints need to be validated? When and how? By contacting the doctor? How confident can we be that patients take the prescriptions they filled? Can we trust patient-reported behaviors?

“There are also questions about the qualifications and purity of motives of those who conduct the data extraction, assembly and analysis. Fears abound, most of which can be addressed by good data stewardship coupled with clear information about data provenance and curation and analytics.”

She says that some basic agreements for how data sources should be qualified would go a long way toward ensuring RWE reliability.

“There is no certainty that data are what they purport to be, since there are always reasons why particular things are noted in a record or billed to an insurer. Similarly, RWD are rarely, if ever, 100% complete for all the data elements of interest. While audits could be performed and in some instances are warranted, we also need to be careful not to try to transform RWD to the same data collection standards as used for RCT data, since costs would skyrocket and we would lose many of the benefits that can be derived from access to large data.

“The basic requirements for a research-ready RWD source would be to produce a description about data provenance and how the data are curated. The primary focus should be on transparency in data and methods, which then can be used to support replication, which will improve our ability to use fit-for-purpose RWE. Just as replication of RCTs is informative, replication of RWE will establish the parameters of likely benefit or risk.”

Beyond that, Dreyer says, the devil is in the details – specifically, whether data are fit for a particular purpose or use.

“The biggest concern is the validity of the endpoint under study. I expect that regulators will want to see some justification of the validity of the outcomes in order to give the data serious consideration.”

The report says that the early establishment of cross-disciplinary teams to provide expert judgment on the appropriate research question, study design and related choices could be key to promoting further acceptance of RWE.

Dreyer notes that RWE is useful for shaping decisions by R&D, marketing and access groups, and health economics and outcomes researchers, as well as for supporting medical affairs and commercialisation – all of whom could be part of such teams.

Looking to the future, Dreyer predicts that hybrid RWD/RCT trials will continue to become more common.

“Smart companies will conduct hybrid studies whenever possible, either getting most follow-up through RWD or, more likely, through supplementation of RCT data with RWE to facilitate broader evidence generation in the same study vehicle. People will start to think about RWE platforms that combine clinical study data with health insurance claims, electronic medical records, genomic data, patient-reported outcomes, etc.

“We are already seeing drug approvals based on real-world outcomes like overall survival – a high bar, but hugely meaningful. We’re also seeing regulators giving positive attention to well-constructed integrated evidence packages that use a variety of real-world data and methods to demonstrate benefit, like the recent recommendations from China’s NMPA.”

It seems, then, that it is only a matter of time before RWE becomes fully integrated into drug development and the health technology assessment (HTA) process – perhaps to an extent that the industry wonders what it ever did without it.