Time to build evidence for patient involvement in research
Patient and public involvement (PPI) may be increasingly seen as the “right thing to do”, but without systematic evaluation how do we stop good intentions becoming tick-box tokenism?
Doing PPI “just for the sake of it” can discourage researchers, disenfranchise those who get involved and have unforeseen negative consequences.
That’s according to a new report from The Healthcare Improvement Studies (THIS) Institute, Involving Patients and the Public in Research.
Based on a review of studies published from 2000 to 2018 and interviews with experts in the field, the report calls on the research community to build a better understanding of the role of PPI.
“Practical guidelines on how to do PPI have… been published, and a body of academic literature about PPI is emerging. Yet PPI remains a relatively new field of enquiry, leaving questions about when, why, and how patients and the public can best be involved in research,” say the authors.
“One challenge in answering these questions is that the impacts of PPI on the research process, research outcomes, and the people who get involved, are often not evaluated.”
There are also concerns, the report goes on, that what is advocated as good PPI practice isn’t always feasible, leaving the process little more than a tick-box exercise.
There’s a growing demand for active patient engagement but embedding it into the research process isn’t easy. It isn’t always funded properly or evaluated effectively, and best practice isn’t always shared in a co-ordinated manner.
Values and attitudes can sometimes hinder involvement, with researchers being dismissive or tokenistic, for example, and ensuring representation is another area of concern.
“People thinking about getting involved may lack the confidence and experience to do so, while those who do get involved may risk becoming over-professionalised and losing their lay person perspective,” says the report.
“Researchers often strive to ensure PPI contributors collectively reflect the diversity of society in line with the needs of the research project, but it can be difficult to achieve.”
Many of these challenges, the authors argue, are deep rooted, and overcoming them requires a change in cultures, structures, attitudes and expectations.
Building the evidence
Achieving that shift must start with an understanding of what ‘good’ PPI looks like and an appreciation of what it can achieve.
The THIS Institute’s review of the literature, however, found the evidence base for the impact of patient involvement was “piecemeal and inconclusive”.
“Our review identified a number of potential positive impacts of PPI, including benefits for the people who contribute, the research study, and the wider research system… (but) many studies focused on PPI’s potential, reporting assumptions or perceived impacts, rather than evidence from evaluations,” it says.
While most studies reported positive impacts, either actual or potential, the team also uncovered some unintended, negative consequences of PPI.
A Canadian project that used a PPI approach to design studies on improving services for low-income families, for example, caused delays and strained relationships between collaborators.
“When PPI is not done well, patients can be left feeling that they are not valued or listened to,” the authors say.
“On the other side, researchers who feel they are mandated to involve patients and the public even when they do not see the value of involvement, and perhaps when it is not appropriate for the project at hand, may lose motivation and end up being tokenistic.”
Time to build the evidence
Evaluation, then, is key to understanding how, why and when to implement PPI.
“Our review found little evidence about the impact of PPI. But that does not mean PPI has little impact. Instead, it suggests a lack of high-quality evaluations of the impact of PPI activities,” explains the institute’s report, which was backed by The Health Foundation.
While some have argued that patients and the public should be involved regardless of their impact, simply because it is “the right thing to do”, others say today’s plethora of publicly funded PPI activities makes a clear case for evaluation.
The report, however, suggests that evaluation should be used to learn how to improve future PPI efforts, ensure methods are replicable, and contribute to the wider evidence base.
It goes on: “To evaluate PPI effectively, it is important to be clear about what [it] is expected to achieve, how quality should be evaluated, and how impact should be assessed.
“Some agreement is also needed on the types of… impact that are worth measuring, and what sort of study designs are appropriate. So far, these questions have been contentious.”
Answering the unanswered
PPI can improve research and empower contributors, but, according to this report, the evidence on how that happens, to what extent, and to what effect, is limited.
And this lack of evidence-based guidance could damage the whole enterprise.
“Patients and the public are diverse, as are the topics of research, so taking a one-size-fits-all approach rarely works,” say the authors.
“But with careful consideration of when to do PPI, in what capacity, and toward what end – for the research and for contributors – all sides can benefit from bringing real-world understandings into research about healthcare.”
The movement, they believe, will only fulfil its potential if we understand what that potential is, and if we learn from our mistakes.
Read the full report here.