This was the motion of a debate which took place at the end of the recent PhysioUK2015 Conference in Liverpool. There was a lot of hype about this, and then it happened. I thought it worked well, these things are always a bit of a gamble. So a huge congratulations to Ralph Hammond, Steve Tolan, and Carley King for constructing this session. Here’s a bit about how it worked:
The debate panel
There were two speakers ‘for’ the motion and two ‘against’.
‘For the motion’ were:
Professor Sallie Lamb, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford
Professor Rob de Bie, Maastricht University, the Netherlands.
‘Against the motion’ were:
Professor Michael Loughlin, Professor of Applied Philosophy, MMU
and me.
The debate was successfully chaired by the lovely ex-Chair of the CSP, Helena Johnson. I am glad I was a speaker, and not the chairperson.
Each speaker spoke for about 7 minutes, and then Helena chaired a question and answer session, taking questions from the audience, and via Twitter. This went on for some time. At the end, Helena called for a show of hands ‘for’ and ‘against’. The ‘againsts’ ‘won’.
And then the fun started.
I suspected that if the ‘againsts’ won, there would be a misinterpretation of what was being debated. Sure enough, social media and face-to-face feedback confirmed this. There seemed to be a feeling of “great, the vote went against the motion, and so that means research doesn’t matter”. Let me try and put the record straight. First, re-read the motion. This is not about whether research matters or not, it is about a detailed point of evidence-based practice – could there be situations where it is acceptable within an evidence-based practice framework to provide a therapeutic intervention which isn’t supported by research evidence? Note that this is also not about using interventions which have research evidence demonstrating a lack of effectiveness. It’s just about absence of evidence.
So, the point is one of detail, and one which promotes polarisation, such is the aim of a well-considered debate. The point, as per a couple of Facebook comments before the event, could be considered moot. But I don’t think so. I think the motion forces us to think a little harder about the relation between research evidence and clinical practice, and I think a few important themes emerged from the debate. I’ll now summarise each speaker’s argument, and highlight what I think are some important issues for physiotherapy and how it conducts itself.
FOR 1 Sallie Lamb: RCTs are the gold standard; we do not know what harms and what heals without RCT evidence; therefore all interventions should have at least RCT-level evidence before their integration into clinical practice. To be taken seriously, our profession needs to get its head out of the sand and start to justify use of therapeutic interventions with RCT-level evidence.
AGAINST 1 Me: Some interventions have such large effect sizes, and/or are impossible or unethical to trial, that they may be used in the absence of research evidence. I used an example of a therapist intervening to stop a patient falling down stairs. This is known as the paradox of effectiveness, and other examples include the Heimlich manoeuvre, anaesthetics, and parachutes. I also argued that, unlike medicine, where new drugs are developed in pharma labs, physiotherapy interventions are developed on the clinical floor. If this wasn’t allowed to happen, then we would soon run dry of interventions to trial and, given that effect sizes eventually converge to zero, the end of our profession would be nigh (for dramatic effect).
FOR 2 Rob de Bie: Reinforced Sallie’s argument, using more examples of where human observation errors had led us to believe something to be the case, until RCTs eventually showed something different. Take home: again, RCTs are the best way of understanding the level of effectiveness of an intervention.
AGAINST 2 Michael Loughlin: Set out a broad social and professional picture of the history of evidence-based medicine (EBM) and highlighted historical and contemporary challenges. He pointed towards poor and ill-presented arguments used by strict proponents of EBM, who have relied on platitude, caricature, and ridicule in enforcing research evidence within clinical practice. This has masked the limitations and failures of EBM, which are now also being highlighted by a renaissance movement from within. Further, he advocated broadening the concept of ‘evidence’ to include much more than the outcomes of clinical research. To be truly professional, physiotherapy should question what it is buying into, and as it stands, it doesn’t.
Note that none of the ‘against’ arguments were against the idea that practice should be based on the best evidence. The question is: “what is the best available evidence to inform therapeutic decisions?” I was asking this in terms of examples of clinical practice; Michael was asking what it means to buy wholesale into a movement which has so far failed to provide satisfactory explanations of its own commitments to evidential sources. I can’t expand on anyone’s arguments but my own from now on, for fear of misrepresenting them, so I’ll just try and explain myself. Note: ‘evidence’ should be thought of as ‘the available body of facts or information indicating whether a belief or proposition is true or valid’. You could spend a few years analysing the vast literature about the definition of evidence, but you will come back to this OED version.
Paradox of effectiveness
I used this example of a therapeutic intervention:

Therapist preventing patient from falling – a paradox of effectiveness

Extreme/silly/obvious/obtuse/whatever, I presented it in this way to highlight an important point about what is the best available evidence for a therapeutic decision. Note that this argument has often been used to make the case against evidence-based practice. That use presents the paradox as a straw man, and that IS NOT what I was doing; in fact, the opposite. Kenny Venere has provided a great overview of the traditional use of the paradox-of-effectiveness argument – do read it. My example set out merely to illustrate that in this case it seems intuitive to say the therapist did the right thing, despite there being no research evidence to suggest that it was the right thing.

there ain’t no research evidence for this
So then, on what grounds was it the right thing? The intuition arises because the suspected effect size is so large – it seems like the therapist has prevented the patient from falling down stairs and risking possible significant injury. But we don’t know that in any systematic way, as would be desired by a commitment to research. Of course, we may have seen people fall down stairs in the past, but as EBM is founded on the premise that human observations are biased, we cannot trust that this is actually the case. If we did, the whole business of EBM would collapse.
Facetious alert: yes, I am being facetious here, but I’m playing the EBM game. The ‘human bias’ argument is all too often presented to support the integration of systematic research data into clinical decision making. Not only does this play its part in some form of fallacious kettle logic, along with slogans such as “well, what’s the alternative?”, it is still unclear to what extent human biases operate in real-life reasoning. Most of the data to support the ‘human bias’ argument come from tests in experimental situations, which have themselves been shown to artificially inflate the degree of bias. When white noise is reduced and content knowledge is increased, human bias error is often insignificant. Reducing noise and increasing knowledge is what physiotherapy education teaches us to do. Anyhoo, back to the story…
So, to say something like the therapist’s actions are based on existing observational evidence which compared what did happen to a situation where it didn’t happen is a non-starter. So if the evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) does not or could not come from an external source, then it must come from an internal source, internal to that therapeutic alliance. This then, is not research evidence.
This internal evidence is an emergent feature of the physical, psychological, and social interactions of two human beings. It is a complex and non-linear process. That is to say, the therapist is not consciously placing a series of discrete events in a temporal order (even though a posteriori analysis could reduce it to such: (a) patient wobbles; (b) therapist puts out hands; (c) patient is safe). Rather, she is behaving as a human who cares for another human and seeks to act in a way which is beneficial to his health. This is complex, context-sensitive, and holistic – all the things that research tries to control for, ignore, or be the opposite of.

The source of the clinical evidence is held fully within the space between the two parties, fed by the behaviours, thoughts, and experiences of both. The actual clinical evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) emerges from this space. It is informed by what each party has experienced before, and could be explained by appealing to such things as laws of nature, professionalised knowledge, etc. – e.g. the therapist has experienced falling objects which adhere to laws of motion and the idea of gravity. The patient may have experienced falling before, and his memory of this prompts him to look fearful, or to make sounds and actions which indicate that he is frightened. Between them, an action is developed which seems right. There is plenty of evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform the therapeutic decision. Note that this situation is quite different from a therapist using, say, energy from crystals to prevent the fall. Biopsychosocially implausible interventions aren’t even in the starting gates.
Now, Rob de Bie called this example “a haphazard reaction to an unusual and emergency situation, and this is not what physiotherapy interventions are”. And I agree, sort of. I was (again) being facetious and obtuse, but being so in order to highlight that there are situations where interventions can be justified by evidence which is not research evidence, and that to accept the motion would mean accepting that in this case – the therapist should not have saved the patient. However, in being obtuse I also wanted to raise the broader point of “well what is the best evidence to inform a clinical decision?”
So what is the best evidence?
Imagine now if there were in fact some excellent systematic reviews/meta-analyses of high-quality RCTs which supported the use of therapists putting their hands up to save patients from falling down stairs. What would now be the best evidence to inform this clinical decision? Would it be the statistical average from the meta-analysis of multiple high-quality RCTs, or would it be that a human being was falling towards you? You are, of course, quite entitled to err on the side of the research. However, if you say that the research evidence is the best evidence to inform this clinical decision, then you are committing to the assumption that, in this case, facts and information from a distant population which is not this patient are a better indication of the best action than the clinical evidence emerging from the individual situation.
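To make concrete what that “statistical average from the meta-analysis” actually is, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the simplest method a systematic review might use. All the trial numbers are invented for illustration; the point is only that the pooled figure is a precision-weighted average over trial populations, not a statement about the individual in front of you.

```python
# Minimal sketch of fixed-effect (inverse-variance) meta-analytic pooling.
# The trial effect sizes and standard errors below are hypothetical.

def pool_fixed_effect(effects, std_errors):
    """Return the inverse-variance weighted mean effect and its standard error."""
    weights = [1.0 / se**2 for se in std_errors]               # precise trials count more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical RCTs of "hands out to prevent a fall":
effects = [0.40, 0.55, 0.30]      # mean effect observed in each trial's sample
std_errors = [0.10, 0.15, 0.20]   # precision of each trial's estimate

pooled, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {pooled:.3f} (SE {se:.3f})")
```

The output is a single number summarising three distant samples; whether that number is the best evidence for this patient, in this moment, is exactly the question the debate turned on.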
And that’s fine, but now you have to answer me this: on what grounds can you satisfactorily explain to me the assumption that population data is more informative to an individual clinical situation than the emergent clinical evidence of that situation? And you have to do this without using platitudes, caricatures, or ridicule. Good luck.
How much evidence?
Now here’s a second puzzle. Taking the ground rules of EBM literally (and that is all we can do, otherwise how should they be taken?), the evidence for therapeutic decisions should come from systematic reviews of multiple RCTs. Single or non-reviewed RCTs won’t cut it, due to the chance of erroneous findings. Now we need to understand the phrase best available evidence along another dimension. By its own rules, EBM would say that, normatively, the best available evidence for a therapeutic decision is systematic reviews of multiple high-quality RCTs. This is not often the case, however. So we use less stringent evidence, perhaps a couple of RCTs which have not been systematically reviewed. However, because of the rationale for systematic reviewing, this cannot be evidence of therapeutic effectiveness. The discriminating factor here is the way that population studies establish the notion of causation: anything below the said level is not indicative of a causal association.

So, when we say something like, ‘OK, it’s not the evidence we would hope for, but it’s something at least’, we are not using evidence of causation at all; we are using sources of evidence which are vacuous and as such cannot inform us of a possible predictive link between doing the intervention and achieving an outcome. In other words, the function of the research evidence becomes purely rhetorical and nothing at all to do with clinical effectiveness. So, in the vast majority of situations, I ask again: what is the best available evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform that therapeutic decision? Would you rather use research evidence which is vacuous and purely rhetorical, or clinical evidence emerging from that therapeutic alliance?
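The “chance of erroneous findings” in single trials is easy to demonstrate with a toy simulation. This sketch (all numbers hypothetical: a modest true effect, small trials, a crude z-test) shows how often one small RCT simply misses a real effect, which is part of the rationale for pooling many trials before claiming anything about causation.

```python
# Sketch: why a single small RCT is fragile. Hypothetical numbers throughout:
# a true standardised effect of 0.3, 25 participants per arm, outcome SD of 1.
import random

random.seed(1)

def run_trial(n_per_arm, true_effect):
    """Simulate one two-arm trial; return the observed mean difference."""
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
    return sum(treated) / n_per_arm - sum(control) / n_per_arm

def significant(diff, n_per_arm, sd=1.0):
    """Crude two-sided z-test at p < 0.05, assuming a known outcome SD."""
    se = sd * (2.0 / n_per_arm) ** 0.5
    return abs(diff / se) > 1.96

# Many identical small trials of the same genuinely effective intervention:
trials = [run_trial(25, 0.3) for _ in range(2000)]
missed = sum(not significant(d, 25) for d in trials) / len(trials)
print(f"single small trials missing a real effect: {missed:.0%}")

# Pooling the same trials (a crude stand-in for a systematic review)
pooled = sum(trials) / len(trials)
print(f"pooled estimate of the effect: {pooled:.2f}")
```

Most individual trials here come back “non-significant” even though the effect is real, while the pooled estimate sits close to the truth. That is the systematic-review rationale; the puzzle in the text is what we are entitled to claim when we act on the single-trial tier anyway.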
This raises a professional issue: if we want physiotherapy to be ‘evidence-based’, what are we counting as evidence? If it is anything below the highest levels, then we are not actually talking about clinical effectiveness. But it might look good to an outsider – at least it’s something. To me this is at best ignorant, and at worst purposefully deceitful.
I won’t go on here about the further problems associated with justifying therapeutic decisions on evidence which in fact does fulfil the criteria for causation, i.e. the best systematic reviews and such. I’ll leave that to others for now.
Look, remember this is not about whether research matters or not – it does. It’s now a case of identifying where different evidential sources fit into therapeutic decision making. RCTs and beyond are relevant, but their constraints must be considered. Because of the statistical analysis necessary to ensure high internal validity, optimal warrant is given only to the primary hypothesis, and applies only to the sample population in the trial. We can still learn something from these data, though.
And now where..?
OK, I fear I may not have helped myself in trying to appease the backlash of “research doesn’t matter”. Once again, IT DOES. All I have done is highlight some possible, and some real, challenges with EBP which, only after 20 years or so, we are beginning to see. Research does matter. Evidence does matter. However, the questions from the evidence-based practitioner should no longer be the ones from the 20th century, e.g. “which interventions are supported by RCTs?”. The modern EBPer should ask 21st-century questions about evidence, such as “which evidence is most likely to inform the multitude of decisions within this therapeutic interaction?”
The motion “This house believes that in the absence of research evidence an intervention should not be used” was an excellent prompt to revisit some fundamental questions about the relationship between research and clinical practice. We must be clear that interventions should always be based on evidence, and that is uncontroversial. The rejection of this motion IS NOT a green light for ‘clinical freedom’, basing predictions on past experience alone, hearsay, clinical whims, forcing tradition, or maintaining habits.
The challenge is still to answer what – learning from the past 23 years – constitutes the best available evidence to inform therapeutic decisions. In the 1990s we did ask this, but without sufficient critical analysis and without the great benefit of two decades of trying to apply data from existing methods to clinical practice.
We also need to stand up as a profession and be genuine, honest, and robust. We should not fall into the trap of deceit and rhetoric by claiming to be evidence-based when we don’t even know what that means.
The big question which we haven’t yet asked as a responsible profession is – to quote Michael Loughlin – “what precisely is it that we are buying into?”
Why don’t we lead the way in taking the best of what we know from scientific inquiry so far, and develop ways of generating evidence which actually serve to inform therapeutic decisions?