This was the motion of a debate which took place at the end of the recent PhysioUK2015 Conference in Liverpool. There was a lot of hype about this, and then it happened. I thought it worked well; these things are always a bit of a gamble. So a huge congratulations to Ralph Hammond, Steve Tolan, and Carley King for constructing this session. Here’s a bit about how it worked:
There were two speakers ‘for’ the motion and two ‘against’. ‘For the motion’ were:
Professor Sallie Lamb, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford
Professor Rob de Bie, Maastricht University, the Netherlands.
‘Against the motion’ were:
Professor Michael Loughlin, Professor of Applied Philosophy, MMU
Me.
The debate was successfully chaired by the lovely ex-Chair of the CSP, Helena Johnson. I am glad I was a speaker, and not the chairperson.
Each speaker spoke for about 7 minutes, and then Helena chaired a question and answer session, taking questions from the audience, and via Twitter. This went on for some time. At the end, Helena called for a show of hands ‘for’ and ‘against’. The ‘againsts’ ‘won’.
And then the fun started.
I suspected that if the ‘againsts’ won, there would be a misinterpretation of what was being debated. Sure enough, social media and face-to-face feedback confirmed this. There seemed to be a feeling of “great, the vote went against the motion, and so that means research doesn’t matter”. Let me try and put the record straight. First, re-read the motion. This is not about whether research matters or not, it is about a detailed point of evidence-based practice – could there be situations where it is acceptable within an evidence-based practice framework to provide a therapeutic intervention which isn’t supported by research evidence? Note that this is also not about using interventions which have research evidence demonstrating a lack of effectiveness. It’s just about absence of evidence.
So, the point is one of detail, and one which promotes polarisation, such is the aim of a well-considered debate. The point, as per a couple of Facebook comments before the event, could be considered moot. But I don’t think so. I think the motion forces us to think a little harder about the relation between research evidence and clinical practice, and I think a few important themes emerged from the debate. I’ll now summarise each speaker’s argument, and highlight what I think are some important issues for physiotherapy and how it conducts itself.
FOR 1 Sallie Lamb: RCTs are the gold standard; we do not know what harms and what heals without RCT evidence; therefore all interventions should have at least RCT-level evidence before their integration into clinical practice. To be taken seriously, our profession needs to get its head out of the sand and start to justify the use of therapeutic interventions with RCT-level evidence.
AGAINST 1 Me: Some interventions have such large effect sizes and/or are impossible/unethical to trial that they may be used in the absence of research evidence. I used an example of a therapist intervening to stop a patient falling down stairs. This is known as the paradox of effectiveness, and other examples include the Heimlich manoeuvre, anaesthetics, and parachutes. I also argued that unlike medicine, where new drugs are developed in pharma labs, physiotherapy interventions are developed on the clinical floor. If this weren’t allowed to happen, we would soon run dry of interventions to trial and, given that effect sizes eventually converge to zero, the end of our profession would be nigh (for dramatic effect).
FOR 2 Rob de Bie: Reinforced Sallie’s argument, using more examples of where human observation errors had led us to believe something to be the case, until eventually RCTs showed something different. Take home: again, RCTs are the best way of understanding the level of effectiveness of an intervention.
AGAINST 2 Michael Loughlin: Set out a broad social and professional picture of the history of evidence-based medicine (EBM) and highlighted historical and contemporary challenges. He pointed towards poor and ill-presented arguments used by strict proponents of EBM, who have relied on platitude, caricature, and ridicule in enforcing research evidence within clinical practice. This has masked the limitations and failures of EBM, which are currently being highlighted by a renaissance movement. Further, he advocated a broadening of the concept of ‘evidence’ to include much more than the outcomes of clinical research. To be truly professional, physiotherapy should question what it is buying into, and as it stands, it doesn’t.
Note that none of the ‘against’ arguments were against the idea that practice should be based on the best evidence. The question is “what is the best available evidence to inform therapeutic decisions?” I was asking this in terms of examples of clinical practice; Michael was asking what it means to buy wholesale into a movement which has so far failed to provide satisfactory explanations of its own commitments to evidential sources. I can’t expand on anyone’s arguments but my own from now on for fear of misrepresenting them, so I’ll just try and explain myself. Note: ‘evidence’ should be thought of as ‘the available body of facts or information indicating whether a belief or proposition is true or valid’. You could spend a few years analysing the vast literature about the definition of evidence, but you will come back to this OED version.
Paradox of effectiveness
I used this example of a therapeutic intervention.
Extreme/silly/obvious/obtuse/whatever, I presented it in this way to highlight an important point about what is the best available evidence for a therapeutic decision. Note that this argument has often been used to make the case against evidence-based practice. That presents the paradox as a straw man, and this IS NOT what I was doing; in fact, the opposite. Kenny Venere has provided a great overview of the traditional use of the PoE argument – do read this. My example set out merely to illustrate that in this case it seems intuitive to say the therapist did the right thing, despite there being no research evidence to suggest that it was the right thing. So then, on what grounds was it the right thing? The intuition arises because the suspected effect size is so large – it seems like the therapist has prevented the patient from falling down stairs and risking possible significant injury. But we don’t know that in any systematic way, as would be desired by a commitment to research. Of course, we may have seen people fall down stairs in the past, but as EBM is founded on the premise that human observations are biased, we cannot trust that this is actually the case. If we did, the whole business of EBM would collapse.

Facetious alert: yes, I am being facetious here, but I’m playing the EBM game. The ‘human bias’ argument is all too often presented to support the integration of systematic research data into clinical decision making. Not only does this play its part in some form of fallacious kettle logic, along with slogans such as “well, what’s the alternative?”, it is also still unclear to what extent human biases operate in real-life reasoning. Most of the data to support the ‘human bias’ argument come from tests in experimental situations, which have themselves been shown to artificially inflate the degree of bias. When white noise is reduced and content knowledge is increased, human bias error is often insignificant. Reducing noise and increasing knowledge is what physiotherapy education teaches us to do. Anyhoo, back to the story…
So, to say that the therapist’s actions are based on existing observational evidence which compared what did happen to a situation where it didn’t happen is a non-starter. If the evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) does not or could not come from an external source, then it must come from an internal source, internal to that therapeutic alliance. This, then, is not research evidence.
This internal evidence is an emergent feature of the physical, psychological, and social interactions of two human beings. It is a complex and non-linear process. That is to say that the therapist is not consciously placing a series of discrete events in a temporal order (even though a posteriori analysis could reduce it to such, i.e. (a) patient wobbles; (b) therapist puts out hands; (c) patient is safe). Rather, she is behaving as a human who cares for another human and seeks to act in a way which is beneficial to his health. This is complex, context-sensitive, and holistic – all the things that research tries to control for, ignore, or be the opposite of. The source of the clinical evidence is held fully within the space between the two parties, being fed into by the behaviours, thoughts, and experiences of both parties. The actual clinical evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) emerges from this space. It is informed by what each party has experienced before, and could be explained by appealing to such things as laws of nature, professionalised knowledge, etc. – e.g. the therapist has experienced falling objects which adhere to laws of motion and the idea of gravity. The patient may have experienced falling before, and his memory of this prompts him to look fearful, or make sounds and actions which indicate that he is frightened. Between them, an action is developed which seems right. There is plenty of evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform the therapeutic decision. Note that this situation is quite different from a therapist using, say, energy from crystals to prevent the fall. Biopsychosocially implausible interventions aren’t even in the starting gates.
Now, Rob de Bie called this example “a haphazard reaction to an unusual and emergency situation, and this is not what physiotherapy interventions are”. And I agree, sort of. I was (again) being facetious and obtuse, but being so in order to highlight that there are situations where interventions can be justified by evidence which is not research evidence, and that to accept the motion would mean accepting that, in this case, the therapist should not have saved the patient. However, in being obtuse I also wanted to raise the broader point of “well, what is the best evidence to inform a clinical decision?”
So what is the best evidence?
Imagine now if there were in fact some excellent systematic reviews/meta-analyses of high-quality RCTs which supported the use of therapists putting their hands up to save patients from falling down stairs. What would now be the best evidence to inform this clinical decision? Would it be the statistical average from the meta-analysis of multiple high-quality RCTs, or would it be that a human being was falling towards you? You are, of course, quite at liberty to err on the side of the research. However, if you say that the research evidence is the best evidence to inform this clinical decision, then you are committing to the assumption that, in this case, facts and information from a distant population which is not this patient are a better indication of the best action than the clinical evidence emerging from the individual situation.
And that’s fine, but now you have to answer me this: on what grounds can you satisfactorily explain to me the assumption that population data is more informative to an individual clinical situation than the emergent clinical evidence of that situation? And you have to do this without using platitudes, caricatures, or ridicule. Good luck.
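For what it’s worth, the gap between a population average and an individual patient can be made vivid with a toy calculation. Below is a minimal sketch in Python, using entirely made-up numbers (not from any real trial), of how a healthy-looking mean effect can coexist with a large minority of patients who show no benefit at all:

```python
import random

# Hypothetical illustration only: a trial population where the average
# effect is clearly positive, but individual responses vary widely.
# Assume 60% of patients improve by ~10 points and 40% by ~0 points
# (made-up numbers, purely for illustration).
random.seed(42)
n = 10_000
responses = [random.gauss(10, 3) if random.random() < 0.6 else random.gauss(0, 3)
             for _ in range(n)]

mean_effect = sum(responses) / n
no_benefit = sum(1 for r in responses if r <= 0) / n

print(f"Mean effect across the population: {mean_effect:.1f} points")
print(f"Proportion showing no benefit:     {no_benefit:.0%}")
# The population 'average' (~6 points) says nothing about whether the
# patient in front of you is in the responding group or not.
```

The numbers are invented, but the structure of the problem is not: a meta-analytic mean is a property of a population, not of the person falling towards you.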
How much evidence?
Now here’s a second puzzle. Taking the ground rules of EBM literally (and that is all we can do, otherwise what should they be taken as?), the evidence for therapeutic decisions should come from systematic reviews of multiple RCTs. Single or non-reviewed RCTs won’t cut it, due to the chance of erroneous findings. Now we need to understand the phrase best available evidence from another dimension. By its own rules, EBM would say that normatively the best available evidence for a therapeutic decision is systematic reviews of multiple high-quality RCTs. This is rarely available, however. So we use less stringent evidence, perhaps a couple of RCTs which have not been systematically reviewed. However, because of the rationale for systematic reviewing, this cannot be evidence of therapeutic effectiveness. The discriminating factor here is the way that population studies establish the notion of causation: anything below the said level is not indicative of a causal association. So, when we say something like, ‘OK, it’s not the evidence we would hope for, but it’s something at least’, we are not using evidence of causation at all; we are using sources of evidence which are vacuous and as such cannot inform us of a possible predictive link between doing the intervention and achieving an outcome. In other words, the function of research evidence becomes purely rhetorical and nothing at all to do with clinical effectiveness. So, in the vast majority of situations, I ask again, what is the best available evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform that therapeutic decision? Would you rather use research evidence which is vacuous and simply rhetorical, or clinical evidence emerging from that therapeutic alliance?
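To put a rough number on ‘chance of erroneous findings’: under conventional statistical rules, even trials of completely ineffective interventions will come up ‘positive’ some of the time. Here is a back-of-envelope sketch; all three inputs are assumptions for illustration, not figures from any real data:

```python
# Back-of-envelope sketch of why a single 'positive' RCT is weak evidence.
# All inputs are assumptions: alpha = 0.05, power = 0.80, and 10% of the
# candidate interventions being trialled genuinely effective.
alpha = 0.05            # false positive rate per trial
power = 0.80            # chance a truly effective intervention tests positive
prior_effective = 0.10  # assumed share of trialled interventions that work

p_positive = prior_effective * power + (1 - prior_effective) * alpha
p_true_given_positive = (prior_effective * power) / p_positive

print(f"P(intervention works | one positive trial) = {p_true_given_positive:.0%}")
# ~64% under these assumptions: roughly one in three single 'positive'
# trials would be a false lead, hence the demand for systematic reviews
# of multiple RCTs.
```

Change the assumed prior and the answer moves a long way, which is precisely why a single unreplicated trial carries so little warrant.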
This raises a professional issue: if we want physiotherapy to be ‘evidence-based’, what are we counting as evidence? If it is anything below the highest levels, then we are not actually talking about clinical effectiveness. But it might look good to an outsider – at least it’s something. To me this is at best ignorant, and at worst purposefully deceitful.
I won’t go on here about the further problems associated with justifying therapeutic decisions on evidence which does in fact fulfil the criteria for causation, i.e. the best systematic reviews and such. I’ll leave that to others for now.
Look, remember this is not about whether research matters or not; it does. It’s now a case of identifying where different evidential sources fit into therapeutic decision making. RCTs and beyond are relevant, but their constraints must be considered. The statistical analysis necessary to ensure high internal validity makes it essential to appreciate that optimal warrant is given only to the primary hypothesis, and is applicable only to the sample population in the trial. We can still learn something from these data, though.
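One way to see why warrant attaches only to the primary hypothesis: the more outcomes a trial tests without correction, the more likely it is that at least one crosses the significance threshold by chance alone. A quick sketch of that standard arithmetic, assuming independent tests at the conventional alpha:

```python
# With alpha = 0.05 per test, the chance of at least one spurious
# 'significant' finding grows quickly as more uncorrected outcomes
# are tested (assuming independent tests).
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} outcomes tested -> P(at least 1 false positive) = "
          f"{p_any_false_positive:.0%}")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```

This is why a trial’s secondary and subgroup findings, however tempting, do not carry the warrant of its pre-specified primary hypothesis.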
And now where..?
OK, I fear I may not have helped myself in trying to appease the backlash of “research doesn’t matter”. Once again, IT DOES. All I have done is highlight some possible, and some real, challenges with EBP which we are only beginning to see after 20 years or so. Research does matter. Evidence does matter. However, the questions from the evidence-based practitioner should no longer be the ones from the 20th century, e.g. “which interventions are supported by RCTs”. The modern EBPer should ask 21st century questions about evidence such as “which evidence is most likely going to inform the multitude of decisions within this therapeutic interaction?”
The motion “This house believes that in the absence of research evidence an intervention should not be used” was an excellent prompt to revisit some fundamental questions about the relationship between research and clinical practice. We must be clear that interventions should, regardless, be based on evidence, and that is uncontroversial. The rejection of this motion IS NOT a green light for ‘clinical freedom’, basing predictions on past experience alone, hearsay, clinical whims, forcing tradition, or maintaining habits.
The challenge we have is still to answer what – learning from the past 23 years – constitutes the best available evidence to inform therapeutic decisions. In the 1990s we did ask this, but without sufficient critical analysis and without the great benefit of two decades of trying to implement data from existing methods in clinical practice.
We also need to stand up as a profession and be genuine, honest, and robust. We should not fall into the trap of deceit and rhetoric by claiming to be evidence-based when we don’t even know what that means.
The big question which we haven’t yet asked as a responsible profession is – to quote Michael Loughlin – “what precisely is it that we are buying into?”
Why don’t we lead the way in taking the best of what we know from scientific inquiry so far, and develop ways of generating evidence which actually serve to inform therapeutic decisions?
Hi Roger,
I wasn’t in the UK. I really didn’t follow all the tweets either.
How about this scenario? What if an experienced clinician has been working with patients with a particular problem and applies the evidence but begins to believe there is a better way? What if the experienced clinician just knows that patients deserve better and need someone in the trenches to step up to the plate and put all sorts of experience and indirect research into a different package to see if better results can be generated? Of course, the experienced clinician has to be humble and has to figure out a way to compare the results. Clinicians in the trenches don’t typically have a team to go to for designing an RCT to test out the new approach. I believe it is important to support those clinicians going against the status quo because they really believe there is a better way. Granted, I do need to qualify the support: it applies if the clinicians are being honest and genuine, rather than creating a tribe while elevating themselves.
I used to send a letter to physicians providing direct research supporting the interventions I would provide for each patient, whenever I had supporting research (call me crazy… it was just an experimental game). I also kept a checklist where I noted whether or not I sent a letter to the referring physician. Would you believe that 40% of the time I had no perfect research that was a definite “this is what I am strongly basing my interventions on”? 40% of the time I made educated guesses… reassessed… changed course of intervention when the anticipated outcome wasn’t occurring within an anticipated period of time. I don’t know if anyone else has ever kept a log of how he/she makes decisions, but I do know I learned I don’t know it all, and I tend to grasp at treatment plans because 40% of the time the patient in front of me didn’t fit into some easy category where it was simple to apply the evidence.
Just something to think about.
And I think you know a bit of me well enough to know, I’m not trying to stir the pot. I’m genuinely curious.
Selena
Dear Roger,
This is a brilliant piece (again!). I am an Italian PT who worked six years full-time as an MSK clinician, and I now also have three years of full-time research experience in the epidemiological field. Adopting and living with both perspectives (clinician and researcher) made me wonder for quite some time about these concepts, which you were able to put into words in such a nice and logical way.
Some parts of your post are just amazing and really deserve to be repeated and, above all, addressed to EBP producers and users in future conferences and personal meetings. A couple of examples:
A) “On what grounds can you satisfactorily explain to me the assumption that population data is more informative to an individual clinical situation than the emergent clinical evidence of that situation?”
B) “Research does matter. Evidence does matter. However, the questions from the evidence-based practitioner should no longer be the ones from the 20th century, e.g. “which interventions are supported by RCTs”. The modern EBPer should ask 21st century questions about evidence such as “which evidence is most likely going to inform the multitude of decisions within this therapeutic interaction?”
I also take the liberty of presenting a couple of comments that might be useful (or not…) along your line of thinking. It would be really great to have your feedback on these:
1) In my modest opinion, your point regarding “How much evidence?” is partly valid, as the classical ‘best available evidence’ definition (in the EBM framework) has already been replaced by ‘high quality evidence’ in the book (the bible of the genre?) Practical Evidence-Based Physiotherapy by Herbert and others (2010 version). When I read it 2–3 years ago, I thought this was already a step forward, and it probably already addresses your issue on this point. Is that correct or am I missing something?
2) More importantly, what I find always missing in this type of discussion regarding evidence being useful (or not) for clinical practice is a differentiation in points of view (don’t worry, I get that the discussion here is about the use of interventions for which there is a lack of evidence). As I wrote before, I have worked both as clinician and researcher and, since I made the switch to the academic world, I have never thought that my research should change the way all clinicians work, or that all clinicians should only use interventions for which there is high-quality research, etc. However, I have always thought that my research, made at the population level, would influence the policy-making (of health care systems, health insurances, or hospitals) for health interventions that are delivered at the population level. At that level (always in my modest view), high-quality research is probably the only way to go, and the subjective opinions of policy makers or clinicians cannot replace what is generated on a larger scale. So, I would say that high-quality research is fundamental to inform policy-making but, narrowing it down to daily clinical practice, the discussion becomes much more difficult, as you present it. Do I have a point here, or do you think that in policy-making too, interventions lacking evidence should be considered if some influential clinicians think they are important?
BTW, regarding daily clinical practice and research, I totally agree with you: we don’t really know what being evidence-based means, and we should seriously wonder why no one (after more than 20 years) has yet spent the time to deeply understand and explain the integration of the three EBP components, and to “develop ways of generating evidence which actually serve to inform therapeutic decisions”.
Nice balanced view of EBP here Roger! When the EBP movement started, I think the PT profession took it on board fully and probably took too ‘hard line’ a view, myself included. The pendulum is swinging back, and I think your viewpoints present a nice middle ground between ‘hard line’ EBP and totally expert-opinion-driven practice. Perhaps you can map out where the profession should be moving in terms of both research and practice? Do you foresee a different type of research paradigm where researchers, clinicians, and patients collaborate to continue solid scientific inquiry related to PT practice? What would this look like?
Bill
Thank you for your discussion. You raise several key points, but I especially agree with you that the rejection of the motion should not be used as an excuse to ignore significant objective evidence in preference to our established habits of practice.
This is well illustrated by a series of debates on iCSP regarding Bobath/NDT and a truly excellent blog by Prof Sarah Tyson
(see https://sarahtphysioblog.wordpress.com/2015/10/21/does-bobath-work/ and from there the rest of her series of blogs).
Here the weight of evidence is overwhelming that NDT/Bobath is, at best, suboptimal, yet this treatment approach is still commonly used and stoutly defended. One therapist on iCSP quoted the motion you discussed as a possible defence of Bobath. But in relation to Bobath I think the motion could be reworded to “This house believes that in the presence of significant research evidence that an intervention is ineffective, it should be abandoned”. Bye bye Bobath.