There is a crack, there is a crack in everything, that’s how the (truth) gets in*: a (non) scientific conversation.

*(Leonard Cohen’ish)

I’m really not one to get drawn into SoMe arguments etc. (no, seriously), but there is something really enticing about a recent discussion on Twitter on which I am inclined to cast some further thoughts. Note that this is not any sort of attack on individuals (of whom I hold the utmost respect). Maybe it’s more about the complexities of SoMe, the desire and passion to promote what are thought to be the most important health care messages, and so forth. Maybe.

IT BEGAN WITH a nice, brief editorial published by leading musculoskeletal researchers calling, quite rightly, for better evidence-informed marketing within physiotherapy, specifically regarding low back pain: “Associations should ground their marketing on rock solid research data”. So far, so good. The authors suggested that some physiotherapy interventions were harmful, to quote:

“Early access to harmful or ineffective physical therapy treatments (eg, kinesiotape and electrotherapy), irrespective of timing, is unlikely to improve patient outcomes7” (ibid)

Still so far so good. Citation “7” is an (excellent) summary of thought and position on the gap between evidence and practice. However, other than brief comments about risks associated with NSAIDs, opioids, epidurals, and spinal fusion, there are no data, nor reference to any data, about harm of physical therapy interventions in the cited paper.

SO THEN A COUPLE OF COLLEAGUES AND I wrote a brief response asking for clarity on the “harm data” issue. We, of course, agreed entirely with the comments about lack of effectiveness, closing the gap between evidence and practice, etc, but it was the harm statement we were interested in. Basically, we asked if the authors could provide data for claiming that some physical therapy interventions were harmful, or sensibly withdraw that statement. We felt this was important in order to get the best scientific appreciation of the state of physical therapy as possible, and were concerned about how therapists, patients, stakeholders, etc, might interpret this.

THEN TWITTER HAPPENED. This is a summary of the responses to the question “could you please provide data on harm for physical therapy interventions, eg, kinesiotape and electrotherapy”. This is about logic and dialogue, not about individuals, so I am not stating names here. Tweets are in “quotes italics”, followed by *my comments in asterisks*:

“If people are demanding that ‘harm’ be defined; why are they practising? I would have thought this concept is a core element of any health training program”

*A circumstantial ad hominem response. A passive-aggressive comment about a person’s situation with the aim of undermining the person asking the original question*

 

“There has been a recent demand on Twitter to define “harm” in relation to physio interventions. I want to know how to define “benefit” and if research is scrutinised to the same extent when claiming a treatment is beneficial without justification beyond statistical significance”

“Do you mean adverse event rates in PT trials? I attach data for PROMISE (Lancet 2014; 384: 133–4)” (NOTE: these data related to 5/157 adverse events from exercise and advice  groups in a neck pain trial, not harm caused by “physical therapy interventions, eg, kinesiotape and electrotherapy” in people with low back pain)

*Distraction fallacies. Intending to distract the person from the original point. The main argument may thus never be completed to a logical conclusion*

 

“Sorry, but I am lost on what you want. I provided harms data as per the report you referred me to”

*Argumentum ad nauseam. Keep avoiding the core question until all parties lose interest*

“Hardly a causal claim. Looks like you’ve misinterpreted what we wrote”

“…but IMHO that was not a causal statement as some have been trying to say”

“No, definitely not, it does NOT say physiotherapy is harmful, agree. It (sort of) says that some physiotherapy interventions may be harmful”

“Those words weren’t in the editorial; they are yours. I thought the editorial was referring to ineffective treatments that have potential for adverse effects”

*Informally denying the antecedent. If the premise is not true, then neither is the conclusion. What bit of “harmful or ineffective physical therapy treatments” is not causal?*

 

“Clutching at straws Roger. Save your time”

*Avoid the issue, close down the argument and we can never be proved wrong*

Followed by:

“                                                                                                                               ”

*Argumentum ex silentio. Say nothing and parties will assume there is nothing to argue against*

 

BUT FINALLY! An admission that THERE ARE NO DATA ON HARM for the sorts of interventions referred to:

“There is body of literature on this complex issue. e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4284159/ …  Many trials will be biased toward the null. No evidence of harm ≠ safe”

*and on this logic, then, no evidence of harm ≠ harmful*

“If harm=stat sig more AEs with PT; then I cannot recall an example as most PT RCTs are under-powered to detect AEs. But the imprecision means you cannot call them safe either. Double-edged sword.”

Phew! We got there! But this experience raises a couple of questions:

  • Why did a group of leading international scientific researchers and their supporters employ so many logical fallacies and avoidance strategies in response to a clear, simple, black-and-white question about data?
  • Why did a group of leading international scientific researchers make a statement about cause and effect in a professional, scientific, peer-reviewed publication that they knew not to be true?

‘But this sentence about harm was such a tiny part of what was written in the original article’ you cry.

Indeed it was, my friends. But like a tiny crack in the door, this is how the light gets in. Is this small, seemingly “clutching at straws” statement a light-signal of truth about scientists fabricating statements in order to support a particular agenda, perhaps?


If so, if such people are consciously willing to fabricate a causal statement like this, what else are they willing to fabricate? This is an issue of scientific and professional integrity, creditworthiness, and trustworthiness. These issues are especially stark given that the false statement was part of an explicit call for better use of scientific evidence and “rock solid data” in professional dialogue.

Finally, I’m going to fabricate a new fallacy of my own: argumentum ad ironicus©

“I enjoy tough questions on twitter. But there needs to be respect & integrity; & debate needs to be grounded in science. If those are missing I opt out”

*Bye bye*

For your listening pleasure: Leonard Cohen, Anthem

Image credit: cracked door, Alec Squire, Flickr 2013

UPDATE, Jan 2019:

blocked

#RollingEyesEmoji


Physio Will Eat Itself


I suppose this is about change. We might be doing ourselves irreparable damage you know, as a profession. I just read Dave Nicholls’s Should we give up physiotherapy?, and I just saw Kettlebell Physio Neil Meigh’s Facebookey Livey thingy, and I’ve been concerned about the de-commissioned physiotherapy services in Nottingham, and been interested in the proliferation of “myth busting” initiatives (all good stuff BTW, except for the de-commissioning), and wondered about a tone and trend being witnessed within the profession, perhaps best exemplified by posts like this:

[Screenshot: Meakins’ post on low back pain]

Now before Meakins gets all “oi you little barrrstid, that’s right proper fackin’ science that is you old spanker” on me, I’m not saying this is bad/wrong/evil/or anything – in fact the opposite. It’s lovely. Meakins’ post is, as always, a provocative and grounded signal to make all of us sit up and think a bit more about what we are doing and who we are. There are two dimensions to this post: first, something about treatment of LBP (but it could be any area of practice), and second, something about the quality, or state, of practice. Both essential areas to continually be reflecting upon in the light of new evidence.

Dave Nicholls’ blog asks us a critical question related to Sir Meakins’ first point: “if it were in the best interests of patients or the healthcare system as a whole, for us to disestablish physiotherapists, would we do it?”

Kettlebell Neil talks about not just the quality of practice, but also the extent of practice – wondering if we are spreading ourselves too thinly in some sort of desperate professional attempt to shift our allegiances and pretend to be things we are not.

These two points are of course related. If the evidence starts to question the utility of what we do at the moment, does it not make sense to become involved in other areas? The move away from multi-modal, structure-based interventions for LBP, for example, might be made as we see more evidence in favour of simple exercise-based interventions. Or indeed no intervention at all.

So what is the issue with all this?

[OK, USUAL DISCLAIMER: I am of course wholeheartedly in favour of implementing the best evidence to ensure the best outcomes for people we engage with (what constitutes ‘best evidence’ is, however, a discussion for another day), blah, blah, blah. So before you try and use the “you don’t like change or science” argument, au contraire mon ami. In fact, during a recent office clear out, I happened upon this letter I sent to Frontline in 1999 advocating for change and science (and referring to Bob Dylan):

[Image: letter to Frontline, 1999]

So there. Get over it].

The issue is, I think, one of professional identity, and what this means for the people we serve. The increasingly active dialogues which happen within the profession (perhaps the SoMe Echo Chamber is the best example, although these dialogues do occur elsewhere) are forever encouraging us to think, challenge our own beliefs, reflect, and progress. And quite rightly so. But whilst this is happening, there is an emerging phenomenon which seems to be getting ignored – how do we look from the outside? I wonder if anyone outside the profession knows who we are or what we do. In fact, I wonder if many from within know these things.  What is our identity? What is our USP? Who even are we?

Here’s one part of the problem. Our (possibly ever-so-slightly overly) enthusiastic attitudes to “new evidence”, which soon turn into vitriol:

“. . . you are such a loser for still doing foam-massage-mobilisation-needle-specific-electro-release techniques when this n=17 uncontrolled trial in Anals of a Physiotherapist’s P-value says it may or may not be better than tele-instructed stair-walking whilst reading a pamphlet about pain and pretending not to be frightened and I stopped doing what you’re doing like yesterday and now I’m better than you because I watched a well-produced slightly humorous video of a popular and charismatic physio god probably from Australia or America or somewhere like that with 11,000 Twitter followers who doesn’t do any grade III+ mobilisations anymore but uses metaphors instead and who I think represents the very best of science although I haven’t actually read it myself but he/she’s got a tan and a nice smile and is from abroad so it must be true and anyway I’ve read a pop-sci/psych/phil book so I know what a fallacy is and you do all of them and now I actually work as a Band 17b Advanced-Extended-Diagnostic-Rehab-Strengthener-Specialist-Wish-I’d-Been-a-Doctor-Consultant and I can do injections and prescribe Haliborange and do advanced metaphorical squats and minor surgery on gerbils so there you creepy practice-based-on-your-own-experience bastard fuckwit of a no-hoper.” (Author’s personal communication)

It goes without saying that both parties here are right and wrong in roughly equal measure. However, this vitriol soon turns into grand claims about what we should or shouldn’t be doing as a profession, and whilst all this is happening our public and our employers and our commissioners peek cautiously through the gap in the door and think “who the twattin’ ‘ell are this bunch of jokers?”.

Of course evidence and debate and progression are critical to the functioning of a profession. But this should be done with not only an understanding of the data, but also with a meaningful understanding of the sociocultural context in which such data are intended to be used. Humans and societies do not change quickly, even in the presence of overwhelming evidence that they should. Whilst we try and initiate rapid change in what we do, we risk alienating those we serve. If people are unsure who we are, they won’t sit around with their pain and their money and their contracts waiting for us to come to a consensus on whether we should do a squat with or without our bellies pulled in. No, they will seek the care and services of those they recognise and have confidence in. As they drift away, they leave us bickering and squabbling between ourselves, slavering with excitement at a few more Facebook likes, eating ourselves until we are no more.

Then that’s it. The end. Finito.

Be careful what you wish for peeps. Stay focused. Bridges not walls eh.

For a more considered and erudite argument related to this matter, I have recently had the joyous experience of writing a chapter with the incredible Fiona Moffatt for the remarkable Dave Nicholls’ astonishing CPN’s startling forthcoming anthology on Physiotherapy. Watch that space. But while you’re waiting.

[Banner logo adapted from © Pop Will Eat Itself (http://www.popwilleatitself.net/pwei/)]


Medicine, Philosophy & Hair Gel

So this week’s been fun. First tweet seen the other day was a link to CPN’s Dave Nicholls’ blog about the Biopsychosocial (BPS) model. As ever, Dave provided a short, focused, erudite commentary on the use and possible limitations of a currently favoured healthcare model. This adds to and reflects a large literature on the subject and provokes thought towards where we might go from here. One day. Perhaps. I thought it was quite nice.

However, all was not straightforward. A quick retweet ended up with a barrage of seemingly anti-critique-of-the-BPS-model come-backs. All in good spirit, I’m sure. But I do think a couple of things have arisen from this Twitter dialogue. Let me summarise the main themes of the responses. I think one issue is the direct critique of the model. Seems folks don’t like this. The second is the place of philosophical discussion within medicine and healthcare. Seems folks don’t like this either. What struck me most was that many comments were being fired from folks who I consider to be among the most progressive and critical thinking practitioners I know (and I love them all the more following these recent exchanges). For me though, this meant some personal reflection on my position. Was I missing something? So I’ve been pondering on the discussions and here’s what I think.

First,  the reaction to critique of the BPS model. Here are a couple of responses as examples:

[Screenshot: example responses]

Now I do get that we might want solutions and not just challenges. But typically solutions come after something solvable is exposed, which is usually done by challenging an issue. It would be unusual to have a solution to a problem which hasn’t yet been identified. So to expect answers to these sorts of (valid) questions is to some degree overly ambitious. The development of the BPS model came a long way after exposing the limitations of the biomedical model, and it was only through realising these limitations that thought towards solutions could be presented. There is also a suggestion here that we have some sort of complete understanding of the human condition and we are at the end of our scientific journey. This is sort of missing the point of science. So I do apologise King Tom and Queen Jack, but can I get back to you when science and critical analysis, as per the history of forever, have had a bit more time please? Thanks. Smiley face.

The next concern was about over-philosophising in medicine and the health sciences.  Here’s an excellent quip from Sir Jason of America along these lines:

[Screenshot: tweet]

. . . aaaaand again:

[Screenshot: tweet]

Guys, once again, I see your point. We are practical people, we want practical answers. We don’t want to waste our precious time – nor that of the humans we work with (see how I didn’t say ‘patients’) – trying to plough through impenetrable theory, especially if the practical relevance isn’t obvious.

And that’s the point.

With abstract critical analysis and complex thinking, the relevance often isn’t obvious. That doesn’t mean there is no practical relevance though. The relevance may come with time and further analysis. Who would have thought that in 1748 some abstract philosophy on regularly occurring events and counterfactuals would lead to the undertaking of the first medical randomised controlled trial some 200 years and tomes of abstract philosophy on the same subject later? Who would have gambled that abstract thought on complex and imaginary numbers in the 16th century would provide the key to developing alternating current just a few short centuries later? There is an extreme sociopolitical worry when movements are made to police thought in general. But in science and technology in particular, the consequences of curtailing thought based on ‘I can’t see any immediate practical application’ could be dire for the progress of the world, and in our case global health.

It’s totally fine to have a favoured model. But it won’t last forever. Nowt ever does. Except for hair gel. Hair gel lasts forever.


Peace, love and understanding.

 


Communicating Risk: Part 2

This is the second excerpt from my chapter on communicating risk in Grieve’s Modern Musculoskeletal Therapy 2015 (Vol 4)

Part 1 introduced the idea of, and the challenges in, understanding and communicating risk. Part 2 now focuses on relative v absolute risk, probabilities v natural frequencies, and communication tools.


Relative versus absolute risk

Misinterpretations of absolute and relative risk contribute to data users’ anxieties and misunderstandings (Mason et al 2008). Absolute risk (AR) can be the prevalence (or incidence), or indicate the absolute difference in risk between two groups. Relative risks (RR) – and their counterparts, odds ratios (OR) – are the ratio of the ARs in the two groups, forming a relative difference. RRs may help to make comparative judgments, e.g. “this is riskier than that”, and this way of communicating is encouraged in evidence-based medicine. However, RRs are more persuasive than ARs and make differences in risk appear larger than they are (Gigerenzer et al 2007). They are over-reported in the lay press and in research reports where authors want to exaggerate differences (Gigerenzer et al 2010).

“If the absolute risk is low, even if the relative risk is significantly increased to exposed individuals, the actual risk to exposed individuals will still be very low” (Gordis 2009)

A statistic related to absolute risk is the number needed to harm (NNH). NNH is the inverse of the absolute risk difference. Although NNH might seem to hold informative content (Sainani 2012), a recent Cochrane review concluded that it is poorly understood by patients and clinicians (Akl et al 2011). In summary, both RR (including OR) and NNH are poor means of communicating risk, and AR should be favoured (Fagerlin et al 2011, Ahmed et al 2012). Table 1 shows examples of how the same risk can be represented in these three different formats.

[Table 1 image]
Table 1 Communicating the numerical risk of stroke following manipulation. Although precise figures are difficult to obtain, the best existing estimates of event rates are used to calculate risk. To represent a ‘worst-case’ scenario, we can use one of the highest estimates of the rate of stroke following manipulation (6:100,000, Thiel et al 2007), and a conservative assumption about the risk in a non-manipulation group, say 1:100,000 (e.g. Boyle et al 2009)
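As a minimal sketch of how these three formats fall out of the same event rates, the following Python snippet uses the worst-case figures above. The helper function and its name are my own illustration, not from the chapter:

```python
# A minimal sketch of the three risk formats in Table 1 (illustrative only).
# Worst-case rates: stroke in 6 per 100,000 with manipulation versus
# 1 per 100,000 without.

def risk_formats(events_exposed, n_exposed, events_control, n_control):
    """Return absolute risks, the risk difference, relative risk, and NNH."""
    ar_exposed = events_exposed / n_exposed
    ar_control = events_control / n_control
    ar_difference = ar_exposed - ar_control     # absolute risk difference
    relative_risk = ar_exposed / ar_control     # ratio of the two ARs
    nnh = 1 / ar_difference                     # number needed to harm
    return ar_exposed, ar_control, ar_difference, relative_risk, nnh

ar_e, ar_c, ard, rr, nnh = risk_formats(6, 100_000, 1, 100_000)
print(f"Absolute risks: {ar_e:.4%} vs {ar_c:.4%}")   # 0.0060% vs 0.0010%
print(f"Absolute risk difference: {ard:.4%}")        # 0.0050%
print(f"Relative risk: {rr:.0f}")                    # 'six times the risk'...
print(f"Number needed to harm: {nnh:,.0f}")          # ...one extra event per 20,000
```

The same risk sounds dramatic as a relative risk (“six times the risk”) and reassuring as an absolute one – which is exactly the persuasion gap described above.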

 

Probabilities v Natural frequencies

So far we have considered risk expressed as some sort of probability. Alternatively, natural frequencies (NF) can be a clearer way of representing risk (Akl et al 2011, Gigerenzer 2011). NFs are joint occurrences of two events, e.g. a positive result on a clinical test and the presence of a condition. In terms of risk prediction, we may be familiar with probabilistic ideas of specificity, sensitivity, positive predictive value, etc. Although commonly used (for example, these form the core of clinical prediction rules), these statistics are a consistent source of confusion and error (Eddy 1982, Cahan et al 2003, Ghosh et al 2004). Reports have suggested that the human mind might be better evolved to understand risk in terms of NFs (Gigerenzer and Hoffrage 1995; Cosmides and Tooby 1996). NFs are absolute frequencies arising from observed data. Risk representation using NFs avoids the complex statistics of probability expression, whilst maintaining the mathematical rigour and Bayesian logic necessary to calculate risk. Table 2 compares probabilistic statistics and NFs for adverse event prediction.

[Table 2 image]
Table 2 Comparison of risk interpretation using conditional probabilities versus natural frequencies, for the ability of a functional positional test with high specificity to detect the presence of vertebrobasilar insufficiency (VBI)

((1) based on data presented in Hutting et al 2013 (only probability estimates of positive test results have been included for clarity); (2) based on a prevalence of VBI in a normal population of 1:100,000 (Boyle et al 2009) and a median VBI test specificity of 84% (from Hutting’s range of 67%–100%), indicating a false positive rate of 16%.)

The high conditional probabilities suggest that the VBI test could be useful in detecting the presence of disease. The NF calculations show that, in a more intuitively natural context, the chance of having the disease following a positive test is in fact still extremely low (0.006%). This is a common fallacy associated with interpreting probability statements (stemming from Bar-Hillel 1980). It is important to note that both methods are mathematically valid; it is only the perception of risk which has changed.
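Here is a rough reconstruction of the natural-frequency logic behind Table 2, using the figures quoted above. Sensitivity is not quoted in the text, so a sensitivity of 100% is assumed here purely for simplicity:

```python
# Natural-frequency sketch of the VBI example (my reconstruction, not the
# chapter's code): prevalence 1:100,000, specificity 84%.

population = 100_000
prevalence = 1 / 100_000
specificity = 0.84           # so the false positive rate is 16%
sensitivity = 1.0            # assumption: every true case tests positive

with_vbi = population * prevalence                 # 1 person in 100,000
without_vbi = population - with_vbi                # 99,999 people

true_positives = with_vbi * sensitivity            # ~1
false_positives = without_vbi * (1 - specificity)  # ~16,000

# Chance of actually having VBI given a positive test
ppv = true_positives / (true_positives + false_positives)
print(f"Positive tests: {true_positives + false_positives:,.0f} in {population:,}")
print(f"Chance of VBI given a positive test: {ppv:.3%}")   # ~0.006%
```

Counting people rather than manipulating conditional probabilities makes the base-rate problem visible: roughly 16,000 positive tests, of whom about one actually has VBI.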

 

Communication tools

Stacey et al (2011) found that the use of decision aids can improve patients’ knowledge and perception of risk, and improve shared decision making. Such aids include visual representations of risk, and these have many desirable properties, e.g. they reveal otherwise undetected data patterns; attract attention; and evoke specific mathematical operations (Lipkus and Hollands 1999). Specific types of aid are useful for specific types of risk, e.g. bar charts for group comparisons; line graphs for temporal interactions among risk factors; pie charts for showing risk proportions (Lipkus 2007). Icon arrays are also used to display population proportions, and rare events can be demonstrated in magnified or circular images. Figures 1 and 2 show examples of graphical images used for communicating common and rare events.


 

Figure 1 Two ways of representing the risk of minor adverse events following manipulation. Data from Carlesso et al (2010): pooled relative risk (RR) from meta-analysis, RR = 1.96, or 194 events per 1000 with manipulation versus 99 per 1000 with no manipulation (control). A) Icon array pictorially representing absolute risk; B) bar graph demonstrating the difference between the two groups.
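Visuals like these are straightforward to sketch. Below is a rough, unofficial matplotlib version of the two formats in Figure 1; the event rates are those quoted from Carlesso et al (2010), but all layout choices are mine:

```python
# A rough sketch of Figure 1's two formats (illustrative, not the original).
import matplotlib.pyplot as plt
import numpy as np

events_per_1000 = {"Manipulation": 194, "Control": 99}

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# A) Icon array: 1000 dots per group, events coloured red
for offset, (label, n_events) in enumerate(events_per_1000.items()):
    idx = np.arange(1000)
    x = idx % 25 + offset * 30          # 25 columns per group, side by side
    y = idx // 25                       # 40 rows
    colours = np.where(idx < n_events, "red", "lightgrey")
    axes[0].scatter(x, y, c=colours, s=4)
    axes[0].text(offset * 30 + 12, -4, label, ha="center", va="top")
axes[0].set_title("A) Icon array (events per 1000)")
axes[0].axis("off")

# B) Bar graph of the same absolute risks
labels = list(events_per_1000)
axes[1].bar(labels, [v / 10 for v in events_per_1000.values()], color="steelblue")
axes[1].set_ylabel("Risk of minor adverse event (%)")
axes[1].set_title("B) Bar graph")

plt.tight_layout()
plt.show()
```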

 


Figure 2 Representing rare risk events.
A) A circle diagram representing the absolute risk of a serious adverse event following manipulation. The blue circle represents 100,000 units, and the red dots represent the number of cases per 100,000.
B) From prevalence data on vertebrobasilar insufficiency (VBI) (Boyle et al 2009) and the diagnostic utility of a VBI test (Hutting et al 2013), this graph shows a population of 100,000 (the large blue circle), the proportion who test positive on a VBI test (16,000: the yellow circle), and the proportion of people who will actually have VBI (1: the red dot)

 

Framing risk

The way risk is framed is considered important for effective communication (Edwards et al 2001). Framing presents logically equivalent information in different ways. Generally, risks can be framed positively (gain-framed) or negatively (loss-framed). We might gain-frame the risk of stroke following manual therapy as “you are very unlikely to experience stroke following this intervention”, or loss-frame it as “this treatment could cause you to have a stroke”. Gain-framing can be more effective if the aim is preventative behaviour with an outcome of some certainty (Fagerlin and Peters 2011), e.g. “exercising more will reduce cardiovascular risk” would be more effective than “if you don’t exercise, you will have an increased risk of cardiovascular disease”. However, loss-framing is generally more effective, especially when concerned with uncertain risks (Edwards et al 2001).

Personalising risk

Edwards and Elwyn (2000) reported that risk estimates based on personal risk factors were the most effective in improving patient outcomes. A subsequent Cochrane review reported that, compared to generalised numerical risk communication, personalised risk communication improved knowledge, perception, and uptake of risk-reducing interventions (Edwards et al 2006). Personalised risk may include attempts to identify a smaller sub-group akin to the individual patient, and/or consideration of the individual’s own risk factors for an event. This dimension of risk communication contextualises population data estimates within a single patient’s risk factors, together with their values and world-view. Box 1 highlights the operationalisation of personalising risk.

Box 1: Key messages in communicating risk

 

Thank you for reading, and stay risky!

References

Ahmed H, Naik G, Willoughby H, Edwards AGK 2012 Communicating risk. British Medical Journal 344:e3996

Akl EA, Oxman AD, Herrin J, et al 2011 Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database Systematic Reviews 3:CD006776.

Bar-Hillel M 1980 The base-rate fallacy in probability judgments. Acta Psychologica 44(3):211-233

Boyle E, Côte P, Grier AR 2009 Examining vertebrobasilar artery stroke in two Canadian provinces. Journal of Manipulative and Physiological Therapeutics 32:S194-200.

Cahan A, Gilon D, Manor O 2003 Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM 96:763–9.

Carlesso LC, Gross AR, Santaguida PL, Burnie S, Voth S, Sadi J. 2010 Adverse events associated with the use of cervical manipulation and mobilization  for the treatment of neck pain in adults: a systematic review. Manual Therapy 15(5):434-44. doi: 10.1016/j.math.2010.02.006.

Cosmides L, Tooby J 1996 Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgement under uncertainty. Cognition. 58(1):1-73

Eddy DM 1982 Probabilistic reasoning in clinical medicine: problems and opportunities. In: Kahneman D, Slovic P, Tversky A (eds) Judgement under uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge UK, p249–67

Edwards A, Elwyn G, Covey J et al 2001 Presenting risk information – a review of the effects of ‘framing’ and other manipulations on patient outcomes. Journal of Health Communication 6(1):61-82

Edwards AG, Evans R, Dundon J 2006  Personalised risk communication for informed decision making about taking screening tests. Cochrane Database of Systematic Reviews 4:CD001865.

Fagerlin A, Zikmund-Fisher BJ, Ubel PA 2011 Helping patients decide: ten steps to better risk communication. Journal of the  National Cancer Institute 103:1436-43.

Fagerlin A, Peters E 2011 Quantitative Information. In: Fischhoff B, BrewerNT, Downs JS (eds) Communicating risks and benefits: an evidence-based user’s guide. Silver Spring, MD: US Department of Health and Human Services, Food and Drug Administration p 53–64.

Ghosh AK, Ghosh K, Erwin PJ 2004 Do medical students and physicians understand probability? QJM 97:53-55

Gigerenzer G, Hoffrage U 1995 How to improve Bayesian reasoning without instruction: frequency formats. Psychological Review 102:684-704

Gigerenzer G, Gaissmaier W, Kurz-Milcke E 2007 Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8:53-96.

Gigerenzer G, Wegworth O, Feufel M 2010 Misleading communication of risk: Editors should enforce transparent reporting in abstracts. British Medical Journal 341:791-792

Gordis L 2009 Epidemiology. Saunders, Philadelphia, p 102

Hutting N, Verhagen AP, Vijverman V et al 2013 Diagnostic accuracy of premanipulative vertebrobasilar insufficiency tests: a systematic review. Manual Therapy 18(3):177-182

Lipkus IM, Hollands JG 1999 The visual communication of risk. Journal of the National Cancer Institute Monographs 25:149-163

Mason D, Prevost AT, Sutton S 2008 Perceptions of absolute versus relative differences between personal and comparison health risk. Health Psychology 7(1):87-92.

Politi MC, Han PK, Col NF. 2007 Communicating the uncertainty of harms and benefits of medical interventions. Medical Decision Making 27:681-95

Sainani KL 2012 Communicating risks clearly: absolute risk and numbers needed to treat.  American Academy of Physical Medicine and Rehabilitation. 4:220-222

Spiegelhalter DJ 2008 Understanding uncertainty. Annals of Family Medicine 6(3):196-197

Stacey D, Bennett CL, Barry MJ et al 2011 Decision aids for people facing health treatment or screening decisions. Cochrane Database of Systematic Reviews 1:CD001431

Thiel HW, Bolton JE, Docherty S et al 2007  Safety of chiropractic manipulation of the cervical spine: a prospective national survey. Spine 32(21):2375-2378

Buy Grieve’s Modern Musculoskeletal Therapy here.


Communicating Risk: Part 1

Here is an excerpt from my chapter on communicating risk in Grieve’s Modern Musculoskeletal Therapy 2015 (Vol 4)

 

Risk is the probability that an event will give rise to harm (Edwards et al 2001). As healthcare professionals, communicating risk is central to patient, peer, and public interactions. Manual therapy doesn’t carry the severity of risk that some other professions, e.g. medicine, do – we rarely consider death as a risk, although there are situations where this might be the case. Less severe risks might, for example, be transient unwanted responses to treatment. Nevertheless, we have a responsibility to consider and communicate risk as best we can in order to make the best clinical decisions. This section summarises evidence and thought on the best ways to communicate risk to optimise shared decision-making.

Although communicating risk might seem straightforward, the evidence reveals complexity, contradiction, and ambiguity. Further, we should accept that human beings, and particularly health care professionals, are not good at understanding risk, let alone communicating it (Gigerenzer 2002). Risk communication has become increasingly important with the growth of published data and evidence-based practice. In contrast to traditional ‘gut feelings’ about risk, it is becoming possible to make data-informed judgements. Despite this numerical dimension, there is still uncertainty in understanding and communicating risk. Paradoxically, communicating uncertain risk judgements using numerical ranges can worsen understanding, credibility, and perceptions of risk (Longman et al 2012). This section now focuses on understanding risk; communication tools; and framing risk.

 

Understanding risk

Healthcare professionals are poor at understanding numbers (Ahmed et al 2012, Gigerenzer 2002). Gigerenzer et al (2007) reported that only 25% of subjects correctly identified 1 in 1000 as being the same as 0.1%, coining the phrase ‘collective statistical illiteracy’ in relation to users of health statistics. Education and numeracy levels have little impact on risk judgement or understanding (Lipkus et al 2001, Gigerenzer and Galesic 2012). Consensus on the best ways for health professionals to communicate risk is lacking (Ghosh and Ghosh 2005). These facts create barriers to communication, and can lead to aberrant use of research-generated data (Moyer 2012). Numerical interpretations of probability are necessary, but not sufficient, for clinicians’ understanding of risk. Risk communication should include the numerical probability of an unwanted event happening, together with the effect of this on a patient; the importance of the effect; and the context in which the risk might occur (Edwards 2009).

“every representation of risk carries its own connotations and biases that may vary according to the individual’s perspective concerning the way the world works” (Spiegelhalter 2008)


Understanding probabilities

What does 5% mean? Is this the same as 0.05? Does 5 out of 100 mean the same thing as 50 out of 1000? Do the odds of 1:19-for say the same thing as 19:1-against? These are all mathematically valid expressions of the same data relating to probability judgment, but they can and do mean different things. But what actually is a 5% risk? If I said you had a 5% chance of increased pain following intervention X, how would you interpret that? Does this mean you might be one of the 5 out of 100 people who’ll experience pain? Or that in every 100 patients I treat, 5 experience pain? Does it mean that if you had 100 treatments, you’d experience pain 5 times? Does it mean that 5% of the time, people experience pain? Or that 5 out of every 100 manual therapists induce pain in all their patients? Is this 5% epistemological – i.e. it is already decided that you’ll have pain, but you just don’t know it yet, to the degree of 5%; or is it aleatory – i.e. a completely random notion to the degree of 5% that you will or won’t experience pain? Such variables should be considered when communicating risk.
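For what it’s worth, the equivalences (and the odds conversion) can be checked mechanically – a toy illustration of my own, not from the chapter:

```python
# The same 5% risk written in the equivalent formats asked about above.
p = 0.05

print(f"Percentage:   {p:.0%}")                  # 5%
print(f"Proportion:   {p}")                      # 0.05
print(f"Out of 100:   {p * 100:.0f} in 100")     # 5 in 100
print(f"Out of 1000:  {p * 1000:.0f} in 1000")   # 50 in 1000
print(f"Odds for:     1:{(1 - p) / p:.0f}")      # 1:19 for
print(f"Odds against: {(1 - p) / p:.0f}:1")      # 19:1 against
```

The arithmetic is trivial; the point is that each format, though numerically identical, invites a different interpretation.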

The first stage in effective communication is establishing the reference class to which the probability relates, e.g. time; location; person. In using population data for risk communication, the reference class will most often be historical, i.e. data from past events are used to inform the chance of the next event. Embedding a new individual event in data from a past population should carry some additional judgement, as new informative knowledge may be ignored. Spiegelhalter’s report of pre-Obama odds on a black US president is a good example: 43/43 past US Presidents were white, indicating a statistical prediction of near certainty of a 44th white President (Spiegelhalter 2008).
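One hedged way to formalise that “additional judgement” is Laplace’s rule of succession, which refuses to assign certainty to an event just because its alternative has never been observed. The rule-of-succession framing here is my addition, not Spiegelhalter’s:

```python
# Spiegelhalter's presidential example: naive historical frequency versus
# Laplace's rule of succession (add one imaginary success and one failure).
white, total = 43, 43

naive = white / total                  # 1.0: statistical 'certainty'
laplace = (white + 1) / (total + 2)    # 44/45, leaving room for the unseen case

print(f"Naive estimate: {naive:.0%} chance the 44th President is white")
print(f"Rule of succession: {laplace:.1%}")
```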

 

Part 2 will look at relative v absolute risk; probabilities v natural frequencies; and the framing of risk.

References

Ahmed H, Naik G, Willoughby H, Edwards AGK 2012 Communicating risk. British Medical Journal 344:e3996

Edwards A, Elwyn G, Covey J et al 2001 Presenting risk information – a review of the effects of ‘framing’ and other manipulations on patient outcomes. Journal of Health Communication 6(1):61-82

Ghosh AK, Ghosh K 2005 Translating evidence based information into effective risk communication: current challenges and opportunities. Journal of Laboratory and Clinical Medicine 145(4):171–180.

Gigerenzer G 2002 How innumeracy can be exploited. In: Reckoning with risk – learning to live with uncertainty. 1st ed. Penguin Press, p 201-210

Gigerenzer G, Gaissmaier W, Kurz-Milcke E 2007 Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8:53-96.

Gigerenzer G, Galesic M 2012 Why do single event probabilities confuse patients? British Medical Journal 344:e245

Lipkus IM, Samsa G, Rimmer BK 2001 General performance on a numeracy scale among highly educated samples. Medical Decision Making 21:37-44

Longman T, Turner RM, King M et al 2012 The effects of communicating uncertainty in quantitative health risk estimates. Patient Education and Counselling 89:252-259

Moyer VA 2012 What we don’t know can hurt our patients: physician innumeracy and overuse of screening tests. Annals of Internal Medicine 156:392-393.

Spiegelhalter DJ 2008 Understanding uncertainty. Annals of Family Medicine 6(3):196-197

Buy Grieve’s Modern Musculoskeletal Therapy here.


“This house believes that in the absence of research evidence an intervention should not be used”

This was the motion of a debate which took place at the end of the recent PhysioUK2015 conference in Liverpool. There was a lot of hype about this, and then it happened. I thought it worked well; these things are always a bit of a gamble. So a huge congratulations to Ralph Hammond, Steve Tolan, and Carley King for constructing this session. Here’s a bit about how it worked:

The debate panel

There were two speakers ‘for’ the motion and two ‘against’.

‘For the motion’ were:

Professor Sallie Lamb, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford

Professor Rob de Bie, Maastricht University, the Netherlands.

‘Against the motion’ were:

Professor Michael Loughlin, Professor of Applied Philosophy, MMU

Me.

The debate was successfully chaired by the lovely ex-Chair of the CSP, Helena Johnson. I am glad I was a speaker, and not the chairperson.

Each speaker spoke for about 7 minutes, and then Helena chaired a question and answer session, taking questions from the audience, and via Twitter. This went on for some time. At the end, Helena called for a show of hands ‘for’ and ‘against’. The ‘againsts’ ‘won’.

And then the fun started.

I suspected that if the ‘againsts’ won, there would be a misinterpretation of what was being debated. Sure enough, social media and face-to-face feedback confirmed this. There seemed to be a feeling of “great, the vote went against the motion, and so that means research doesn’t matter”. Let me try and put the record straight. First, re-read the motion. This is not about whether research matters or not, it is about a detailed point of evidence-based practice – could there be situations where it is acceptable within an evidence-based practice framework to provide a therapeutic intervention which isn’t supported by research evidence? Note that this is also not about using interventions which have research evidence demonstrating a lack of effectiveness. It’s just about absence of evidence.

So, the point is one of detail, and one which promotes polarisation, such is the aim of a well-considered debate. The point, as per a couple of Facebook comments before the event, could be considered moot.  But I don’t think so. I think the motion forces us to think a little harder about the relation between research evidence and clinical practice, and I think a few important themes emerged from the debate.  I’ll now summarise each speaker’s argument, and highlight what I think are some important issues for physiotherapy and how it conducts itself.

FOR 1 Sallie Lamb: RCTs are the gold standard; we do not know what harms and what heals without RCT evidence; therefore all interventions should have at least RCT-level evidence before their integration into clinical practice. To be taken seriously, our profession needs to get its head out of the sand and start to justify the use of therapeutic interventions with RCT-level evidence.

AGAINST 1 Me: Some interventions have such large effect sizes and/or are impossible/unethical to trial that they may be used in the absence of research evidence. I used an example of a therapist intervening to stop a patient falling down stairs. This is known as the paradox of effectiveness, and other examples include the Heimlich manoeuvre, anaesthetics, and parachutes. I also argued that unlike medicine, where new drugs are developed in pharma labs, physiotherapy interventions are developed on the clinical floor. If this wasn’t allowed to happen, then we would soon run dry of interventions to trial and, given that effect sizes eventually converge to zero, the end of our profession would be nigh (for dramatic effect).

FOR 2 Rob de Bie: Reinforced Sallie’s argument, using more examples of where human observation errors had led us to believe something to be the case, until eventually RCTs showed something different. Take home: again, RCTs are the best way of understanding the level of effectiveness of an intervention.

AGAINST 2 Michael Loughlin: Set out a broad social and professional picture of the history of evidence-based medicine (EBM) and highlighted historical and contemporary challenges. He pointed towards poor and ill-presented arguments used by strict proponents of EBM, who have relied on platitude, caricature, and ridicule in enforcing research evidence within clinical practice. This has masked the limitations and failures of EBM, which are currently being highlighted by a renaissance movement. Further, he advocated a broadening of the concept of ‘evidence’ to include much more than the outcomes of clinical research. To be truly professional, physiotherapy should question what it is buying into, and as it stands, it doesn’t.

Note that none of the ‘against’ arguments were against the idea that practice should be based on the best evidence. The question is “what is the best available evidence to inform therapeutic decisions?” I was asking this in terms of examples of clinical practice; Michael was asking this in terms of what it means to buy wholesale into a movement which has so far failed to provide satisfactory explanations of its own commitments to evidential sources. I can’t expand on the arguments of anyone but my own from now on, for fear of misrepresenting them, so I’ll just try and explain myself. Note: ‘evidence’ should be thought of as ‘the available body of facts or information indicating whether a belief or proposition is true or valid’. You could spend a few years analysing the vast literature about the definition of evidence, but you will come back to this OED version.

Paradox of effectiveness

I used this example of a therapeutic intervention.

Therapist preventing patient from falling – a paradox of effectiveness

Extreme/silly/obvious/obtuse/whatever – I presented it in this way to highlight an important point about what is the best available evidence for a therapeutic decision. Note that this argument has often been used for the case against evidence-based practice. That presents the paradox as a straw man, and this IS NOT what I was doing – in fact the opposite. Kenny Venere has provided a great overview of the traditional use of the PoE argument – do read this. My example set out merely to illustrate that in this case it seems intuitive to say the therapist did the right thing, despite there being no research evidence to suggest that it was the right thing.

there ain’t no research evidence for this

So then, on what grounds was it the right thing? The intuition arises because the suspected effect size is so large – it seems like the therapist has prevented the patient from falling down stairs and risking possible significant injury. But we don’t know that in any systematic way, as would be desired by a commitment to research. Of course, we may have seen people fall down stairs in the past, but as EBM is founded on the premise that human observations are biased, we cannot trust that this is actually the case. If we did, the whole business of EBM would collapse.

Facetious alert: yes, I am being facetious here, but I’m playing the EBM game. The ‘human bias’ argument is all too often presented to support the integration of systematic research data into clinical decision making. Not only does this play its part in some form of fallacious kettle logic, along with slogans such as “well what’s the alternative”, it is still unclear as to the extent of human biases in real-life reasoning. Most of the data to support the ‘human bias’ argument comes from tests in experimental situations, which have themselves been shown to artificially increase the degree of biases. When white noise is reduced, and content knowledge is increased, human bias error is often insignificant. Reducing noise and increasing knowledge is what physiotherapy education teaches us to do. Anyhoo, back to the story…

So, to say something like the therapist’s actions are based on existing observational evidence which compared what did happen to a situation where it didn’t happen is a non-starter. So if the evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) does not or could not come from an external source, then it must come from an internal source, internal to that therapeutic alliance. This then, is not research evidence.

This internal evidence is an emergent feature of the physical, psychological, and social interactions of two human beings. It is a complex and non-linear process.  That is to say that the therapist is not consciously placing a series of discrete events in a temporal order (even though a posteriori analysis could reduce it to such, i.e.  a patient wobbles; b therapist puts out hands; c patient is safe). Rather she is behaving as a human who cares for another human and seeks to act in a way which is beneficial to his health. This is complex, context-sensitive, and holistic – all the things that research tries to control for, ignore, or be the opposite of. The source of the clinical evidence is held fully within the space between the two parties, being fed into by the behaviours, thoughts, and experiences of both parties. The actual clinical evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) emerges from this space. It is informed by what each party has experienced before, and could be explained by appealing to such things as laws of nature, professionalised knowledge etc, e.g. the therapist has experienced falling objects which adhere to laws of motion and the idea of gravity. The patient may have experienced falling before and his memory of this prompts him to look fearful, or make sounds and actions which indicate that he is frightened. Between them, an action is developed which seems right. There is plenty of evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform the therapeutic decision. Note that this situation is quite different from a therapist using, say, energy from crystals, to prevent the fall. Biopsychosocially implausible interventions aren’t even in the starting gates.

Now, Rob de Bie called this example “a haphazard reaction to an unusual and emergency situation, and this is not what physiotherapy interventions are”. And I agree, sort of. I was (again) being facetious and obtuse, but being so in order to highlight that there are situations where interventions can be justified by evidence which is not research evidence, and that to accept the motion would mean accepting that in this case – the therapist should not have saved the patient. However, in being obtuse I also wanted to raise the broader point of “well what is the best evidence to inform a clinical decision?”

So what is the best evidence?

Imagine now if there were in fact some excellent systematic reviews/meta-analyses of high quality RCTs which supported the use of therapists putting their hands up to save patients from falling down stairs. What would now be the best evidence to inform this clinical decision? Would it be the statistical average from the meta-analysis of multiple high quality RCTs, or would it be that a human being was falling towards you? You are, of course, quite entitled to err on the side of the research. However, if you say that the research evidence is the best evidence to inform this clinical decision, then you are committing to the assumption that, in this case, facts and information from a distant population – which is not this patient – are a better indication of the best action than the clinical evidence emerging from the individual situation.

And that’s fine, but now you have to answer me this: on what grounds can you satisfactorily explain to me the assumption that population data are more informative to an individual clinical situation than the emergent clinical evidence of that situation? And you have to do this without using platitudes, caricatures, or ridicule. Good luck.

How much evidence?

Now here’s a second puzzle. Taking the ground rules of EBM literally (and that is all we can do, otherwise what should they be taken as?), the evidence for therapeutic decisions should come from systematic reviews of multiple RCTs. Single or non-reviewed RCTs won’t cut it, due to the chance of erroneous findings. Now we need to understand the phrase best available evidence from another dimension. By its own rules, EBM would say that normatively the best available evidence for a therapeutic decision is systematic reviews of multiple high quality RCTs. This is not often available, however. So we use less stringent evidence, perhaps a couple of RCTs which have not been systematically reviewed. However, because of the rationale for systematic reviewing, this cannot be evidence of therapeutic effectiveness. The discriminating factor here is the way that population studies establish the notion of causation: anything below the said level is not indicative of a causal association. So, when we say something like ‘ok, it’s not the evidence we would hope for, but it’s something at least’, we are not using evidence of causation at all; we are using sources of evidence which are vacuous and as such cannot inform us of a possible predictive link between doing the intervention and achieving an outcome. In other words, the function of the research evidence becomes purely rhetorical and nothing at all to do with clinical effectiveness. So, in the vast majority of situations, I ask again: what is the best available evidence (‘the available body of facts or information indicating whether a belief or proposition is true or valid’) to inform that therapeutic decision? Would you rather use research evidence which is vacuous and simply rhetoric, or clinical evidence emerging from that therapeutic alliance?

This raises a professional issue: if we want physiotherapy to be ‘evidence-based’, what are we counting as evidence? If it is anything below the highest levels, then we are not actually talking about clinical effectiveness. But it might look good to an outsider – at least it’s something. To me this is at best ignorant, and at worst purposefully deceitful.

I won’t go on here about the further problems associated with justifying therapeutic decisions on evidence which in fact does fulfil the criteria for causation, i.e. the best systematic reviews and such. I’ll leave that to others for now, for example, and, and, and, and, and.

Look, remember this is not about whether research matters or not – it does. It’s now a case of identifying where different evidential sources fit into therapeutic decision making. RCTs and beyond are relevant, but their constraints must be considered. The statistical analysis necessary to ensure high internal validity makes it essential to appreciate that optimal warrant is given only to the primary hypothesis, and is applicable only to the sample population in the trial. We can still learn something from these data though.

And now where..?

OK, I fear I may have not helped myself in trying to appease the backlash of “research doesn’t matter”. Once again, IT DOES. All I have done is highlight some possible, and some real, challenges with EBP which only after 20 years or so are we beginning to see. Research does matter. Evidence does matter. However, the questions from the evidence-based practitioner should no longer be the ones from the 20th century, e.g. “which interventions are supported by RCTs”. The modern EBPer should ask 21st century questions about evidence such as “which evidence is most likely going to inform the multitude of decisions within this therapeutic interaction?”

The motion “This house believes that in the absence of research evidence an intervention should not be used” was an excellent prompt to revisit some fundamental questions about the relationship between research and clinical practice. We must be clear that interventions should, whatever, be based on evidence, and that is uncontroversial. The rejection of this motion IS NOT a green light for ‘clinical freedom’, basing predictions on past experience alone, heresy, clinical whims, forcing tradition, or maintaining habits.

The challenge we still have is to answer what – learning from the past 23 years – constitutes the best available evidence to inform therapeutic decisions. In the 1990s we did ask this, but without sufficient critical analysis and without the great benefit of two decades of trying to implement data from existing methods in clinical practice.

We also need to stand up as a profession and be genuine, honest, and robust. We should not fall into the trap of deceit and rhetoric by claiming to be evidence-based when we don’t even know what that means.

The big question which we haven’t yet asked as a responsible profession is – to quote Michael Loughlin – “what precisely is it that we are buying into?”

Why don’t we lead the way in taking the best of what we know from scientific inquiry so far, and develop ways of generating evidence which actually serve to inform therapeutic decisions?


Evidence-Based Physiotherapy: A Crisis in Movement

Being at the tail-end of a PhD in Evidence-Based Medicine, I recently re-read Trisha Greenhalgh et al’s BMJ paper Evidence based medicine: a movement in crisis (see what I did there?) and now provide a plea for Physiotherapists / Physical Therapists the world over.

We are part of a wonderful profession, and also part of a fast changing world. It seems a good time now to reflect and act upon the past 20 years of growing evidence and information. These are some random reflections on Greenhalgh et al’s paper with physiotherapy in mind.

Physiotherapists, please read and understand published data, but realise that these data are only meaningful when positioned within the narratives and socio-cultural contexts of our patients and our own experiences. Allow data – if sufficient – to free yourself from traditions and habits. Don’t be swayed by preposterous gadgetry and pretty colours, but always look towards the data to drive positive ways of developing your practice. Stop handing out leaflets.

Human observations are prone to biases of perception and memory. Robust studies are designed to reduce such biases. Human observational biases can be easily controlled for by intellect. Most trials fail to control for biases sufficiently. Human observational biases are still evident though, for example the perception biases seen when interpreting the results of a trial. Treat real-life experience and outputs from studies as equally valid sources of evidence, which can both be highly fallible.

Stop inventing complex and unnecessary classification and diagnostic systems. They are not needed. Sub-classification is important, so pay attention to high quality studies which allow us to learn which interventions suit which patients. But the best systems are the simplest. Be aware though that the best sub-classification systems will also eventually sub-class down to N=1. At this point, population study derived data start to lose relevance. Evidence for patient management for the N=1 (i.e. your patient) needs to come from the source (i.e. your patient).

Touch people who need touching, this is therapy. Don’t touch people who don’t need touching, this is battery. Talk with people who need talking with, this is therapy. Judge when this becomes meaningless. Educate your patient by all means, but also let them educate you.

Pain science is undoubtedly important in evidence-based pain management. Pain scientists have reminded us that we have brains. That’s good. Heed pain science data, but stop fawning over pain scientists. They are not Gods. We no longer need ‘institutes’ and ‘organisations’ of pain science. If the data are good enough, they will speak for themselves. Don’t fall into the trap of moving from ‘clinical guru worship’ to ‘research guru worship’. There are no gurus. Don’t be drawn in by general theories of the world, e.g. pain, which are underpinned by fragile evidence, but do understand the potential ways forward such evidence might point. If you are a disciple of such trends, stop posting random quotes from random ‘pain’ therapists as if this were some sort of confirmatory proof of theory. It’s not. The easiest thing is to stop being a disciple, and start to think for yourself. A bit like a professional would. Ignoring biological aspects of our patients’ complaints is evidence-based silliness. Calls to abandon the biomedical model are evidence-based moronicy. And downright dangerous. Psycho-social dimensions are of critical importance to our reasoning and management. So is differentiating non-specific back pain from aortic aneurysm.

As a science, let us learn from other sciences. Experimental physics provides excellent data describing the Universe, but is reserved in making inferences about future events. Theoretical physics uses these data to better understand the world and to consider ways to move forward. Where are our theorists?

As a human-centred profession, let us learn from the humanities. The idea of causation on which all physiotherapy research is based is 266 years old and philosophically and sociologically unsound. Why don't we look at developing research methodologies based on enriched notions of causation? Throwing data onto a stressed-out workforce won't make that workforce do evidence-based practice. It will just stress it further. Let's look at ways in which change can occur in complex social structures.

If you must adhere to clinical guidelines, then by all means do so, but bear in mind that guidelines are more often than not administrative and political tools, with any clinical component aggregated out to a meaningless level.

Physical activity and exercises are, surprise surprise, looking like the things that really matter in our game. Movement is everything. Most of the time it doesn't really matter too much how that occurs, as long as it does occur. It might involve touch, it might not. Movement helps people contribute to society and it keeps the world going. It also delays the onset of things like death. However, Government health and wellbeing agendas are weak and meaningless. Allow your patient to set their own health agenda. We all need exercise, but we don't all need to do 50 one-arm pull-ups on barbed-wire with baying wolves at our feet. Nor do we need to run through man-made pools of mud pretending we're in the army. Our job should be focussed on using the best of the data to work with our patients in search of a way to restore and rehabilitate meaningful movement, whether they have had knee pain, back pain, a stroke, respiratory disease, or cancer.

When talking with patients, don't use relative risk and probability data in your conversations. Even really clever people don't understand what these mean. Incorporate absolute risk into your reasoning, but judge when and to what extent it is useful to share with your patient.

If you are organising a conference, try and engage delegates better by having fewer, shorter presentations (say, 10 minutes) and allow more time for questions (say, 40 minutes). If you are a conference delegate, ask questions. Conferences are still a valid way of sharing data and thought, but this only works if there is a two-way communication channel. Evidence can only be made meaningful via discourse.

If you are a research funder, PLEASE STOP FUNDING RIDICULOUS RCTs. Fund the good ones, of course. But you are the only people in the whole world who can facilitate a better understanding of how people manage multiple sources of information in complex social situations. Can you please fund work on this? This is evidence-based practice.

If you are a journal editor, please facilitate the dissemination of thought and knowledge towards understanding the integration of population data into individual decision-making, rather than worrying about your impact factor.

If you are a student, listen, engage, challenge. However, do not start your first day at clinical work by saying to your senior "where's your evidence?" This is an utterly negative, unconstructive and unintellectual strategy. Rather, search for the areas of practice which could be better developed, and work with others to develop ways to address these limitations. In the meantime, learn the craft of listening and communicating with your patients. You are the profession's most precious resource. You are our future. Please be careful with the information you receive.

Greenhalgh et al.'s paper marks a pivotal point in the course of evidence-based medicine. As they highlight, there is a lot of groundwork still to do, but the emphasis should be firmly on collaboration between all stakeholders. One dimension of Sackett's original idea of EBM seems to have got lost over the last 20 years – the patient. Let Physiotherapy support the call for a campaign for real evidence.

I Don’t Get Paid Enough To Think This Hard

For well over a decade, I have been teaching healthcare professionals, mainly physiotherapists, about stuff. Although wrapped up in many guises, this “stuff” has essentially been thinking. Thinking in healthcare professions is packaged up as clinical reasoning.  I’ve always thought this to be a good thing: that we work out possible diagnostic hypotheses with our patients, use the best of our knowledge, experience and evidence to test those hypotheses, and judge from a variety of evidence sources the best treatment options. The alternative is either blindly following set guidelines, or making random decisions.

I really enjoy teaching this stuff.  I love working with students to get the best out of their brains, and see their thought processes and their clinical practice develop.  I love the literature on this stuff, and have indeed often published about it myself. I have a pop-art poster of Mark Jones in my bedroom (Fig 1).

Fig 1: My pop-art poster of Mark Jones, Clinical Reasoning guru.

I ran “Clinical Reasoning” modules at my place of work for undergraduate and postgraduates for years.  I have helped develop reasoning tools. I guess I think it’s fairly important to what we do as clinicians.

However, a few years ago whilst teaching on a course, halfway through a case study exercise, one of the delegates turned and said “I don’t get paid enough to think this hard”.  At the time, and for several years since, this struck me as astonishing – in a negative way. What? This is part of your job! This is how you can strive to get the best out of your patients; it’s demanded by your regulator; it’s a necessary condition of clinical practice; blah blah blah. But recently it struck me that he might have a point.

What is our price, and does it reflect the lengths we go to in achieving our ends? What absolute difference does investing the time, energy, and resources necessary for "advanced thinking" make to clinical outcomes? (we don't know). Could we drift through our careers following guidelines and making random decisions, and still do OK for our patients? (maybe). How does our price compare with other "thinking" professions – Law, for example? (poorly). What is the impact of all this stuff on our emotional, social, psychological, and physical status? (significant). How has doing this stuff changed in an era of evidence-based practice? (dramatically).

On the last point there, clinical reasoning may once have been a process of applying a line of logic to a patient contact episode: "they said they twisted this way, it hurts there, therefore this is the problem so I'll wiggle this about for a bit". Clinical reasoning is becoming more and more synonymous with evidence-based practice (EBP), and EBP looks very different to the above scenario. EBP is about integrating the best of the published evidence with our experiences and patient values. How do you do that!? Well, this is the stuff that I try to teach, and this may have been the tipping point for our friend's critical statement.

Consider the state of thinking in the modern healthcare world. First, the published evidence. There are at least 64 monthly peer-reviewed journals relevant to the average rehabilitation physiotherapist (that's excluding relevant medical journals, in which there is a growing amount of physio-relevant data). These have an average of around 30 research papers each, each paper being around 8 detailed pages. That's 15,360 pages of 'evidence' per month, or 768 per working day (assuming 20 working days a month). Some, of course, won't be relevant, but whichever way you look at it, this is an unmanageable amount of data to integrate into everyday clinical decision-making. Many of these papers are reviewed and critiqued, so the clinician should be aware of these too. Many of these critiques are themselves critiqued, and this level of thinking and analysis would also be really useful in understanding the relationship between data and clinical decision-making. EBP does have tools to help with data-driven decision-making. These require the clinician to have a continually evolving understanding of absolute and relative risk, the nuances of the idea of probability (don't even get me started on that one), a Vorderman-esque mind – or at least the latest app to do the maths for you – and time.

Arrhh, time. The average physiotherapist will, say, work an 8-hour day, seeing a patient on average every half-an-hour or so. That half-hour involves taking important information from the patient and undertaking the best physical tests (which are..?) and treatments (which are…?), then recording all of that (don't forget the HCPC are on your back, young man – a mate of a mate of someone I know got suspended last week for shoddy note-keeping. How would I pay the mortgage?). So when is that evidence read, synthesised, and applied? No worries, in-service training sessions at lunchtime will help (no lunch or toileting for me then). What about evenings and weekends – yes, lots of thinking does occur here (but what about the wife and kids?). I know there is no training budget for physiotherapists, but you can do some extra on-call or private work to pay for those courses, can't you? (Yes. When?) You get annual leave, don't you? That's another great opportunity to catch up on your thinking education (Cornwall will wait).

Thinking this hard costs. It costs time, money, energy, opportunity and health. Do we get paid enough to think this hard? Maybe our critical friend had a point. However, the pay isn’t going to change, so the thinking has to. Is this a signal that we are at a stage of development in healthcare when ‘thinking models’ need to be seriously revised in a rapidly evolving, data-driven world? Thinking was, is, and will always be central to optimal patient care, but how we do it needs to be re-analysed. Quickly. Think about it.

Argument formation for academic writing

Many students find it difficult to identify what it is that makes a good piece of academic writing. At the core of such writing is the nature and structure of the intellectual argument. Here is some information that we share with our Physiotherapy students at the University of Nottingham to help with their understanding of arguments. I hope you find it useful.

Argument formation

The idea of a basic argument is fairly simple. An argument is formed of 'premises' and a 'conclusion'. An argument is valid when, if its premises are true, its conclusion must also be true. So, to be sure your conclusion is true (which is what you want in an essay, i.e. you don't want to draw false or unstable conclusions), you need a valid structure and true premises. The classic example is:

Premise 1: All men are mortal

Premise 2: Socrates is a man

Conclusion: Socrates is mortal

Do you see that if P1 and P2 are true, then the conclusion HAS to be true?

So, if it REALLY IS true that all men are mortal, and it REALLY IS true that Socrates is indeed a man, then it HAS TO BE THE CASE that Socrates is mortal. Yes? Do you get that?

Make sure you fully understand this basic principle before reading any further!

OK, so let’s look at another example:

P1: Lucy is a physio

P2: All physios wear white tunics

Conclusion: Lucy wears a white tunic

Get it? Of course you do.

So the two examples above have the form of a good, robust deductive argument – the conclusion is deduced from the premises. We'll come onto how this looks in an essay in a moment.

Now, here are four types of poor arguments:

Type 1: false premises

This is a simple mistake. Consider the above 'physio' example. You will most likely have noticed that the two premises are full of assumptions: 1) that Lucy is in fact a physio, and 2) that all physios do in fact wear white tunics. The actual truth of the conclusion relies not only on the logical flow, but also on the accuracy of the detail within that flow. So an argument can be logically correct – i.e. its logical form is robust – but the factual inaccuracy of its premises may render it poor.

This is very important in essay writing, and will be addressed again below.

Type 2: The inductive fallacy (over-generalising)

P1: I have seen 1 white swan

P2: I have seen 2 white swans

P3: I have seen 3 white swans, and so on….

Conclusion: all swans are white

This is a poor argument because it could always be the case that there is a black swan which you haven't seen. The generalisation 'all swans are white' therefore remains unproven, however many white swans you count (and, as it happens, black swans do exist). So a Physio example:

P1: I have seen ultrasound work on ankle pain once

P2: I have seen ultrasound work on ankle pain twice

P3 ….n: etc etc

Conclusion: Ultrasound works for ankle pain

Type 3: another type of over-generalising – ideas and data

P1: There has been a lot of music in Nottingham lately

P2: Lots of people think that Nottingham is the music capital of Europe

Conclusion: Nottingham is the music capital of Europe

So the conclusion is not necessarily true, even though the premises might be true. Why?  Well, there are two issues:

i) Although the premises might be true, their relationship with each other, and with the conclusion, is tenuous. Compare the robustness of the relationship between components in the first Socrates example with those here. See how the concepts of 'mortality' and 'Socrates' are distributed between the premises, linked by the idea of 'man'. Notice that 'man' does not appear in the conclusion – that idea has already done its job. 'Socrates' and 'mortality' are the only ideas that re-appear in the conclusion.

In the music example, there is no such pattern. Both ideas of ‘music’ and ‘Nottingham’ appear in both P1 and P2. They are not linked by a central, meaningful idea. P1 and P2 are simply independent commentaries on a similar theme.

Also note that in the Socrates example, both P1 and P2 are necessary conditions for the conclusion, as well as being independently insufficient for it, i.e. each is needed to reach the conclusion, and neither is enough on its own. These relationships do not exist in the music example, e.g. lots of people thinking that Nottingham is the music capital of Europe is not a necessary condition for Nottingham being the music capital of Europe.

ii) There are missing data! The claim that "Nottingham is the music capital of Europe" relies on something other than what has happened in Nottingham and what people think. It relies on comparison with the music output of every other European city.

MUSIC BREAK: Da da da da da da da da daaaa

As it happens, Nottingham most likely is the music capital of Europe! For example, here’s a great band which comes from Nottingham:

https://skiffleshow.bandcamp.com/album/escape-this-wicked-life

and you can “like” their Facebook page here:

https://www.facebook.com/dhlawrenceandthevaudevilleskiffleshow

MUSIC BREAK OVER.

Type 4: alternative explanations

Premise 1: Contraceptive pills prevent unwanted pregnancy.

Premise 2: John takes the contraceptive pill and he isn’t pregnant.

Conclusion: The contraceptive pill prevented John’s unwanted pregnancy.

Here, again, both P1 and P2 may well be true, but the conclusion does not follow, because there is an obvious alternative explanation for why John does not get pregnant – he is a man.

Constructing arguments in essay form

Now, how does all this relate to your academic writing? Simple. This basic line of reasoning is what we look for in your piece of writing as a whole.

Here's an over-simplified example: let's say you set out to write an essay on the effectiveness of manual therapy for neck pain. You might structure your argument something like this:

P1:  Manual therapy for neck pain has some RCT-level evidence

P2: RCTs give good evidence of effectiveness

C: Manual therapy is effective for neck pain.

This seems fairly simple, right? But let's break it down:

The conclusion is wholly reliant on the truthfulness of the premises. In other words, if P1 or P2 were false, the conclusion would no longer be supported. Further, P1 and P2 are both necessary yet individually insufficient conditions for C. Notice that the ideas of 'manual therapy' and 'effectiveness' are linked by the idea of 'RCTs' in the premises, and that 'RCTs' do not appear again in the conclusion.

The argument has also avoided the inductive fallacy of over-generalisation: there is no obvious over-generalisation in the conclusion. By contrast, you could have said:

P1: 10 case studies show that manual therapy is good for neck pain

Conclusion: manual therapy is good for neck pain

This would have fallen into the inductive fallacy.

The premises and the conclusion are satisfactorily related (unlike in the music example), and have thus avoided the 'lack of robustness / missing data' issues. Again, you could have said:

P1: 10 case studies show that manual therapy is good for neck pain

P2: a number of authors state that manual therapy is good for neck pain

Conclusion: manual therapy is good for neck pain

This would have been a mistake, as per the music example. There are missing data, e.g. no consideration of tests of effectiveness.

So we can see how easy it is to develop a valid and robust argument to build your essay around. If you have avoided the common errors in logical form, all you need to do now is to test the truthfulness of the individual premises. This means, in the case given here, you would be discussing the relative quality of different types of manual therapy studies, and trying to show that manual therapy has some RCT-level studies, before drawing your logical conclusion. Once you have that conclusion, you can then go on to discuss its consequences / implications / context, etc.

Remember two main things:

1) Make sure you have a VALID LOGICAL STRUCTURE.

2) When you have that, the aim of your essay is to DEMONSTRATE THE TRUTH OF THE PREMISES.

If you show these two simple things, you are half-way there. The other half is how clearly and concisely you can write!

And finally, I recommend buying "A Rulebook for Arguments" by Anthony Weston. You can get it for about £4 on Amazon.

Happy arguing 🙂

Medically Unexplained Symptoms & Causation

Here is a pre-publication copy of the latest paper from the Causation in Medicine group, a sub-group of the Norwegian Research Council-funded Causation in Science project (http://www.umb.no/causci/):

MUS 2013a
