Monthly Archives: September 2016

Communicating Risk: Part 2

This is the second abstract from my chapter on communicating risk in Grieve’s Modern Musculoskeletal Therapy 2015 (Vol 4) 

Part 1 introduced the idea of, and challenges in, understanding and communicating risk. Part 2 now focuses on relative versus absolute risk, probabilities versus natural frequencies, and communication tools.


Relative versus absolute risk

Misinterpretations of absolute and relative risk contribute to data users’ anxieties and misunderstandings (Mason et al 2008). Absolute risk (AR) can be the prevalence (or incidence), or the absolute difference in risk between two groups. Relative risks (RR) – and their counterparts, odds ratios (OR) – are formed by dividing the AR in one group by the AR in the other, giving a relative difference. RRs may help to make comparative judgements, e.g. “this is riskier than that”, and this way of communicating is encouraged in evidence-based medicine. However, RRs are more persuasive and make differences in risk appear larger than they are (Gigerenzer et al 2007). They are over-reported in the lay press and in research reports where authors want to exaggerate differences (Gigerenzer et al 2010).

‘‘If the absolute risk is low, even if the relative risk is significantly increased to exposed individuals, the actual risk to exposed individuals will still be very low” (Gordis 2009)

A statistic related to absolute risk is the number needed to harm (NNH). NNH is the inverse of the absolute risk difference. Although NNH might seem to hold informative content (Sainani 2012), a recent Cochrane review concluded that it is poorly understood by patients and clinicians (Akl et al 2011). In summary, both RR (including OR) and NNH are poor means of communicating risk, and AR should be favoured (Fagerlin et al 2011, Ahmed et al 2012). Table 1 shows examples of how the same risk can be represented in these three different perspectives.

Table 1 Communicating the numerical risk of stroke following manipulation. Although precise figures are difficult to obtain, the best existing estimates of event rates are used to calculate risk. To represent a ‘worst-case’ scenario, we can use one of the highest estimates of the rate of stroke following manipulation (6:100,000, Thiel et al 2007), and a conservative assumption about risk in a non-manipulation group, say 1:100,000 (e.g. Boyle et al 2009)
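Using only the figures in the caption, a quick sketch (Python, purely for illustration) shows how the same data generate the three representations:

```python
# Figures from the Table 1 caption: a 'worst-case' estimate of stroke after
# manipulation (6 per 100,000) versus an assumed 1 per 100,000 without it.
ar_manip = 6 / 100_000       # absolute risk with manipulation
ar_control = 1 / 100_000     # assumed absolute risk without manipulation

ard = ar_manip - ar_control  # absolute risk difference: 5 per 100,000
rr = ar_manip / ar_control   # relative risk: a "6-fold" increase
nnh = 1 / ard                # number needed to harm: 20,000

print(f"ARD: {ard * 100_000:.0f} per 100,000")
print(f"RR:  {rr:.0f}")
print(f"NNH: {nnh:.0f}")
```

The 6-fold relative risk sounds alarming; the absolute difference of 5 per 100,000 (an NNH of 20,000) arguably does not, which is the point of the comparison.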


Probabilities versus natural frequencies

So far we have considered risk expressed as some sort of probability. Alternatively, natural frequencies (NF) can be a clearer way of representing risk (Akl et al 2011, Gigerenzer 2011). NFs are joint occurrences of two events, e.g. a positive result on a clinical test and the presence of a condition. In terms of risk prediction, we may be familiar with probabilistic ideas of specificity, sensitivity, positive predictive value, etc. Although commonly used (for example, these form the core of clinical prediction rules), these statistics are a consistent source of confusion and error (Eddy 1982, Cahan et al 2003, Ghosh et al 2004). Reports have suggested that the human mind might be better evolved to understand risk in terms of NFs (Gigerenzer and Hoffrage 1996; Cosmides and Tooby 1996). NFs are absolute frequencies arising from observed data. Risk representation using NFs avoids the complex statistics of probability expression, whilst maintaining the mathematical rigour and Bayesian logic necessary to calculate risk. Table 2 compares probabilistic statistics and NFs for adverse event prediction.

Table 2 Comparison of risk interpretation between conditional probabilities and natural frequencies, for the ability of a functional positional test with high specificity to detect the presence of vertebrobasilar insufficiency (VBI)

(1 Based on data presented in Hutting et al 2013 (only probability estimates of positive test results have been included for clarity). 2 Based on a prevalence of VBI in a normal population of 1:100,000 (Boyle et al 2009), and a median VBI test specificity of 84% (from Hutting’s range of 67%–100%), indicating a false-positive rate of 16%.)

The high conditional probabilities suggest that the VBI test could be useful in detecting the presence of disease. The NF calculations show that, in a more intuitively natural context, the chance of having the disease following a positive test is in fact still extremely low (0.006%). This is a common fallacy associated with the interpretation of probability statements (stemming from Bar-Hillel 1980). It is important to note that both methods are mathematically valid; it is only the perception of risk which has changed.
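The natural-frequency arithmetic behind that 0.006% figure can be sketched as follows. This is a minimal illustration using the figures cited above; the assumption that the test flags every true case (sensitivity of 1.0) is mine, made for simplicity:

```python
population = 100_000
prevalence = 1            # cases of VBI per 100,000 (Boyle et al 2009)
specificity = 0.84        # median VBI test specificity (Hutting et al 2013)

with_vbi = prevalence
without_vbi = population - with_vbi

# Simplifying assumption: the test detects the one true case (sensitivity 1.0)
true_positives = with_vbi * 1.0
false_positives = without_vbi * (1 - specificity)   # ~16,000 people

chance_if_positive = true_positives / (true_positives + false_positives)
print(f"{chance_if_positive:.3%}")   # roughly 0.006%, matching the figure above
```

Counting people rather than multiplying conditional probabilities makes the base rate impossible to ignore: one genuine case sits among roughly 16,000 positive tests.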


Communication tools

Stacey et al (2011) found that the use of decision aids can improve patients’ knowledge and perception of risk, and improve shared decision-making. Such aids include visual representations of risk, which have many desirable properties, e.g. they reveal otherwise undetected data patterns, attract attention, and evoke specific mathematical operations (Lipkus and Hollands 1999). Specific types of aid suit specific types of risk, e.g. bar charts for group comparisons; line graphs for temporal interactions among risk factors; pie charts for showing risk proportions (Lipkus 2007). Icon arrays are also used to display population proportions, and rare events can be demonstrated in magnified or circular images. Figures 1 and 2 show examples of graphical images used for communicating common and rare events.



Figure 1 Two ways of representing the risk of minor adverse events following manipulation. Data from Carlesso et al (2010): pooled relative risk (RR) from meta-analysis, RR = 1.96, or 194 events per 1000 with manipulation versus 99 per 1000 with no manipulation (control). A) icon array pictorially representing absolute risk; B) bar graph demonstrating the difference between the two groups.
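The icon-array idea is simple enough to sketch in text form. Scaling the per-1000 figures to per-100 (roughly 19 versus 10 events) is my own simplification for readability:

```python
def icon_array(events: int, total: int = 100, per_row: int = 10) -> str:
    """Render a simple text icon array: '#' = person with event, '.' = without."""
    icons = "#" * events + "." * (total - events)
    return "\n".join(icons[i:i + per_row] for i in range(0, total, per_row))

# Carlesso et al (2010) figures, scaled to per-100
print("Manipulation (about 19 in 100):")
print(icon_array(19))
print("\nControl (about 10 in 100):")
print(icon_array(10))
```

Seeing the two grids side by side conveys both the difference between groups and the absolute proportion affected, which a bare relative risk of 1.96 does not.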



Figure 2 Representing rare risk events.

A) A circle diagram representing the absolute risk of a serious adverse event following manipulation. The blue circle represents 100,000 units, and the red dots represent the number of cases per 100,000.
B) From prevalence data on vertebrobasilar insufficiency (VBI) (Boyle et al 2009) and the diagnostic utility of a VBI test (Hutting et al 2013), this graph shows a population of 100,000 (the large blue circle), the proportion who test positive on a VBI test (16,000: the yellow circle), and the proportion of people who will actually have VBI (1: the red dot)


Framing risk

The way risk is framed is considered important for effective communication (Edwards et al 2001). Framing presents logically equivalent information in different ways. Generally, risks can be framed positively (gain-framed) or negatively (loss-framed).  We might gain-frame the risk of stroke following manual therapy as “you are very unlikely to experience stroke following this intervention”, or loss-frame it as “this treatment could cause you to have a stroke”.  Gain-framing can be more effective if the aim is preventative behaviour with an outcome of some certainty (Fagerlin and Peters 2011) e.g. “exercising more will reduce cardio-vascular risk” would be more effective than “if you don’t exercise, you will have an increased risk of cardio-vascular disease”.  However, loss-framing is generally more effective, and especially so when concerned with uncertain risks (Edwards et al 2001).

Personalising risk

Edwards and Elwyn (2000) reported that risk estimates based on personal risk factors were most effective in improving patient outcomes. A subsequent Cochrane review reported that, compared to generalised numerical risk communication, personalised risk communication improved knowledge, perception, and uptake of risk-reducing interventions (Edwards et al 2006). Personalised risk may include attempts to identify a smaller sub-group akin to the individual patient, and/or consideration of the individual’s own risk factors for an event. This dimension of risk communication contextualises population data estimates within a single patient’s risk factors, together with their values and world-view. Box 1 highlights the operationalisation of personalising risk.

Box 1: Key messages in communicating risk


Thank you for reading, and stay risky!


Ahmed H, Naik G, Willoughby H, Edwards AGK 2012 Communicating risk. British Medical Journal 344:e3996

Akl EA, Oxman AD, Herrin J, et al 2011 Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database of Systematic Reviews 3:CD006776.

Bar-Hillel M 1980 The base-rate fallacy in probability judgments. Acta Psychologica 44(3):211–233

Boyle E, Côte P, Grier AR 2009 Examining vertebrobasilar artery stroke in two Canadian provinces. Journal of Manipulative and Physiological Therapeutics 32:S194-200.

Cahan A, Gilon D, Manor O 2003 Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM 96:763–9.

Carlesso LC, Gross AR, Santaguida PL, Burnie S, Voth S, Sadi J. 2010 Adverse events associated with the use of cervical manipulation and mobilization  for the treatment of neck pain in adults: a systematic review. Manual Therapy 15(5):434-44. doi: 10.1016/j.math.2010.02.006.

Cosmides L, Tooby J 1996 Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgement under uncertainty. Cognition. 58(1):1-73

Eddy DM 1982 Probabilistic reasoning in clinical medicine: problems and opportunities. In: Kahneman D, Slovic P, Tversky A (eds) Judgement under uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge UK, p 249–67

Edwards A, Elwyn G, Covey J et al 2001 Presenting risk information: a review of the effects of ‘framing’ and other manipulations on patient outcomes. Journal of Health Communication 6(1):61-82

Edwards AG, Evans R, Dundon J 2006  Personalised risk communication for informed decision making about taking screening tests. Cochrane Database of Systematic Reviews 4:CD001865.

Fagerlin A, Zikmund-Fisher BJ, Ubel PA 2011 Helping patients decide: ten steps to better risk communication. Journal of the  National Cancer Institute 103:1436-43.

Fagerlin A, Peters E 2011 Quantitative information. In: Fischhoff B, Brewer NT, Downs JS (eds) Communicating risks and benefits: an evidence-based user’s guide. US Department of Health and Human Services, Food and Drug Administration, Silver Spring MD, p 53–64

Ghosh AK, Ghosh K, Erwin PJ 2004 Do medical students and physicians understand probability? QJM 97:53-55

Gigerenzer G, Hoffrage U 1996 How to improve Bayesian reasoning without instruction: frequency formats. Psychological Review 102:684-704

Gigerenzer G, Gaissmaier W, Kurz-Milcke E 2007 Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8:53-96.

Gigerenzer G, Wegworth O, Feufel M 2010 Misleading communication of risk: Editors should enforce transparent reporting in abstracts. British Medical Journal 341:791-792

Gordis L 2009 Epidemiology. Saunders, Philadelphia, p 102

Hutting N, Verhagen AP, Vijverman V et al 2013 Diagnostic accuracy of premanipulative vertebrobasilar insufficiency tests: a systematic review. Manual Therapy 18(3):177-82

Lipkus IM, Hollands JG 1999 The visual communication of risk. Journal of the National Cancer Institute Monographs 25:149-63

Mason D, Prevost AT, Sutton S 2008 Perceptions of absolute versus relative differences between personal and comparison health risk. Health Psychology 7(1):87-92.

Politi MC, Han PK, Col NF. 2007 Communicating the uncertainty of harms and benefits of medical interventions. Medical Decision Making 27:681-95

Sainani KL 2012 Communicating risks clearly: absolute risk and numbers needed to treat.  American Academy of Physical Medicine and Rehabilitation. 4:220-222

Spiegelhalter DJ 2008 Understanding uncertainty. Annals of Family Medicine 6(3):196-197

Stacey D, Bennett CL, Barry MJ et al 2011 Decision aids for people facing health treatment or screening decisions. Cochrane Database of Systematic Reviews 1:CD001431.

Thiel HW, Bolton JE, Docherty S et al 2007  Safety of chiropractic manipulation of the cervical spine: a prospective national survey. Spine 32(21):2375-2378

Buy Grieve’s Modern Musculoskeletal Therapy here.


Communicating Risk: Part 1

Here is an abstract from my chapter on communicating risk in Grieve’s Modern Musculoskeletal Therapy 2015 (Vol 4) 


Risk is the probability that an event will give rise to harm (Edwards et al 2001). As healthcare professionals, communicating risk is central to our interactions with patients, peers, and the public. Manual therapy does not carry risks of the severity seen in some other professions, e.g. medicine: we rarely consider death as a risk, although there are situations where this might be the case. Less severe risks might, for example, be transient unwanted responses to treatment. Nevertheless, we have a responsibility to consider and communicate risk as best we can in order to make the best clinical decisions. This section summarises evidence and thought on the best ways to communicate risk to optimise shared decision-making.

Although communicating risk might seem straightforward, the evidence reveals complexity, contradiction, and ambiguity. Further, we should accept that human beings, and particularly healthcare professionals, are not good at understanding risk, let alone communicating it (Gigerenzer 2002). Risk communication has become increasingly important with the publication of data and the growth of evidence-based practice. In contrast to traditional ‘gut feelings’ about risk, it is becoming possible to make data-informed judgements. Despite this numerical dimension, there is still uncertainty in understanding and communicating risk. Paradoxically, communicating uncertain risk judgements using numerical ranges can worsen understanding, credibility, and perceptions of risk (Longman et al 2012). This section focuses on understanding risk; communication tools; and framing risk.


Understanding risk

Healthcare professionals are poor at understanding numbers (Ahmed et al 2012, Gigerenzer 2002). Gigerenzer et al (2007) reported that only 25% of subjects correctly identified 1 in 1000 as being the same as 0.1%, coining the phrase ‘collective statistical illiteracy’ in relation to users of health statistics. Education and numeracy levels have little impact on risk judgement or understanding (Lipkus et al 2001, Gigerenzer and Galesic 2012). Consensus on the best ways for health professionals to communicate risk is lacking (Ghosh and Ghosh 2005). These facts create barriers to communication, and can lead to aberrant use of research-generated data (Moyer 2012). Numerical interpretations of probability are a necessary yet insufficient condition of clinicians’ understanding of risk. Risk communication should include the numerical probability of an unwanted event happening, together with the effect of this on the patient; the importance of that effect; and the context in which the risk might occur (Edwards 2009).

“every representation of risk carries its own connotations and biases that may vary according to the individual’s perspective concerning the way the world works” (Spiegelhalter 2008)


Understanding probabilities

What does 5% mean? Is this the same as 0.05? Does 5 out of 100 mean the same thing as 50 out of 1000? Do the odds of 1:19-for say the same as 19:1-against? These are all mathematically valid expressions of the same probability, but they can and do mean different things. But what actually is a 5% risk? If I said you had a 5% chance of increased pain following intervention X, how would you interpret that? Does this mean you might be one of the 5 out of 100 people who’ll experience pain (in which case your probability would actually be 20%)? Or that in every 100 patients I treat, 5 experience pain? Does it mean that if you had 100 treatments, you’d experience pain 5 times? Does it mean that 5% of the time, people experience pain? Or that 5 out of every 100 manual therapists induce pain in all their patients? Is this 5% epistemological – i.e. it is already decided whether you’ll have pain, but you just don’t know it yet, to the degree of 5% – or is it aleatory – i.e. a completely random notion, to the degree of 5%, that you will or won’t experience pain? Such variables should be considered when communicating risk.
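The purely numerical equivalences above are easy to verify mechanically, which underlines that only the framing, not the mathematics, differs:

```python
from fractions import Fraction

p = Fraction(5, 100)                               # "5%"
assert p == Fraction(1, 20) == Fraction(50, 1000)  # 5/100 = 1/20 = 50/1000

# Odds in favour are p : (1 - p). A 5% probability is odds of 1:19 for,
# which is the same statement as 19:1 against.
odds_for = p / (1 - p)
print(odds_for)     # 1/19
print(float(p))     # 0.05
```

All four expressions reduce to the same rational number; any difference in what listeners take from them is entirely a matter of perception.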

The first stage in effective communication is establishing the reference class to which the probability relates, e.g. time, location, person. When population data are used for risk communication, the reference class will most often be historical, i.e. data from past events are used to inform the chance of the next event. Embedding a new individual event in data from a past population should carry some additional judgement, as new informative knowledge may otherwise be ignored. Spiegelhalter’s report of pre-Obama odds on a black US President is a good example: all 43 past US Presidents had been white, indicating a statistical prediction of near-certainty that the 44th President would also be white (Spiegelhalter 2008).


Part 2 will look at relative versus absolute risk; probabilities versus natural frequencies; and the framing of risk.


Ahmed H, Naik G, Willoughby H, Edwards AGK 2012 Communicating risk. British Medical Journal 344:e3996

Edwards A, Elwyn G, Covey J et al 2001 Presenting risk information: a review of the effects of ‘framing’ and other manipulations on patient outcomes. Journal of Health Communication 6(1):61-82

Ghosh AK, Ghosh K 2005 Translating evidence based information into effective risk communication: current challenges and opportunities. Journal of Laboratory and Clinical Medicine 145(4):171–180.

Gigerenzer G 2002 How innumeracy can be exploited. In: Reckoning with risk: learning to live with uncertainty. 1st ed. Penguin Press, p 201-10

Gigerenzer G, Gaissmaier W, Kurz-Milcke E 2007 Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8:53-96.

Gigerenzer G, Galesic M 2012 Why do single event probabilities confuse patients. British Medical Journal 344:e245

Lipkus IM, Samsa G, Rimmer BK 2001 General performance on a numeracy scale among highly educated samples. Medical Decision Making 21:37-44

Longman T, Turner RM, King M et al 2012 The effects of communicating uncertainty in quantitative health risk estimates. Patient Education and Counseling 89:252-259

Moyer VA 2012 What we don’t know can hurt our patients: physician innumeracy and overuse of screening tests. Annals of Internal Medicine 156:392-393.

Spiegelhalter DJ 2008 Understanding uncertainty. Annals of Family Medicine 6(3):196-197

Buy Grieve’s Modern Musculoskeletal Therapy here.
