This is the second extract from my chapter on communicating risk in Grieve’s Modern Musculoskeletal Therapy 2015 (Vol 4)
Part 1 introduced the idea of, and challenges in, understanding and communicating risk. Part 2 now focuses on relative versus absolute risk, probabilities versus natural frequencies, and communication tools.
Relative versus absolute risk
Misinterpretations of absolute and relative risk contribute to data users’ anxieties and misunderstandings (Mason et al 2008). Absolute risk (AR) can be the prevalence (or incidence), or the absolute difference in risk between two groups. Relative risks (RR) – and their counterparts, odds ratios (OR) – are formed by dividing the AR in one group by the AR in another, giving a relative difference. RRs can help in making comparative judgements, e.g. “this is riskier than that”, and this way of communicating is encouraged in evidence-based medicine. However, RRs are more persuasive and make differences in risk appear larger than they are (Gigerenzer et al 2007). They are over-reported in the lay press and in research reports where authors want to exaggerate differences (Gigerenzer et al 2010).
“If the absolute risk is low, even if the relative risk is significantly increased to exposed individuals, the actual risk to exposed individuals will still be very low” (Gordis 2009)
A statistic related to absolute risk is the number needed to harm (NNH). NNH is the inverse of the absolute risk difference. Although NNH might seem to hold informative content (Sainani 2012), a recent Cochrane review concluded that it is poorly understood by patients and clinicians (Akl et al 2011). In summary, both RR (including OR) and NNH are poor means of communicating risk, and AR should be favoured (Fagerlin et al 2011, Ahmed et al 2012). Table 1 shows examples of how the same risk can be represented in these three different ways.
Table 1 Communicating the numerical risk of stroke following manipulation: Although precise figures are difficult to obtain, the best available estimates of event rates are used to calculate risk. To represent a ‘worst-case’ scenario, we can use one of the highest estimates of the rate of stroke following manipulation (6:100,000, Thiel et al 2007), and a conservative assumption about risk in a non-manipulation group, say 1:100,000 (e.g. Boyle et al 2009)
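Using the worst-case figures above, the three numerical formats can be computed directly. A minimal sketch in Python (the rates are the estimates quoted above; variable names are illustrative):

```python
# Worked example: the same stroke risk expressed three ways.
# Rates are the estimates quoted above (per 100,000 patients).
risk_manipulation = 6 / 100_000      # absolute risk with manipulation
risk_control = 1 / 100_000           # assumed baseline risk without manipulation

# Absolute risk difference: 5 extra events per 100,000
ar_difference = risk_manipulation - risk_control

# Relative risk: sounds alarming ("6 times the risk") ...
relative_risk = risk_manipulation / risk_control

# ... but the number needed to harm shows how rare the event is:
# roughly 20,000 patients treated for one additional stroke.
nnh = 1 / ar_difference

print(f"AR difference: {ar_difference * 100_000:.0f} per 100,000")
print(f"Relative risk: {relative_risk:.1f}")
print(f"NNH: {nnh:.0f}")
```

The same data yield a frightening-sounding RR of 6 and a reassuring NNH of about 20,000, which is exactly why the format chosen matters.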
Probabilities versus natural frequencies
So far we have considered risk expressed as some sort of probability. Alternatively, natural frequencies (NF) can be a clearer way of representing risk (Akl et al 2011, Gigerenzer 2011). NFs express the joint occurrence of two events, e.g. a positive result on a clinical test and the presence of a condition. In terms of risk prediction, we may be familiar with probabilistic ideas such as specificity, sensitivity and positive predictive value. Although commonly used (for example, they form the core of clinical prediction rules), these statistics are a consistent source of confusion and error (Eddy 1982, Cahan et al 2003, Ghosh et al 2004). Reports have suggested that the human mind may be better evolved to understand risk in terms of NFs (Gigerenzer and Hoffrage 1996; Cosmides and Tooby 1996). NFs are absolute frequencies arising from observed data. Risk representation using NFs avoids the complex statistics of probability expression, whilst maintaining the mathematical rigour and Bayesian logic necessary to calculate risk. Table 2 compares probabilistic statistics and NFs for adverse event prediction.
Table 2 Comparison of risk interpretation using conditional probabilities versus natural frequencies, for the ability of a functional positional test with high specificity to detect the presence of vertebrobasilar insufficiency (VBI)
(1 Based on data presented in Hutting et al 2013; only probability estimates of positive test results have been included for clarity. 2 Based on a prevalence of VBI in a normal population of 1:100,000 (Boyle et al 2009) and a median VBI test specificity of 84% (from Hutting’s range of 67%–100%), indicating a false positive rate of 16%.)
The high conditional probabilities suggest that the VBI test could be useful in detecting the presence of disease. The NF calculations, however, show that in a more intuitively natural context, the chance of having the disease following a positive test is still extremely low (0.006%). This is a common fallacy associated with the interpretation of probability statements (stemming from Bar-Hillel 1980). It is important to note that both methods are mathematically valid; it is only the perception of risk that has changed.
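The natural-frequency arithmetic behind these figures can be sketched in a few lines. This is a hypothetical illustration assuming the prevalence and specificity quoted above, plus a generous 100% sensitivity (an assumption, not a figure from the source):

```python
# Natural-frequency walk-through of the VBI test example.
# Assumptions: population of 100,000; prevalence 1 per 100,000;
# specificity 84% (false positive rate 16%); sensitivity assumed 100%.
population = 100_000
with_vbi = 1                          # prevalence: 1 in 100,000
without_vbi = population - with_vbi   # 99,999 people

sensitivity = 1.00                    # assumed: every true case tests positive
false_positive_rate = 0.16            # 1 - specificity of 0.84

true_positives = with_vbi * sensitivity              # 1 person
false_positives = without_vbi * false_positive_rate  # ~16,000 people

# Of everyone who tests positive, how many actually have VBI?
ppv = true_positives / (true_positives + false_positives)
print(f"People testing positive: {true_positives + false_positives:.0f}")
print(f"Chance of VBI given a positive test: {ppv:.4%}")
```

Counting people rather than multiplying conditional probabilities makes the base-rate problem visible: one true case is swamped by roughly 16,000 false positives.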
Stacey et al (2011) found that decision aids can improve patients’ knowledge and perception of risk, and improve shared decision making. Such aids include visual representations of risk, which have many desirable properties: they can reveal otherwise undetected data patterns, attract attention, and evoke specific mathematical operations (Lipkus and Hollands 1999). Specific types of aid suit specific types of risk, e.g. bar charts for group comparisons, line graphs for temporal interactions among risk factors, and pie charts for showing risk proportions (Lipkus 2007). Icon arrays are also used to display population proportions, and rare events can be demonstrated in magnified or circular images. Figures 1 and 2 show examples of graphical images used for communicating common and rare events.
Figure 1 Two ways of representing the risk of minor adverse events following manipulation. Data from Carlesso et al (2010): pooled relative risk (RR) from meta-analysis, RR = 1.96, or 194 events per 1000 with manipulation versus 99 per 1000 with no manipulation (control). A) Icon array pictorially representing absolute risk; B) bar graph demonstrating the difference between the two groups.
Figure 2 Representing rare risk events.
- A) A circle diagram representing the absolute risk of serious adverse event following manipulation. The blue circle represents 100,000 units, and the red dots represent the number of cases per 100,000.
- B) From prevalence data on vertebrobasilar insufficiency (VBI) (Boyle et al 2009) and the diagnostic utility of a VBI test (Hutting et al 2013), this diagram shows a population of 100,000 (the large blue circle), the proportion who test positive on a VBI test (16,000: the yellow circle), and the proportion who will actually have VBI (1: the red dot)
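A text-mode icon array like the one in Figure 1A can be generated in a few lines. This hypothetical sketch scales the Carlesso et al (2010) rates onto a 10 × 10 grid of 100 icons (the function name and icon characters are illustrative):

```python
# A text-mode icon array for minor adverse event rates:
# 194 per 1000 with manipulation vs 99 per 1000 without.
def icon_array(events_per_1000: int, label: str, grid: int = 100) -> str:
    # Scale the rate to a 10 x 10 grid of 100 icons, rounding to whole icons.
    affected = round(events_per_1000 * grid / 1000)
    icons = ["X"] * affected + ["."] * (grid - affected)
    rows = ["".join(icons[i:i + 10]) for i in range(0, grid, 10)]
    return f"{label} ({affected} per {grid}):\n" + "\n".join(rows)

print(icon_array(194, "Manipulation"))
print()
print(icon_array(99, "No manipulation"))
```

Seeing 19 marked icons next to 10 conveys the absolute difference far more gently than “RR = 1.96” suggests.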
The way risk is framed is considered important for effective communication (Edwards et al 2001). Framing presents logically equivalent information in different ways. Generally, risks can be framed positively (gain-framed) or negatively (loss-framed). We might gain-frame the risk of stroke following manual therapy as “you are very unlikely to experience stroke following this intervention”, or loss-frame it as “this treatment could cause you to have a stroke”. Gain-framing can be more effective when the aim is preventive behaviour with an outcome of some certainty (Fagerlin and Peters 2011), e.g. “exercising more will reduce cardiovascular risk” would be more effective than “if you don’t exercise, you will have an increased risk of cardiovascular disease”. However, loss-framing is generally more effective, especially when concerned with uncertain risks (Edwards et al 2001).
Edwards and Elwyn (2000) reported that risk estimates based on personal risk factors were the most effective in improving patient outcomes. A subsequent Cochrane review reported that, compared with generalised numerical risk communication, personalised risk communication improved knowledge, perception and uptake of risk-reducing interventions (Edwards et al 2006). Personalised risk may include attempts to identify a smaller sub-group similar to the individual patient, and/or consideration of the individual’s own risk factors for an event. This dimension of risk communication contextualises population-level estimates within a single patient’s risk factors, together with their values and world-view. Box 1 highlights the operationalisation of personalising risk.
Box 1: Key messages in communicating risk
Thank you for reading, and stay risky!
Ahmed H, Naik G, Willoughby H, Edwards AGK 2012 Communicating risk. British Medical Journal 344:e3996
Akl EA, Oxman AD, Herrin J, et al 2011 Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database Systematic Reviews 3:CD006776.
Bar-Hillel M 1980 The base-rate fallacy in probability judgments. Acta Psychologica 44:211-233.
Boyle E, Côté P, Grier AR 2009 Examining vertebrobasilar artery stroke in two Canadian provinces. Journal of Manipulative and Physiological Therapeutics 32:S194-200.
Cahan A, Gilon D, Manor O 2003 Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM 96:763–9.
Carlesso LC, Gross AR, Santaguida PL, Burnie S, Voth S, Sadi J. 2010 Adverse events associated with the use of cervical manipulation and mobilization for the treatment of neck pain in adults: a systematic review. Manual Therapy 15(5):434-44. doi: 10.1016/j.math.2010.02.006.
Cosmides L, Tooby J 1996 Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgement under uncertainty. Cognition. 58(1):1-73
Eddy DM 1982 Probabilistic reasoning in clinical medicine: problems and opportunities. In: Kahneman D, Slovic P, Tversky A (eds) Judgement under uncertainty: Heuristics and biases. Cambridge University Press, Cambridge UK, p 249-267
Edwards A, Elwyn G, Covey J et al 2001 Presenting risk information: a review of the effects of ‘framing’ and other manipulations on patient outcomes. Journal of Health Communication 6(1):61-82.
Edwards AG, Evans R, Dundon J 2006 Personalised risk communication for informed decision making about taking screening tests. Cochrane Database of Systematic Reviews 4:CD001865.
Fagerlin A, Zikmund-Fisher BJ, Ubel PA 2011 Helping patients decide: ten steps to better risk communication. Journal of the National Cancer Institute 103:1436-43.
Fagerlin A, Peters E 2011 Quantitative information. In: Fischhoff B, Brewer NT, Downs JS (eds) Communicating risks and benefits: an evidence-based user’s guide. US Department of Health and Human Services, Food and Drug Administration, Silver Spring MD, p 53-64.
Ghosh AK, Ghosh K, Erwin PJ 2004 Do medical students and physicians understand probability? QJM 97:53-55.
Gigerenzer G, Hoffrage U 1996 How to improve Bayesian reasoning without instruction: frequency formats. Psychological Review 102:684-704
Gigerenzer G, Gaissmaier W, Kurz-Milcke E 2007 Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8:53-96.
Gigerenzer G, Wegwarth O, Feufel M 2010 Misleading communication of risk: editors should enforce transparent reporting in abstracts. British Medical Journal 341:791-792
Gigerenzer G 2011 What are natural frequencies? British Medical Journal 343:d6386
Gordis L 2009 Epidemiology. Saunders, Philadelphia, p 102
Hutting N, Verhagen AP, Vijverman V et al 2013 Diagnostic accuracy of premanipulative vertebrobasilar insufficiency tests: a systematic review. Manual Therapy 18(3):177-182.
Lipkus IM, Hollands JG 1999 The visual communication of risk. Journal of the National Cancer Institute Monographs 25:149-163.
Lipkus IM 2007 Numeric, verbal, and visual formats of conveying health risks: suggested best practices and future recommendations. Medical Decision Making 27:696-713.
Mason D, Prevost AT, Sutton S 2008 Perceptions of absolute versus relative differences between personal and comparison health risk. Health Psychology 27(1):87-92.
Politi MC, Han PK, Col NF. 2007 Communicating the uncertainty of harms and benefits of medical interventions. Medical Decision Making 27:681-95
Sainani KL 2012 Communicating risks clearly: absolute risk and numbers needed to treat. American Academy of Physical Medicine and Rehabilitation. 4:220-222
Spiegelhalter DJ 2008 Understanding uncertainty. Annals of Family Medicine 6(3):196-197.
Stacey D, Bennett CL, Barry MJ et al 2011 Decision aids for people facing health treatment or screening decisions. Cochrane Database of Systematic Reviews 1:CD001431.
Thiel HW, Bolton JE, Docherty S et al 2007 Safety of chiropractic manipulation of the cervical spine: a prospective national survey. Spine 32(21):2375-2378
Buy Grieve’s Modern Musculoskeletal Therapy here.