Decision Making in Emergency Medicine: We can’t escape bias

Cite this article as:
Morgenstern, J. Decision Making in Emergency Medicine: We can’t escape bias, First10EM, March 7, 2022. Available at:
https://doi.org/10.51684/FIRS.125798

The human mind is imperfect. We all make mistakes. We are all susceptible to bias. Although we love to talk about and perform procedures, emergency medicine is really all about making difficult decisions, and so we all need a thorough understanding of how our minds work, how mistakes are made, and hopefully how to avoid them. (There is a prior 4-part series on the topic that can be found here.) Through learning about the various cognitive biases, and identifying some strategies to mitigate common errors, the hope is that we will be able to avoid future mistakes. For that reason, I was thrilled to be invited to take part in the writing of a textbook called “Decision Making in Emergency Medicine”. In each chapter, we discuss a different bias, show it in practice through multiple clinical scenarios, and discuss possible mitigating factors. Although we will never be able to completely avoid error, I think this textbook is essential reading for any emergency clinician.

The publishers of the book have been kind enough to allow me to share a couple of the chapters I wrote for the book to give you a taste of what it is trying to accomplish. This is the introduction I wrote, titled “We can’t escape bias”, which covers some of the limitations of cognitive theory in medicine:


We can’t escape bias

As this book clearly demonstrates, the human mind is imperfect. We all make mistakes. We are all susceptible to bias. Through learning about the various cognitive biases, and identifying some strategies to mitigate common errors, the hope is that readers will be able to avoid future mistakes. Unfortunately, there are limitations to the application of cognitive theory in medicine. Even armed with the wealth of knowledge provided by this book, we will still make mistakes.

It is unlikely that we will ever completely eliminate medical error. The decisions we make are incredibly complex, and the human mind is inherently fallible. Integrating what we know about cognitive theory and psychology into medicine is a logical step forward, but there are significant limitations, both theoretical and practical, to the application of cognitive theory in medicine. This chapter explores some of those limitations. 

Theoretical problems

An assumption that underlies much of this book is that, although the human mind is fallible, it also has the tools to self-correct. This is often explained in terms of dual process theory. Most of our thinking is rapid, unconscious, and intuitive – system 1 thinking. However, system 1 thinking is also prone to bias. Luckily, we are also capable of slower, contemplative, analytical thought – system 2 thinking. Most proposed solutions for biased thinking involve recognizing faulty type 1 thinking, and shifting to the presumably more accurate type 2 thinking. 

This simple blueprint may be misleading. Type 1 thinking is not always bad and type 2 thinking is not always better. In fact, especially when it comes to experts like physicians, it isn’t clear that thinking is so easily dichotomized. The clean distinction between type 1 and type 2 thinking is based largely on studies of undergraduate students, usually performing tasks in which they lack expertise, and so it isn’t clear that these results are applicable to expert medical decision making. 

The heuristics employed in type 1 thinking are efficient mental strategies that help us deal with uncertainty and ambiguity. Experts often use heuristics very effectively. In fact, in some scenarios, heuristics may lead to better decisions than analytical thinking. (Croskerry 2005; Eva 2005; Monteiro 2013) Discussions about cognitive biases tend to overemphasize the harms of using heuristics, while ignoring their many benefits. In medical emergencies, the speed of (well trained) type 1 thinking is almost certainly more important than the accuracy of formal analytic thought. Although it occasionally fails, it is important to recognize that type 1 thinking is not inherently bad. (Tversky 1974; Norman 2010; Dhaliwal 2017)

Similarly, although analytic thought results in more accurate decisions in some settings, it is by no means infallible. In fact, conscious reasoning can sometimes produce worse results, because type 2 thinking puts a heavy load on working memory, which has significant limitations. (Norman 2010) Furthermore, many of the described cognitive biases also impair type 2 thinking. For example, premature closure and confirmation bias are both phenomena that arise during data gathering and synthesis, and are therefore more likely to be associated with type 2 thinking. (Norman 2010; Norman 2017)

A final and significant problem for dual process theory is the poorly defined interface between systems 1 and 2. How exactly is one supposed to effectively and consistently transition from type 1 to type 2 thinking? System 1 is generally described as always active, rapidly sorting through the avalanche of available data. Meanwhile, system 2 is described as monitoring system 1 and making corrections as necessary. However, it is not clear how that monitoring happens. What triggers the transition from system 1 to system 2? The act of monitoring would seem to require rapid analysis and pattern recognition to identify possible errors. Thus, the monitoring of system 1 sounds like a system 1 process, which presumably would also be prone to errors. 

If we want to correct errors we need to be able to recognize those errors. Strategies to mitigate cognitive errors are based on the major assumption that we have active control over our decision making processes. They assume that, in the moment, we will be able to recognize that our thinking is biased and flip from non-analytical to analytical thinking. Unfortunately, there is little evidence that this process occurs reliably. (Eva 2005)

It seems like a simple task – we recognize errors in other people’s thinking all the time. However, the blind spot bias tells us that we have a much harder time identifying our own biases. In fact, a core paradox of cognitive theory is that you cannot know that you are wrong. While in the midst of making a mistake, being wrong feels exactly like being right. (Dhaliwal 2017) Thus, although we can recognize past errors, there is actually no mechanism that alerts us that we are currently wrong.

Much like understanding the concept of a visual blind spot does not eliminate the blindness, simply understanding the existence of cognitive biases does not prevent them from occurring. In fact, Daniel Kahneman (the Nobel Prize winning psychologist who popularized dual process theory) says that after 30 years of study, although he can more readily recognize errors in others, he isn’t sure that he is any better at avoiding these biases himself. (Kahneman 2011)

Biases are often more complex than we make them seem

Individual biases are generally more complex than we initially realize. We tend to talk about biases as dichotomous. We either committed an error or we didn’t; our thinking was either biased or it wasn’t. However, much of the research describes behaviour that falls into a grey area between those two extremes. 

For example, although the original research on base rate neglect involved participants completely ignoring the base rate, further research has made it clear that the base rate is often considered, and errors, when they occur, mostly arise from not fully adjusting for the base rate, rather than completely ignoring it. Furthermore, the extent of the error is significantly influenced by the specifics of the scenario, and many “biased” results can be explained by rational thinking that simply conflicts with researcher expectations. (Klayman 1995; Koehler 2010)
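To make the arithmetic concrete, here is a minimal sketch in Python (the numbers are hypothetical, chosen purely for illustration and not drawn from any cited study) of why the base rate matters so much: the same positive result from the same test means very different things depending on prevalence. “Not fully adjusting” for a low base rate means landing somewhere between the test’s raw accuracy and the much lower post-test probability that the low prevalence actually implies.

```python
# A minimal sketch of the arithmetic behind base rate neglect.
# All numbers are hypothetical and chosen only for illustration.

def post_test_probability(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The same "good" test (90% sensitive, 95% specific) applied to a rare
# disease versus a common one:
print(round(post_test_probability(0.01, 0.90, 0.95), 2))  # ~0.15 when prevalence is 1%
print(round(post_test_probability(0.30, 0.90, 0.95), 2))  # ~0.89 when prevalence is 30%
```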

The majority of the research establishing cognitive biases was performed in carefully controlled laboratory settings, usually with college undergraduates as the subjects. This is important, because there is evidence that experience can reduce or eliminate biased thinking. For example, athletes demonstrate much better statistical intuition when a problem is presented using a sporting example, as compared to when the same problem is presented in a less familiar context. (Nisbett 1983) Similarly, a classic puzzle used to demonstrate confirmation bias involves asking participants to test the rule “if a card has a vowel on one side, it has an odd number on the other side.” In this abstract, non-intuitive example, people frequently demonstrate confirmation bias. However, if you present people with the exact same problem using a real world example (“prove that if a person is drinking beer, that person must be over 18 years of age”), participants perform almost perfectly. (Klayman 1995) Therefore, we should not automatically assume that the biases described in laboratory settings generalize to expert clinicians. (Norman 2017)
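To illustrate why the “confirming” choice fails in the card puzzle above, here is a small sketch in Python (the specific card faces are a hypothetical example of mine, not taken from the cited studies) of which cards can actually falsify the stated rule. Confirmation bias shows up as turning over the vowel and the odd number: the odd number can only ever agree with the rule, while the even number, which could actually disprove it, gets ignored.

```python
# Which cards are worth turning over to test the rule:
# "if a card has a vowel on one side, it has an odd number on the other side"?
# Only cards that could reveal a violation (a vowel paired with an even number)
# provide a real test. Card faces here are hypothetical.

VOWELS = set("AEIOU")

def could_falsify(visible_face: str) -> bool:
    """Could turning this card over reveal a vowel/even-number violation?"""
    if visible_face.isalpha():
        # A visible vowel might hide an even number; a consonant is irrelevant.
        return visible_face.upper() in VOWELS
    # A visible even number might hide a vowel; a visible odd number can
    # never violate the rule, no matter what is on the back.
    return int(visible_face) % 2 == 0

cards = ["E", "K", "7", "4"]
print([card for card in cards if could_falsify(card)])  # ['E', '4']
# The typical "confirmatory" choice is E and 7; the 7 can only agree with the
# rule, while the 4 (which could hide a vowel) is the card people miss.
```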

Studies in medicine are (thus far) underwhelming

The true incidence and impact of cognitive biases in medicine is unknown. The evidence is incomplete and imperfect. According to one meta-analysis, the majority of studies looking at cognitive bias in medicine did not take place in real clinical scenarios, but instead employed paper based or simulated vignettes, often done by trainees, and therefore may not generalize well to clinical practice. (Saposnik 2016) Studies that have attempted to examine bias in clinical settings have generally been retrospective, and focused on known misdiagnoses rather than all clinical decisions. Therefore, the results will be skewed by significant hindsight bias and selection bias.

Attempting to classify medical bias retrospectively is fraught with problems. When assessing cases, experts frequently disagree about which biases might be present. When looking at the same case, experts are twice as likely to identify biases if they are told the clinician chose the wrong diagnosis, a clear indication of hindsight bias. (Zwaan 2017) Similarly, whether or not physicians believe an error has occurred is heavily influenced by the patient outcome. (Caplan 1991)

There seems to be a general consensus in medicine that diagnostic errors are more likely to result from cognitive errors than knowledge deficits. However, the evidence for this claim is somewhat unconvincing. The most frequently cited study, by Graber and colleagues (2005), is a retrospective analysis of 100 cases of known diagnostic error. They state that knowledge deficits were involved in only 4 cases, whereas faulty synthesis of data (such as premature closure) was involved in the vast majority. However, it is almost impossible to distinguish premature closure from a scenario in which a diagnosis was not considered because it was unknown to the clinician, or because a known disease presented in an unknown way (in other words, from a knowledge deficit). In fact, knowledge deficits (whether medical or statistical) could explain a lot of decisions that appear to be affected by bias. Thus, knowledge deficits may be an underestimated cause of diagnostic error. (Norman 2010) Furthermore, addressing knowledge deficits is the best technique we currently have to improve medical decision making.

That being said, considering the sheer number of decisions we make in medicine, and the large number of possible biases, it is likely that these biases play an important role in medical error. Assuming that our decisions are impacted by these biases, the more important questions are how and if we can prevent these errors. 

Unfortunately, the evidence that biases can be mitigated in medicine is mixed, with the bulk of the trials showing no benefit. A few trials demonstrate improved diagnostic accuracy among trainees on paper based vignettes when more time is taken for reflection. (Mamede 2008; Hess 2015) However, Sherbino and colleagues (2012) actually demonstrated more errors when trainees were instructed to slow down and be thorough, and numerous other studies have demonstrated no difference in accuracy between clinicians instructed to work rapidly and those instructed to work slowly and thoroughly. (Ilgen 2011; Ilgen 2013; Norman 2014; Monteiro 2015)

Three studies looked at educational interventions designed to improve diagnostic thinking by educating students about cognitive biases (meta-cognition). Another study attempted to use a cognitive debiasing checklist, with questions such as “did I consider the inherent flaws of heuristic thinking?” None of these interventions resulted in improved accuracy. (Sherbino 2011; Shimizu 2013; Sherbino 2014; Smith 2015)

Considering the potential extent of the problem, there has been relatively little research into potential solutions. The failures thus far are a sobering reminder of the complexity of human cognition. We should probably be skeptical of overly simplistic solutions. Our training as medical experts spans many years, and our training in critical thinking (whether formal or informal) started many years before that. It is doubtful that simple instructions to “think about our thinking” will be enough to change the momentum of our ingrained strategies.

However, I don’t think these early failures should dissuade us. You wouldn’t decide that a child has no musical ability after only a month of piano lessons, but our early attempts at teaching cognitive debiasing look a lot more like that month than 10,000 hours of deliberate practice. We need more research, and we need to find ways to train doctors to use their cognitive resources efficiently and effectively.

Recognizing potential harms

Although improving medical decision making seems like a clear win, I think it is important to consider the potential harms of applying cognitive theory in medicine. The most obvious harm is opportunity cost. Thus far, there is no evidence that cognitive debiasing techniques improve decision making or patient outcomes. Time is a precious resource in medicine. If cognitive theory does not improve outcomes, the time and effort required to create curricula, teach, and learn this new material could be better spent elsewhere.

Likewise, in eschewing rapid heuristics and promoting slow analytic thought, debiasing techniques are likely to make the practice of medicine less efficient. This inefficiency would be worthwhile if it translated into better decisions. However, to date there is no evidence that these debiasing techniques are effective, so for now the inefficiency is simply inefficiency. In a worst case scenario, attempts to use slower analytic thinking in medical emergencies could result in delays to critical interventions and bad patient outcomes.

Attempts to avoid cognitive biases could also result in substantial costs. The lesson of confirmation bias is that we should focus on ruling out alternatives rather than searching for confirmatory evidence. However, there are always numerous potential alternative diagnoses. If the solution to confirmation bias is understood as requiring tests to rule out each of those alternatives, the result could be significant increases in testing, costs, and harms to our patients.

A more subtle harm is the potential for attempts at debiasing to actually increase error. Many of the described biases exist at opposite ends of a spectrum. Avoiding one may push us toward committing the other. For example, the chapter on base rate neglect reminds us to consider the base rate whenever we make diagnostic decisions. Rare conditions are rare, and shouldn’t be pursued frequently. However, in avoiding the workup of rare conditions, we are falling into another cognitive bias: the zebra retreat. Rare conditions, although rare, do happen, and so sometimes need to be worked up. The solution to one bias necessarily leads us towards another.

Summary

Although there seems to be little doubt that cognitive biases play some role in medical error, the extent of their impact is not clear. Most importantly, it isn’t clear if these biases can be prevented, and if so, how. Thus far, attempts to mitigate cognitive biases through educational programs in medicine have mostly failed, although the research has been quite limited. It is also important to acknowledge that many of the processes described as biases are really heuristics that are frequently used to efficiently and accurately arrive at a correct diagnosis. When attempting to improve our cognition, we need to be careful not to throw the baby out with the bathwater.

How should the practicing clinician proceed? As with most scientific reviews, the conclusion is that more research is needed. I am reassured by evidence that more experienced physicians are less prone to bias than trainees. (Feltovich 1984) It is likely that we can teach ourselves to be more effective thinkers, but we are a long way from understanding the full impact of these biases on medical practice, and more importantly the techniques that may help prevent them. In the meantime, astute clinicians will endeavour to learn about these biases, attempt to identify specific areas of cognitive reasoning that might be improved, and, most of all, remain humble in their clinical reasoning.

References

Caplan RA, Posner KL, Cheney FW (1991) Effect of outcome on physicians’ judgments of appropriateness of care. JAMA 265:1957–1960.

Croskerry P (2005) Diagnostic Failure: A Cognitive and Affective Approach. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb.

Dhaliwal G (2017) Premature closure? Not so fast. BMJ Qual Saf 26(2):87-89.

Eva KW, Norman GR (2005) Heuristics and biases – a biased perspective on clinical reasoning. Med Educ 39(9):870-872.

Feltovich PJ, Johnson PE, Moller JH, Swanson DB (1984) The role and development of medical knowledge in diagnostic expertise. In: Clancey WJ, Shortliffe EH (eds) Readings in Medical Artificial Intelligence. Addison-Wesley, Reading, MA.

Graber ML, Franklin N, Gordon R (2005) Diagnostic error in internal medicine. Arch Intern Med 165(13):1493-1499.

Hess BJ, Lipner RS, Thompson V, et al (2015) Blink or think: can further reflection improve initial diagnostic impressions? Acad Med 90:112–18.

Ilgen JS, Bowen JL, Yarris LM, Fu R, Lowe RA, Eva K (2011) Adjusting our lens: Can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment? Acad Emerg Med 18(suppl 2):S79–S86.

Ilgen JS, Bowen JL, McIntyre LA, et al. (2013) Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med 88:1545–1551.

Kahneman, D (2011) Thinking, fast and slow. New York, NY, US: Farrar, Straus and Giroux.

Klayman J (1995) Varieties of Confirmation Bias. Psychology of Learning and Motivation 32:385-418.

Koehler JJ (2010) The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behav Brain Sci 19(1):1-17.

Mamede S, Schmidt HG, Rikers RM, Penaforte JC, Coelho-Filho JM (2008) Influence of perceived difficulty of cases on physicians’ diagnostic reasoning. Acad Med 83:1210–1216.

Monteiro SM, Norman G (2013) Diagnostic Reasoning: Where We’ve Been, Where We’re Going. Teaching and Learning in Medicine 25(sup1):S26-S32.

Monteiro SD, Sherbino JD, Ilgen JS, et al (2015) Disrupting diagnostic reasoning: Do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians? Acad Med 90:511–517.

Nisbett RE, Krantz DH, Jepson C, Kunda Z (1983) The use of statistics in everyday inductive reasoning. Psychological Review 90:339-363.

Norman G, Sherbino J, Dore K (2014) The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med 89:277–284.

Norman GR, Eva KW (2010) Diagnostic error and clinical reasoning. Medical Education 44(1):94-100.

Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S (2017) The Causes of Errors in Clinical Reasoning. Academic Medicine 92(1):23-30.

Saposnik G, Redelmeier D, Ruff CC, Tobler PN (2016) Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak 16(1):138.

Sherbino J, Yip S, Dore KL, Siu E, Norman GR (2011) The effectiveness of cognitive forcing strategies to decrease diagnostic error: An exploratory study. Teach Learn Med 23:78–84.

Sherbino J, Dore KL, Wood TJ, et al (2012) The relationship between response time and diagnostic accuracy. Acad Med 87:785–791.

Sherbino J, Kulasegaram K, Howey E, Norman G (2014) Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial. CJEM 16:34–40.

Shimizu T, Matsumoto K, Tokuda Y (2013) Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis. Med Teach 35:e1218–e1229.

Smith BW, Slack MB (2015) The effect of cognitive debiasing training among family medicine residents. Diagnosis 2:117–121.

Tversky A, Kahneman D (1974) Judgment under Uncertainty: Heuristics and Biases. Science 185(4157):1124-1131.

Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G (2017) Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf 26(2):104-110.


Cite as: Morgenstern, J. We can’t escape bias. In Raz M, Pouryahya P (Eds). Decision Making in Emergency Medicine. Singapore. Springer Singapore; 2021.
