This is Part 2 of a 4-part series.
- Part 1: A brief overview of cognitive theory
- Part 3: Possible solutions
- Part 4: Problems with cognitive theory
This post will review the common cognitive errors described in medicine. You will notice that this list is not clean. Human cognition is a complex process. Many of the biases overlap. Some are more general descriptions that encompass other more specific examples. Often, two different biases will represent opposite ends of a cognitive spectrum, both ends of which can result in errors. This list represents the cognitive biases that are most often described in the context of medical errors, but there are many other cognitive biases that affect our daily lives. For example, I particularly like the IKEA effect: our tendency to disproportionately value objects we had a hand in putting together, regardless of end result.
Affective error (aka outcome bias, value bias, the chagrin factor)
This is the tendency to convince yourself that what you want to be true is true, instead of less appealing alternatives. For example, if you see a friend with a headache, you are more likely to opt for a benign diagnosis than subject them to a lumbar puncture to rule out subarachnoid hemorrhage. Similarly, when we dislike a patient, we may write off her shortness of breath as anxiety instead of considering pulmonary embolism. Countertransference is a subset of affective error.
Aggregate bias (aka ecological fallacy)
The belief that aggregate data, such as the data involved in the validation of clinical decision instruments, does not apply to the patient in front of you. This can lead to errors of commission, such as increased CT usage when decision instruments such as PECARN are ignored.
Ambiguity effect
We have a tendency to select options (or make diagnoses) for which the probability is known, instead of options for which the probability is unknown. For example, a patient may present with fever and joint pains after a cruise in the Caribbean. You consider influenza, but also remember hearing about Chikungunya. However, you don’t really know how common Chikungunya is and don’t have a test available to confirm it, so you end up favoring the diagnosis of influenza (whether or not it is actually more likely).
Anchoring
Prematurely settling on a single diagnosis based on a few important features of the initial presentation and failing to adjust as new information becomes available. This is closely related to, and made worse by, confirmation bias.
Diagnosis momentum: Similar to anchoring. Once a diagnostic label has been assigned to a patient by another individual, it is very difficult to remove that label and interpret their symptoms with fresh eyes.
Ascertainment bias
When your thinking is shaped by prior expectations. In other words, you see what you expect to see. This is the umbrella category that contains stereotyping and gender bias. For example, a homeless patient with past drug abuse is found unconscious and it is assumed that he has overdosed, when in fact he has severe hypoglycemia.
Availability bias
The tendency to judge the likelihood of a disease by the ease with which relevant examples come to mind. Recent experience with a particular diagnosis increases the chance that the same diagnosis will be made again. The opposite is also true, so that a diagnosis that hasn’t been seen in a long time is less likely to be made. In general, this will lead to rare diseases being underdiagnosed and common diagnoses being overdiagnosed. For example, in the middle of flu season, it is incredibly easy to diagnose every patient with shortness of breath as having the flu, potentially missing a subtle pulmonary embolism.
“Recent case bias” or “significant case bias” are subtypes of the availability bias. Rather than the most common diagnosis being the one that comes to mind, a rare diagnosis that was seen recently or that has a significant impact on you (for example, a miss that resulted in a lawsuit) dominates the differential. After catching an aortic dissection in a patient that presented with isolated leg pain, you might order more CT scans in individuals with soft tissue injuries.
Base rate neglect
The failure to incorporate the true prevalence of a disease into diagnostic reasoning. For example, we often overestimate the pre-test probability of pulmonary embolism, working it up in essentially no-risk patients, skewing our Bayesian reasoning and resulting in increased costs, false positives, and direct patient harms. The standardly taught “worst first” mentality in emergency medicine is a form of base rate neglect, in which we are taught to consider (and sometimes work up) dangerous conditions, no matter how unlikely they are.
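To make the Bayesian point concrete, here is a minimal sketch of how the pre-test probability (the base rate) dominates the post-test probability. The prevalence, sensitivity, and specificity figures below are invented for illustration only, not real test characteristics for any condition:

```python
# Illustrative sketch only: all numbers are hypothetical, not clinical data.
def post_test_probability(pretest, sensitivity, specificity):
    """Probability of disease after a positive test, via Bayes' theorem."""
    true_pos = pretest * sensitivity          # P(disease and positive test)
    false_pos = (1 - pretest) * (1 - specificity)  # P(no disease and positive test)
    return true_pos / (true_pos + false_pos)

# The same "positive" result means very different things at different base rates:
low_risk = post_test_probability(0.01, 0.95, 0.90)   # ~0.088
high_risk = post_test_probability(0.30, 0.95, 0.90)  # ~0.803
```

With a 1% base rate, even a quite good test leaves the patient more likely well than sick; ignoring that base rate is exactly the error described above.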
Belief bias
The tendency to accept or reject data based on one’s personal beliefs. For example, an individual may be a true believer in tPA for ischemic stroke, and therefore reject any evidence that would contradict that belief.
Blind spot bias
We often fail to recognize our own weaknesses or cognitive errors, while it is much easier to recognize the errors or weaknesses of others. A related bias is the Dunning-Kruger effect, which describes the tendency for unskilled individuals to overestimate their abilities, while highly skilled individuals tend to underestimate theirs. For example, almost everyone claims to be a better-than-average driver, but obviously half the population must actually be worse than average. Are you better at communicating with your patients than the average doctor? What do you think the rest of your department thinks about themselves?
Commission and omission biases
Commission: The tendency towards action rather than inaction
Omission: The tendency towards inaction rather than action
We all have these, but often employ them in the wrong settings. When working up low risk patients, we tend to make errors of commission by over-ordering tests when we would be better off doing nothing. In resuscitation, we often find ourselves hesitant to act. The baseline state we should probably strive for is commission in resuscitation and omission otherwise.
Confirmation bias
Once you have formed an opinion, you have a tendency to only notice the evidence that supports you and ignore contrary evidence. For example, a patient might present with a throbbing unilateral headache, photophobia, and nausea that makes you think about migraines. You may hear that there is a family history of migraines, but unconsciously discount the fact that the patient described the onset as a thunderclap.
Attempting disconfirmation is an essential scientific strategy. We all know you cannot prove the statement “all swans are white” just by observing white swans, because no matter how many you observe, the next one might prove you wrong. However, finding a single black swan definitively proves that “not all swans are white”. To translate this into medicine, when seeing an obese patient with burning retrosternal chest pain, we shouldn’t be seeking evidence that might confirm that this is GERD; rather, we should be trying to disconfirm that theory (by looking for ACS).
Feedback sanction
A factor that can reinforce other diagnostic errors, and one that is particularly common in emergency medicine. There may be a significant time delay before one sees the consequences of a cognitive error, or one may never see those consequences at all, and the behavior is therefore reinforced. For example, we are criticized heavily if we miss a diagnosis, but we never see the results of increased CT usage (there is a feedback sanction in that any cancers caused will not be identified for decades), so we are biased towards more CT usage.
Framing effect
Your decisions are affected by how you frame the question. For example, when deciding whether to order a CT, it matters whether you consider the 1/100 chance of missing a deadly condition or the 99/100 chance that the patient is fine.
Similarly, your decisions are influenced by the context in which the patient is seen and the source of the information. You are more likely to miss an AAA in a patient you are seeing in the ambulatory zone than if you were to see the exact same patient in a resuscitation room.
Fundamental attribution error (eg negative stereotyping)
An overweighting of an individual’s personality as the cause of their problems rather than considering potential external factors. In other words, we tend to blame patients for their illnesses. For example, we tend to blame obese people rather than consider the social and economic factors that drive obesity. Similarly, if you hear about a doctor missing an MI, you have a tendency to think the physician must have done something wrong, rather than consider the context of diagnosis in the emergency department and difficulty of widely varied clinical presentations.
Gambler’s fallacy
The erroneous belief that chance is self-correcting. For example, if an individual flips a coin and gets heads 10 times in a row, there is a tendency to believe that the next flip is more likely to be tails. In the emergency department, one might diagnose 3 patients in a row with pulmonary embolism, and therefore believe that it is unlikely the next patient will also have a PE, despite the fact that the patients are clearly unrelated. This leads to a form of base rate neglect, in which the pretest probability is inappropriately adjusted based on irrelevant facts.
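The statistical point is easy to demonstrate. This quick simulation (my illustration, not from the original sources) shows that the flip immediately following a streak of heads is still 50/50, because independent events have no memory:

```python
import random

def heads_rate_after_streak(streak_len, trials=200_000, seed=42):
    """Estimate P(heads) on the flip immediately after `streak_len` heads in a row."""
    rng = random.Random(seed)  # seeded for reproducibility
    heads_after = streaks_seen = 0
    for _ in range(trials):
        # Only keep trials where the opening flips were all heads.
        if all(rng.random() < 0.5 for _ in range(streak_len)):
            streaks_seen += 1
            if rng.random() < 0.5:  # the "next" flip after the streak
                heads_after += 1
    return heads_after / streaks_seen

print(heads_rate_after_streak(5))  # hovers around 0.5, not below it
```

No matter how long the preceding streak, the estimate stays near 0.5; the streak carries no information about the next flip, just as the last three PE diagnoses carry no information about the next patient.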
Hindsight bias
Knowing the outcome can significantly affect our perception of past events. We see this frequently in medicolegal cases, where experts judge the actions of the physician but are influenced by already knowing the outcome of the case.
Information bias
The tendency to believe that the more information one can gather to support a diagnosis, the better. This can become especially problematic when combined with order effects, so that newly obtained information is valued more highly than information obtained earlier, potentially skewing one’s reasoning.
Order effects (aka primacy, recency)
This refers to the fact that retention of transferred information follows a U-shaped function: we tend to remember information from the beginning of an encounter and the end of an encounter. This can be related to anchoring, in that we focus on the first thing a patient says and anchor on that information, no matter what other information we are provided with. Order effects are particularly important in transitions of care.
Playing the odds
This is the tendency, when faced with ambiguous presentations, to assume a benign diagnosis. You are relying on the fact that benign diagnoses are common to mitigate the harms of misdiagnosis. This is essentially the opposite of the standard emergency “worst first” mentality. It is also the opposite end of the spectrum of base-rate neglect.
Posterior probability error
The probability of a diagnosis is overly influenced by prior events. It is the opposite of the gambler’s fallacy. For example, if you diagnose 12 straight patients with muscular back pain, there is a tendency to diagnose the 13th with the same. This is closely related to availability bias.
Premature closure
This is the tendency to stop too early in the diagnostic process, accepting a diagnosis before gathering all the necessary information or exploring all the important alternatives. This is an umbrella category that can encompass a number of other errors; essentially any cognitive error can create the belief that we have already arrived at the correct diagnosis and prevent further verification. The idea is “when the diagnosis is made, the thinking stops.”
Representativeness restraint (aka prototypical error)
The tendency to judge the likelihood of a diagnosis based on a typical prototype of the diagnosis. The probability of the disease is based entirely on how closely the current presentation matches that typical prototype. The result is that atypical presentations of diseases are more likely to be missed. “If it looks like a duck and quacks like a duck, it must be a duck.”
Sunk cost fallacy
Once one is invested in something, it is very difficult to let it go, even if that original investment is now irrelevant. In medicine, this can occur when a physician feels intellectually invested in a particular diagnosis. If, after considerable time and energy, a physician arrives at a diagnosis, it can be difficult to set aside those efforts (the sunk costs) and reconsider the diagnosis when new data become available.
Sutton’s slip
Sutton’s law is based on the story of the bank robber Willie Sutton, who, when asked why he robbed banks, replied “because that’s where the money is.” The idea is that we should focus our diagnostic strategy by going for the obvious. This becomes an error (Sutton’s slip) when possibilities other than the obvious are not given sufficient consideration. For example, the obvious diagnosis for the 10th febrile, snotty, coughing child of the day during flu season is flu, but it would be a mistake not to consider other possible causes of the fever.
Triage cueing (eg geography is destiny)
When diagnostic decisions are influenced by the original triage category a patient is placed in. (A form of diagnosis momentum – the triage nurse diagnosed the patient as “not sick”, therefore the patient must not be sick.) There are many forms of triage, from patients self-triaging to different levels of care, to the referrals you make out of the emergency department that cue your consultants based on your assessment.
Ying Yang bias
The belief that a patient cannot possibly have a diagnosis because they have already been subjected to a multitude of negative tests. (I.e., they have been worked up the ying-yang.) This is a combination of diagnosis momentum (with the diagnosis being ‘nothing’) and base rate neglect (you overvalue the previously negative tests and assign too low a pre-test probability).
Zebra retreat
Backing away from a rare diagnosis only because it is rare. Often this is because a physician does not want to develop a reputation for being unrealistic or wasting resources. This occurs along a spectrum with availability bias and base rate neglect: if you never work up rare diagnoses, that may represent a zebra retreat, whereas if you are frequently searching for zebras, that represents base rate neglect and will result in over-diagnosis and wasted resources.
In addition to these specific cognitive biases, there are many factors we should be aware of that increase our likelihood of making cognitive errors.
- Cognitive overload
- High decision density
- Interruptions or distractions
- Sleep deprivation (cognitive decision making tends to reach its nadir at 3-4am; some studies equate cognitive performance at that time with being legally intoxicated)
- Circadian dyssynchronicity
- Emotional perturbations (affective state)
Next week I will continue with part 3 of this series, outlining some ways to mitigate these errors.
There are 3 excellent episodes of Emergency Medicine Cases on decision making and cognitive errors:
- Episode 11: Cognitive Decision Making and Medical Error
- Episode 62: Diagnostic Decision Making in Emergency Medicine
- Episode 75: Decision Making in EM – Cognitive Debiasing, Situational Awareness & Preferred Error
Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract. 2009;14 Suppl 1:27-35. PMID: 19669918
Croskerry P. Diagnostic Failure: A Cognitive and Affective Approach. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb. PMID: 21249816 [Free full text]
Croskerry P. ED cognition: any decision by anyone at any time. CJEM. 2014;16:(1)13-9. PMID: 24423996
Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:(8)775-80. PMID: 12915363
Croskerry P. From mindless to mindful practice–cognitive bias and clinical decision making. N Engl J Med. 2013;368:(26)2445-8. PMID: 23802513
Groopman J. How Doctors Think. Houghton Mifflin Harcourt; 2008.
Tversky A, Kahneman D. Judgment under Uncertainty: Heuristics and Biases. Science. 1974;185:(4157)1124-31. PMID: 17835457