Morgenstern, J. Decision Making in Emergency Medicine: Availability Bias, First10EM, March 7, 2022. Available at:
https://doi.org/10.51684/FIRS.125778
The human mind is imperfect. We all make mistakes. We are all susceptible to bias. Although we love to talk about and perform procedures, emergency medicine is really all about making difficult decisions, and so we all need a thorough understanding of how our minds work, how mistakes are made, and hopefully how to avoid them. (There is a prior four-part series on the topic that can be found here.) Through learning about the various cognitive biases, and identifying some strategies to mitigate common errors, the hope is that we will be able to avoid future mistakes. For that reason, I was thrilled to be invited to take part in the writing of a textbook called “Decision Making in Emergency Medicine”. In each chapter, we discuss a different bias, show it in practice through multiple clinical scenarios, and discuss possible mitigating factors. Although we will never be able to completely avoid error, I think this textbook is essential reading for any emergency clinician.
The publishers of the book have been kind enough to allow me to share a couple of the chapters I wrote for the book to give you a taste of what it is trying to accomplish. This is the chapter on availability bias:
Availability bias
In an ideal world, each diagnosis would follow an algorithm that starts with a pretest probability and then applies the likelihood ratios of each feature of the patient’s history, physical exam, and any required tests to arrive at the most likely final diagnosis. However, that is simply not how the human mind works. There are many ways that this algorithm can fail.
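(To make that idealized calculation concrete, with purely illustrative numbers: a pretest probability of 10% corresponds to odds of 1:9; a single finding with a positive likelihood ratio of 5 multiplies those odds to 5:9, which converts back to a posttest probability of roughly 36%. Each additional finding would multiply the odds by its own likelihood ratio, since posttest odds = pretest odds × likelihood ratio, and probability = odds / (1 + odds).)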
Clearly, a clinician cannot make a diagnosis that she has never heard of. Even when she knows the diagnosis, it has to come to mind while she is assessing the patient. The final diagnosis will always be, by definition, one that came to mind. Furthermore, the clinician must understand how important each feature of the presentation is, know the relevant likelihood ratios, and accurately adjust her pretest probability based on the gathered information.
There is some evidence that medical errors are more often the result of cognitive failure than of knowledge deficits. (Graber 2005) In other words, the correct diagnosis is likely to come to mind, but we fail to choose it from the menu of available options. One possible reason for such errors is that we do not arrive at a diagnosis by following a mathematical algorithm using likelihood ratios, but instead match the patient in front of us to examples of various diseases we have developed in our minds. We use heuristics to rapidly match that patient to the many disease patterns we learned throughout our training. The process can go awry when examples of some diseases come to mind more easily than others, causing us to overestimate or underestimate the likelihood of the diagnosis.
Definition: Availability bias occurs when a clinician judges the likelihood of a diagnosis based on how easily similar examples come to mind (whether because the diagnosis is seen frequently, a rare diagnosis was seen recently, or a specific case had a significant emotional impact, making it easier to recall). (Tversky 1974; Eva 2005)
Case 1
It has been a long day. You knew it was going to be busy, being a Sunday afternoon in the middle of flu season, but you came in energized and ready to work. Unfortunately, the first words out of your colleague’s mouth upon your arrival were, “Do you remember that patient?”
Yesterday, you had seen a young woman with pleuritic chest pain that seemed to be muscular. She had no risk factors, a normal physical exam, and was PERC negative, so you sent her home with some ibuprofen and follow-up with her GP. Unfortunately, she was brought back to the emergency department by ambulance this morning in significant distress, and a CT scan revealed large bilateral pulmonary emboli (PE).
As you near the end of your shift, the registrar presents a case. “This one should be easy. It is a young female who has some right-sided pleuritic chest pain that started after a big coughing spell. She has no prior history of PE or DVT, no leg symptoms, and no risk factors. Her vital signs are normal. The only finding on exam is tenderness to palpation of her right chest that exactly reproduces the pain that brought her in. Her ECG and chest x-ray are normal. She is low risk by the Wells score and PERC negative, so I don’t think she needs any further testing. I am just going to treat her pain, give her good return precautions, and have her follow up with her GP later this week.”
An image of yesterday’s patient immediately forms in your mind. The registrar seems surprised by your tirade on the shortcomings of the PERC score, but agrees to add bloodwork and a CT scan “just to be safe”. You hand the patient over to the oncoming consultant, and head home feeling good that you did not make the same mistake twice. When you check the results 5 days later, you find that the CT was negative, but notice that the patient is still admitted to the hospital. It turns out that there was an incidental mass found on the CT. The surgeons performed a biopsy, which was thankfully negative, but the patient developed pneumonia after the procedure.
The workup of PE is somewhat algorithmic: clinical decision aids help us determine a pretest probability, and then a limited number of tests can be added to rule in or rule out the diagnosis. Usually, a negative PERC score in a low risk patient ends the workup for PE. However, in this case, despite being low risk, the patient presented by the registrar closely matched the patient in whom the consultant had missed a PE the day before. That missed diagnosis was already on the consultant’s mind throughout the day, and so when a similar patient was presented, PE was moved to the top of the differential diagnosis. Despite having used the PERC score to correctly rule out PE in hundreds of patients previously, the consultant could only think of the one recent miss, resulting in an overestimation of this patient’s risk of PE, an unnecessary test, and downstream complications.
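For those who like to see that structure laid out explicitly, the skeleton of the workup can be sketched in a few lines of code. This is only an illustration: the eight PERC criteria below are the published ones, but the function names, inputs, and output strings are my own shorthand, and nothing here is a substitute for the validated decision rules or for clinical judgement.

```python
# Illustrative sketch of the algorithmic PE workup described above.
# The PERC criteria are the published ones; everything else is shorthand.

def perc_negative(age, heart_rate, o2_sat_room_air, hemoptysis, estrogen_use,
                  prior_dvt_or_pe, unilateral_leg_swelling,
                  recent_surgery_or_trauma):
    """Return True if all eight PERC criteria are satisfied."""
    return (age < 50
            and heart_rate < 100
            and o2_sat_room_air >= 95
            and not hemoptysis
            and not estrogen_use
            and not prior_dvt_or_pe
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)


def pe_workup(low_pretest_probability, perc_neg):
    """Idealized decision flow: low pretest probability plus a negative
    PERC ends the workup; anything else moves on to further testing."""
    if low_pretest_probability and perc_neg:
        return "PE ruled out clinically; no further testing"
    return "Further testing required (e.g. D-dimer, CT pulmonary angiogram)"
```

Written out this way, the registrar’s plan was entirely consistent with the algorithm. It was the consultant’s readily available memory of yesterday’s miss, not the patient’s actual risk, that changed the output.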
Recent presentations, and presentations with a strong emotional component (such as missed diagnoses), are more likely to come to mind, resulting in a misperception of the likelihood of that diagnosis.
Case 2
A child presents with a fever, runny nose, and a rash. The history and physical exam are not concerning. She is acting normally according to her parents. There is no history of vomiting. She is previously healthy and fully vaccinated. There is no travel history or sick exposures. The vital signs are normal, and the child appears well aside from the general features of a URTI: coryza, mild conjunctivitis and some lymphadenopathy. There are no meningeal signs. The rash is a nonspecific, erythematous, blanchable, maculopapular rash that looks viral.
Your instincts are that this is just another viral illness, but you are a little nervous. Three children were recently diagnosed with meningococcal disease in a different area of the country, and meningitis has been all over the news. This was reinforced by an email from public health that you read at the beginning of your shift discussing the possibility of meningitis in children with fever and a rash. It seems very unlikely that this child has meningitis, but it is hard to ignore those dreadful news reports. As you debate the best course of action, a different child pops into your head. During residency, you saw another well appearing child with a rash who rapidly deteriorated in front of you, and ultimately died of meningococcemia. The details were different (that child had not been immunized and had a nonblanchable rash), but you cannot afford to make the same mistake.
After you explain the risk of meningitis, the parents agree to a lumbar puncture (they have also been watching the news). The first attempt fails, and you have to use procedural sedation for the second try. Thankfully, the sample obtained is completely normal.
Two days later the child returns to the emergency department and is diagnosed with Kawasaki disease. In retrospect, she had all the symptoms on your initial assessment, but you were so concerned about meningitis that Kawasaki disease never crossed your mind.
As is illustrated by this case, misdiagnosis is frequently the result of multiple cognitive biases (one study demonstrated an average of 6 cognitive or system-related errors per incorrect diagnosis), and many cognitive biases are closely related. (Graber 2005) In this case, the widespread media coverage of meningitis made the diagnosis seem more likely (availability bias), and made the clinician lose track of the incredibly low pretest probability of meningitis in a well appearing, fully vaccinated child with no travel history (base rate neglect). Once meningitis was considered, it became the main focus of the workup (anchoring bias), and the negative workup resulted in false reassurance (premature closure).
Case 3
You have no idea what is going on with this patient. He is a 28 year old man with no major medical problems. He is on bupropion for smoking cessation, but does not take any other medications and, when he is able to speak, denies using drugs. The triage nurse originally placed him in the psychiatric assessment area, because he was rapidly pacing back and forth, not making a lot of sense. Every 15 seconds or so, he grimaces and appears to be in a lot of pain. Occasionally his legs seem to spasm and he will fall to the floor, but he never loses consciousness, and will quickly resume pacing again. It is difficult to get any vital signs recorded, and he is not cooperative with any aspect of the physical exam.
The patient was given an antipsychotic to settle the agitation, but, if anything, the symptoms got worse. Looking at the repetitive grimaces, you wonder if the patient is having seizures. You move him to the resuscitation room, and give multiple doses of intravenous midazolam, but his symptoms remain unchanged 20 minutes later.
The patient is clearly sick, but you are not sure how to proceed. You ask a colleague for help, and she immediately says, “These are extrapyramidal symptoms caused by the bupropion. I had a patient who presented almost exactly like this 2 years ago.” You give the patient a dose of benztropine, and 10 minutes later he is sitting comfortably on the stretcher, talking quietly with the nurse about his attempts to quit smoking.
Unlike the prior examples, the availability of a previous similar patient rapidly led to the correct diagnosis in this case. It is important to remember that considering similar cases does not always lead to errors. More generally, we call this type of reasoning the availability heuristic, and it is often quite useful. (Tversky 1974) It only becomes the availability bias when the limitations of such reasoning are not accounted for. In reality, we often only label it a bias when it results in the wrong answer.
Unfortunately, without the benefit of hindsight, it is often impossible to distinguish appropriate uses of the availability heuristic from potentially dangerous uses of the availability bias. To be effective, clinicians need to embrace the diagnostic benefits of comparing current patients to prior examples, while remaining wary of the potential errors that can result from overreliance on such non-analytic thinking.
Case 4
Midnight shifts on the weekend are always hard, but homecoming weekend has doubled the number of patients. Not surprisingly, the majority of the influx is directly related to alcohol abuse, and you are starting to get a little fed up.
The nurse hands you another chart. “We’ve got another one who is going to need to sleep it off until morning.” There is still a very long list of patients waiting to be seen. You quickly enter the room, are hit by a strong aroma of alcohol, and see a young man snoring on the stretcher. He only responds with a moan when you touch him, but he is maintaining his airway. His vital signs are unremarkable, and his pupils are normal. You do not see any evidence that he has taken anything other than alcohol, and you do not see any obvious signs of trauma. You decide to leave him to sober up, like the 10 other students clogging the department, and ask the nurse to tell you when he is awake so you can talk to him.
Six hours later, the end of the shift is finally in sight. You have dispositioned most of the drunk patients, and just need to finish a few charts while you wait for your relief to arrive. One of the nurses approaches. “Hey doc. Have you had a chance to recheck the kid in room 9? He still isn’t waking up at all. He must have had a ton to drink, or do you think maybe he was on something else?”
Your heart starts to race. How many times have you heard the lecture cautioning against missing trauma in a patient presumed to be drunk? You can clearly picture the horrible CTs from the most recent grand rounds. You quickly order a CT scan of his head. Unfortunately, because of the overnight backlog, there is a delay, but an hour later you breathe a sigh of relief as the images pop up and are completely normal.
But he still isn’t waking up. As the stretcher rolls past on the way back from radiology, you notice something on his wrist: a medical alert bracelet. He is an insulin-dependent diabetic. Eight hours into his ED visit, you finally check his glucose level, and it is undetectably low.
In this case, availability bias struck twice. First, the diagnosis of intoxication was made because of the huge number of drunk patients already seen that night. Second, when the possibility of a missed diagnosis was considered, the first thing that came to mind was trauma, based on similar cases presented at grand rounds. In both cases, the availability of a believable diagnosis prevented the clinician from identifying hypoglycemia as the true cause of the patient’s altered mental status.
Conclusion
Availability bias occurs when we overestimate the likelihood of a diagnosis because similar examples are readily available in our memory. Consequently, we will also mistakenly underestimate the probability of diseases that do not come easily to mind. The use of availability is not inherently bad. As was demonstrated in case 3, it can rapidly lead us to the correct diagnosis. Furthermore, common conditions are likely to come to mind more easily, which is a natural way to ensure that we are considering the base rate of disease. However, there are numerous ways that the availability heuristic can lead to biased thinking. Recent cases can weigh more heavily in our thinking, but the diagnosis of a patient is generally completely independent of other recently seen patients. Cases with high emotional impact, such as those with a bad outcome, a missed diagnosis, or a resulting lawsuit, can result in individual doctors over-pursuing a certain diagnosis, to the detriment of their patients. Conversely, availability bias can result in a rarely seen disease being too easily dismissed.
Potential solutions:
- Routinely consider alternatives. Get in the habit of routinely asking: what else could this be? Physically writing out a differential diagnosis can make gaps more obvious.
- Limit reliance on memory. Using cognitive aids for the differential diagnosis limits the chance that you will miss an important diagnosis just because it does not come easily to mind.
- Meta-cognition. Take time to ask yourself: why do I think this? Might my thinking be overly influenced by cognitive biases?
References
Croskerry P (2003) The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 78(8):775-780.
Eva KW, Norman GR (2005) Heuristics and biases – a biased perspective on clinical reasoning. Med Educ 39(9):870-872.
Graber ML, Franklin N, Gordon R (2005) Diagnostic Error in Internal Medicine. Arch Intern Med 165(13):1493-1499.
Tversky A, Kahneman D (1974) Judgment under Uncertainty: Heuristics and Biases. Science 185(4157):1124-1131.
Cite as: Morgenstern, J. Availability bias. In Raz M, Pouryahya P (Eds). Decision Making in Emergency Medicine. Singapore: Springer Singapore; 2021.
2 thoughts on “Decision Making in Emergency Medicine: Availability Bias”
THANK YOU for these posts! As a paramedic overseeing a CQI process, I am often faced with figuring out how pre-hospital mistakes occur. I am fascinated by the potential role of bias in those mistakes. Besides meta-cognition as a solution – which, for me, presents an opportunity to teach EMS practitioners about the existence of cognitive bias – it seems like Crew Resource Management can often play a role in preventing some of the errors in EMS.