My transition from medical student to practicing diagnostician was marked by one key realization: doctors don’t make definitive diagnoses. Many think that we do. Our patients are certainly under that illusion. But even at the best of times, the physician’s job is to determine the probability of disease.
We all inherently understand this. It is embedded in our discharge instructions: “if anything changes, come back to the emergency department.” But why would anything change, if we already know THE diagnosis? If this question seems silly, it is because we have all internalized the uncertainty of medicine. We know that what is clearly a viral illness now could easily turn out to be early sepsis or pneumonia.
Unfortunately, the inherent uncertainty of medical diagnosis is easily obscured by disease labels. Tell a patient that he has gastroenteritis, and the diagnosis is made. Tell a doctor the same thing, and she will still re-examine the belly the next day to rule out appendicitis.
This is why emergency physicians are taught to make diagnoses like “chest pain not yet diagnosed” instead of “costochondritis”. The pain might seem inflammatory, but costochondritis just sounds too certain. We want to use terminology that conveys the inherent uncertainty of our diagnosis to the patient.
However, vague terminology like “shortness of breath NYD” is also problematic. Most of the time, I have a (highly) educated guess about the diagnosis. It would be a disservice to both the patient and the rest of the health care team for me to ignore my diagnostic training. The label “SOB NYD” helps no one. The label “congestive heart failure”, even if uncertain, helps guide the patient’s care.
We seem to be stuck between two extremes. “Shortness of breath NYD” is too vague; “congestive heart failure” too specific. Both labels mask the nuance and probability of diagnostics. I think we need a better option.
Why is this important? Imagine the last few patients you admitted to hospital with a diagnosis of “congestive heart failure”. Sometimes, the diagnosis is almost certain. They have a history of CHF, orthopnea, PND, no other respiratory conditions, B lines on ultrasound and xray, crackles, and an elevated JVP. Other patients are less clear. They might have a history of both COPD and CHF, with a combination of wheeze and crackle on exam, and non-diagnostic imaging. After a few hours, you decide that CHF is the most likely diagnosis, but you are far from certain.
Both of these patients will have the same admission diagnosis written on the chart. Both of those patients will leave the department with the same label. The nursing team will be told a patient is being admitted with CHF. The RT called at 3am will be told that both patients have CHF. The covering physician, as well as the team that assumes care the next day, will both see the diagnosis of CHF. But these two patients are not the same.
In writing this single diagnosis on the chart, all of your diagnostic expertise has been lost. For one patient, you were almost certain of the diagnosis, and you could have told the RT who calls at 3am about a deteriorating patient that CPAP or furosemide is the appropriate treatment. For the other patient, the diagnosis was uncertain, and if they deteriorate in the middle of the night, the most appropriate intervention might be a repeat physical exam and further testing. Unfortunately, that information is lost behind a single diagnostic label.
There must be a better way.
What if we used modifiers to indicate our level of certainty about a diagnosis? A patient who I am 99% sure is short of breath due to CHF would receive the diagnosis of “CHF 99%”, whereas a patient who I think might have CHF, but for whom multiple other diagnoses are still possible, might get the diagnosis “CHF 55%”.
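To make the idea concrete, here is a minimal sketch of how a charting system might carry such a modifier alongside the label. Everything here is hypothetical: the `ProvisionalDiagnosis` class, its fields, and the rendered format are illustrations of the notation, not any existing electronic record.

```python
from dataclasses import dataclass

@dataclass
class ProvisionalDiagnosis:
    """A chart entry pairing a diagnostic label with the clinician's estimated probability."""
    label: str
    probability: int  # clinician's gestalt, as a percentage (0-100)

    def __str__(self) -> str:
        # Render in the "CHF 99%" style proposed above
        return f"{self.label} {self.probability}%"

# Two patients with the same label but very different certainty
certain = ProvisionalDiagnosis("CHF", 99)
uncertain = ProvisionalDiagnosis("CHF", 55)
print(certain)    # CHF 99%
print(uncertain)  # CHF 55%
```

The point of the structure is simply that the probability travels with the label, so it is still visible to the RT at 3am or the admitting team the next morning.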
A probabilistic notation would immediately help inpatient teams. It would allow them to use our diagnostic acumen, rather than trying to read our minds or restarting the diagnostic process. It would guide care overnight, when the physician is harder to reach. Perhaps, it would even empower members of the interdisciplinary team (who spend much more time with the patients than physicians do) to voice their observations, because it is now clear to them the diagnosis is uncertain.
Likewise, a probabilistic notation would probably help our outpatient teams. I can imagine my orthopedic surgeons triaging patients based on whether I thought the patient had an “ACL tear 90%” versus a “knee effusion, ACL tear 10%”. The time frame for follow up with rheumatology might be different for “temporal arteritis 99%” versus “temporal arteritis 5%”.
I know I would love to see probabilistic notations on patients I am assessing in the emergency department. Imagine you are seeing a patient who is followed by neurology for her headaches. You are seeing her at 3am and don’t have access to her old notes, but her neurologist has told her that she has migraines. Wouldn’t it be nice to know if this was a definitive diagnosis (“migraine 100”) or a provisional diagnosis (“migraine 60”)? Similarly, I see a lot of patients for repeat antibiotics for “cellulitis”. Wouldn’t it be great to know if your colleague was certain that this was infective (“cellulitis 99”) as opposed to just being cautious in a patient with chronic venous stasis (“cellulitis 10 venous stasis 90”)?
This exercise could also make us better clinicians. To start, these probabilities would likely be notations of our subjective gestalt. However, the act of writing down a probability might cause us to question how we arrived at the number. So rather than just writing “Salter-Harris 1 fracture 25%” for a child with tenderness but normal x-rays, I might decide to look into the actual base rate of the disease. The discovery that the true rate of Salter-Harris 1 fractures based on MRI is only 3% might change my practice.1 [The possibility that none of these Salter 1 injuries are clinically important is another issue altogether.]
There is generally pressure on emergency physicians to make a diagnosis. It is very difficult to admit a patient without having a provisional diagnosis. Similarly, discharged patients want to know what you think is going on. In theory, that provisional diagnosis is fine. In theory, we understand that it is provisional and probabilistic. But in practice, provisional diagnoses quickly become permanent diagnoses.
As emergency physicians, we are frequently blamed for misdiagnoses. These get labelled as errors, but calling a change in a provisional diagnosis an error is wrong. It misrepresents what emergency medicine is about. We work with limited information. Our job is to come up with a best guess, and for the most part, we do an excellent job of it. We take limited information and transform it into a provisional diagnosis that allows us to start empiric therapy. Unfortunately, the act of transcription into the chart has a way of transforming a provisional diagnosis into the final diagnosis.
“Medicine is a science of uncertainty and an art of probability.”
Sir William Osler2
These are just some initial thoughts. I would not want this taken too far. The act of putting a number on our diagnoses might backfire and make them seem more certain than they really are. Nor should we start quibbling among ourselves over whether a diagnosis actually has a 90% or an 88% chance of being true.
My current solution is inelegant. I am hoping someone out there can suggest a better way. Whatever the solution, we need to embrace the role of probability in all medical diagnoses.
References
- Boutis K, Plint A, Stimec J. Radiograph-negative lateral ankle injuries in children: occult growth plate fracture or sprain? JAMA Pediatrics. 2016;170(1):e154114. PMID: 26747077
- Bean RB, Bean WB. Sir William Osler: Aphorisms from his Bedside Teachings and Writings. New York: H. Schuman; 1950
Morgenstern, J. Communicating diagnostic uncertainty, First10EM, October 31, 2016. Available at:
https://doi.org/10.51684/FIRS.3406
14 thoughts on “Communicating diagnostic uncertainty”
I absolutely love this idea! So many times I want to add a comment to a diagnosis (e.g. severe knee sprain, at least I think so but cannot rule out a torn cruciate ligament just yet) but that is cumbersome and awkward. Adding a number of estimated probability would be immensely helpful, especially for other people involved in the care of the same patient. I just might start doing this today 🙂 PS: I am an emergency veterinarian and your suggestion might even be more appropriate for me than for MDs because we deal with even more uncertainty due to the lack of available diagnostics.
Thanks for the comment. Absolutely agree this applies to any clinical work.
Your comment brought another thought to mind: in a lot of cases, this is a design problem. On my emergency charts, I have an entire blank page to write the history, but only a tiny box to write the “final discharge diagnosis”. There is simply no room to elaborate, so although I generally discuss the entire differential diagnosis and my degree of uncertainty with my patients, none of that information is adequately represented on the chart.
Exactly, same here.
Outstanding article. I think much of the stress in emergency medicine is an inability to accept diagnostic uncertainty.
Thank you. Rather than percentages, what do you think about putting a contributing comorbidity/problem list or a ddx list?
Thanks for the comment.
I think listing the differential diagnosis is important, but a simple list loses the diagnostic acumen of the initial physician. There are a lot of patients who have both CHF and COPD on their differential; for some I am very sure it is CHF at the time of admission, but for others I have no clue. That is important information that we currently lose.
Great post Justin!
Uncertainty is the essence of our work.
Indeed, we ask our students and fellows for probabilities; why don’t we start with ourselves?
Thanks again for your smart reflections.
I’m happy to have discovered this article.
To the annoyance of our administrative clerks, I always write qualifiers in my provisional discharge dx (or admission dx) box. For example: ‘probable CHF’ or ‘likely AECOPD, r/o PE’. I also write a list of diagnoses. For example: ‘1. AECOPD 2. possible sepsis 3. ARF 2nd likely dehydration 4. r/o lung CA’
That style isn’t for everybody, but I prefer those qualifiers when appropriate. As you stated, whatever you give as admission dx becomes almost set in stone sometimes.
Great article Justin!! I like to write an impression and plan at the bottom of my chart to give people an idea of my thought process. For example in the case of a swollen leg – Impression = Most likely venous stasis, U/S rule out DVT, no concerns re: cellulitis. That way the physician following up the imaging knows what I was thinking.
I think sometimes physicians are worried that if they document their thought process that it makes them more vulnerable if they’re wrong. I disagree. I think it shows that you are using your diagnostic skills, and if the patient presents again with a change in clinical picture it gives subsequent clinicians assessing the patient more context for why you arrived at your original diagnosis.
Thanks for the comment Val.
I agree that documenting your thought process is always a good idea, even if you turn out to be wrong. I tend to dictate my thought process at the end of my note, but there isn’t much space in the “final diagnosis” box in our paper charts, so I worry that information often gets lost as the patient moves along. (Especially as my dictated note might not be transcribed before an admitting doctor sees the patient.)
Before going into emergency medicine, I worked for the intelligence community as an analyst. They had the same problems with communicating “analytic” uncertainty there, too. I participated in numerous heated battles over how to effectively hedge our assessments. When I transitioned to academia (again, prior to going into medicine), I even supported a post-doc researcher studying this problem. In the end, after all the efforts of numerous researchers, there is still no established approach other than to fully document your thought process and rationale for the assessments you make. The challenge there is getting readers to see past the assessment and read the fine print.
There have been numerous efforts to establish a lexicon of hedge words to communicate analytic uncertainty. The intelligence community for a while used phrases like “With High Confidence, this country has this capability.” Other efforts have been made to both hedge an assessment AND assign a range of probabilistic uncertainty. For example, “Possible COPD [0, 30%]” or “Likely CHF [75, 90%].” I trialed such an approach but found that regardless of the phrase used, people would further hedge their meanings with wide intervals, e.g., “Very Likely CHF [25, 100%]”. There is a whole world of research out there describing the ways in which uncertainty SHOULD BE communicated, but whether these techniques CAN BE adopted in practice is a whole different matter.
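The hedge-lexicon idea described above can be sketched as a simple lookup that pins each phrase to a fixed probability interval, which would prevent the interval-widening problem the commenter observed. The phrases and cutoffs below are invented for illustration; no standard lexicon assigns these exact numbers.

```python
# Hypothetical lexicon mapping hedge phrases to probability intervals,
# loosely modelled on the intelligence-community conventions described above.
HEDGE_LEXICON = {
    "remote": (0, 10),
    "unlikely": (10, 30),
    "possible": (30, 60),
    "likely": (60, 85),
    "very likely": (85, 99),
    "almost certain": (99, 100),
}

def annotate(phrase: str, diagnosis: str) -> str:
    """Render a hedged diagnosis with its fixed interval, e.g. 'Likely CHF [60, 85%]'."""
    lo, hi = HEDGE_LEXICON[phrase.lower()]
    return f"{phrase.capitalize()} {diagnosis} [{lo}, {hi}%]"

print(annotate("likely", "CHF"))      # Likely CHF [60, 85%]
print(annotate("possible", "COPD"))   # Possible COPD [30, 60%]
```

Because the interval comes from the lexicon rather than the writer, a “Very likely” diagnosis can never be quietly hedged back down to [25, 100%].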
I have been away from this research area for a while, and your article has now inspired me to revisit this topic. Thanks for that. I am quite tempted to dig out my old research notes and writing to see how I can apply this to my present work.
Thanks for the comment.
It is fascinating to consider how this same problem has been tackled elsewhere. I had not thought about the intelligence community at all, but in retrospect it is pretty obvious that this is not a problem that is isolated to medicine. In fact, we might be better at discussing uncertainty than a lot of experts (I am thinking specifically about political and financial predictions right now).
I would love to see more research in this area. Please let me know if you do get anything going.