Clinician gestalt is one of those terms that people either love or hate. Lovers point out that, almost every time it is studied, gestalt is at least as accurate as validated clinical decision tools. Haters lament the inclusion of gestalt in decision tools like the Wells score, worrying that it invalidates the entire process, because you can basically make the score say whatever you want. Love it or hate it, considering the frequency with which we have to make diagnostic decisions in emergency medicine, we will be grappling with gestalt for decades to come.
So how good is gestalt in the diagnosis of acute coronary syndrome? A lot of people are already talking about this paper. Honestly, I don’t find the results all that exciting. I think it basically tells us what we already knew. It shouldn’t change anyone’s practice. However, I worry that the headlines will be misinterpreted in ways that could ultimately harm our patients. So let’s take a quick look to ensure we all understand what this study really shows.
Oliver G, Reynard C, Morris N, Body R. Can emergency physician gestalt “rule in” or “rule out” acute coronary syndrome: validation in a multi-center prospective diagnostic cohort study. Academic Emergency Medicine. 2019; PMID: 31338902 [article]
This is a preplanned secondary analysis of the BEST study, a prospective multicenter observational diagnostic study from 18 hospitals in the United Kingdom.
The study enrolled adult patients with suspected cardiac chest pain that peaked within the last 12 hours.
The test being evaluated was clinician gestalt for ACS, measured on a 5-point scale: “definitely not ACS”, “probably not ACS”, “could be ACS”, “probably ACS”, and “definitely ACS”. Clinicians were not blinded to the initial ECG or troponin result when making this determination.
The primary outcome was ACS at 30 days, which included acute MI and death, but also revascularization (which, as I have discussed before, is probably not an important cardiac outcome).
There were 1613 patients included in the study. 207 (14.9%) were given a diagnosis of MI, and another 33 had MACE at 30 days, making the total “rule in” rate 17.3%.
Gestalt was reasonably well correlated with ACS. The proportion of patients who ruled in for MI rose from 5% in the “definitely not” group to 63% in the “definitely” group.
| Gestalt | Definitely not | Probably not | Could be | Probably | Definitely |
| --- | --- | --- | --- | --- | --- |
| Number with MI | 5% | 5% | 13% | 27% | 63% |
The clinical gestalt of “definitely not” was actually very accurate, with a sensitivity of 98.8%, but because of the high rule in rate in this cohort, that only translates to a negative predictive value of 95.0%. Only 4% of the total population fell into this group.
If you add the ECG and the first troponin to the clinical gestalt of “definitely not”, you get to 100% sensitivity and 100% negative predictive value, but that still only applies to 4% of the total population.
The clinical gestalt of “definitely” was also quite accurate, with a specificity of 98.5% and a positive predictive value of 71.2%.
So what do these results tell us about the value of gestalt in the workup of chest pain patients? Clearly, gestalt correlates pretty well with clinical outcomes. However, neither the positive predictive value nor the negative predictive value is good enough to “rule in” or “rule out” disease. You could simplify those results into a sound bite something like “gestalt is not helpful to either rule in or rule out ACS”. Unfortunately, that is the sound bite I think is circulating on the internet, and it is clearly wrong.
Before we get to the major reason that it would be wrong to discount gestalt based on this study, I want to mention a couple of smaller points. First of all, a sensitivity of 98.8% for “definitely not” is excellent. That is as good as almost any test we have in medicine. The negative predictive value doesn’t look great in this population, but negative predictive value depends on the prevalence of the disease being studied. With a rule in rate of 17%, this is a high risk cohort. If you replicated this exact study in North America, where rule in rates are much lower, the negative predictive value would be much better.
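This prevalence effect is easy to check with a bit of arithmetic. Here is a minimal sketch in Python; the specificity of the “definitely not” label is not reported above, so the value of about 4.8% is roughly back-calculated from the reported NPV and rule-in rate, and the 6% prevalence is just an illustrative assumption for a lower-risk North American cohort:

```python
def npv(sens: float, spec: float, prev: float) -> float:
    """Negative predictive value from test characteristics and prevalence."""
    true_neg = spec * (1 - prev)    # disease-free patients labelled negative
    false_neg = (1 - sens) * prev   # diseased patients missed by the test
    return true_neg / (true_neg + false_neg)

SENS = 0.988  # sensitivity of "definitely not ACS" (reported above)
SPEC = 0.048  # rough back-calculated specificity of that label (assumption)

print(round(npv(SENS, SPEC, 0.173), 2))  # → 0.95, this cohort's 17.3% rule-in rate
print(round(npv(SENS, SPEC, 0.06), 2))   # → 0.98, a hypothetical 6% prevalence cohort
```

Same sensitivity, same specificity; the only thing that changed is prevalence, and the NPV climbs from 95% to about 98%.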
Similarly, although 70% is not a great positive predictive value, it isn’t bad, and the specificity of 98.5% is excellent. I bet most of us would want to admit a patient who had a 70% chance of having an MI. For the purposes of emergency doctors, 70% might be good enough. (Especially considering that we probably shouldn’t be giving dangerous medications like heparin to these patients, so the risks of a false positive are relatively minor.)
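For what it’s worth, the same prevalence arithmetic works in reverse on the positive side. As a rough sketch (the 6% prevalence here is again a hypothetical lower-risk cohort, not a number from the study), you can convert the reported PPV and rule-in rate into an implied positive likelihood ratio and see what it would mean elsewhere:

```python
def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1 + o)

PREV = 0.173  # this cohort's rule-in rate (reported above)
PPV = 0.712   # reported PPV of "definitely ACS"

# Implied positive likelihood ratio: post-test odds / pre-test odds
lr_pos = odds(PPV) / odds(PREV)
print(round(lr_pos, 1))  # → 11.8

# Applying that same likelihood ratio at a hypothetical 6% prevalence
print(round(prob(odds(0.06) * lr_pos), 2))  # → 0.43
```

A likelihood ratio near 12 is a genuinely strong positive test; it is the lower prevalence, not a weaker gestalt, that would drag the PPV down in a lower-risk cohort.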
But let’s get to the meat of the issue. I worry that people will interpret the sound bite “gestalt is not good enough to rule out ACS” as meaning that all patients with chest pain need ECGs and blood work. That is clearly wrong. That is not what this study says.
When I teach critical appraisal, the first question I always ask is: “are the people in this study like my patients?” My patients, in the context of using clinical gestalt to rule out ACS, are the patients who I don’t think have ACS. Who are the patients in this study? Their inclusion criterion was “adult patients with suspected cardiac chest pain”. By definition, none of the patients included in this study could possibly have a low gestalt for ACS, because in order to get into the study, the clinician had to think you had a chance of ACS. The clinicians who ranked patients as “definitely not” having ACS were actually contradicting themselves.
To be fair, they were asked about their gestalt after the ECG and first troponin were done, so I guess it is reasonable for them to update their gestalt. Considering the information available, this study was really asking “what is your gestalt that the second troponin will come back positive or that the patient will have a MACE by 30 days?”
Can gestalt be used to rule out ACS? Of course it can. They actually did so in this study. Every patient who wasn’t included in the study because their chest pain wasn’t suspected to be cardiac in origin was ruled out by gestalt. Of course, we don’t know how accurate those judgements were. Perhaps they excluded a bunch of people with MIs. The point is, in order to answer the question “can gestalt be used to rule out ACS”, we need to look at the patients this study excluded, not those that were enrolled.
It is important to note that the authors are clear about this limitation in their discussion. (You can see some comments from the senior author on Twitter here.) My concern is more about the potential misinterpretation of this paper than with the paper itself, although I think the choice of title, and the frequency with which they talk about using gestalt to rule out ACS in a population never intended for rule out, facilitate that misinterpretation.
Will this study change my practice in any way? Probably not. I currently rule out ACS by gestalt in multiple chest pain patients every shift. I will continue to do that. If I think a patient’s chest pain is potentially cardiac, I am clearly going to get further testing. I guess the only question is whose opinion counts? Patients are often referred into the emergency department for a cardiac rule out when my gestalt is “definitely not ACS”. Because some clinician out there thought this chest pain might be cardiac, such patients may have been enrolled in this trial, and perhaps my gestalt is not good enough in those cases. (But to be honest, how often are you overruling a patient’s primary care doctor’s request for a cardiac rule out?)
Don’t believe the rumours you might hear. Clinical judgement is enough to rule out ACS for many patients in the emergency department. Just don’t contradict yourself and try to use gestalt to rule out ACS in patients where your gestalt is that ACS is a possibility. That would be silly.