Embedding decision tools into the electronic record

A critical appraisal of a paper looking at embedding decision tools into the electronic record

Last week I picked up a chart at work. It was a 25-year-old woman who had sneezed very hard and developed some right-sided rib pain. She had waited about 2 hours to see me after being sent in from a walk-in clinic with a note asking me to “rule out PE” because the pain was, unsurprisingly, pleuritic. She had normal vital signs. She had an x-ray done at the walk-in clinic that ruled out pneumothorax. She had no history of or risk factors for VTE. She had a Wells score of 0. She was PERC negative.

When I discussed her risk and explained that no further testing was necessary because she was already below the test threshold, she was somewhat perturbed. It was a beautiful summer day, and she had just spent half her weekend in waiting rooms. She was happy to be going home, but was somewhat confused as to how two highly trained medical professionals could have such different opinions about the best course of care.

This left me thinking about variation in practice. We all know there is a huge variation in the way that doctors practice. Some people probably would have done a work-up on this patient: a D-dimer, or maybe even a CT. Others would have never sent her to the emergency department in the first place.

Such significant variation in practice can’t possibly be good for patients. This week, as part of the SGEM Hot off the Press series, Ken Milne and I spoke with Dr. Kelly Bookman about her paper in Academic Emergency Medicine, “Embedded Clinical Decision Support in Electronic Health Record Decreases Use of High-cost Imaging in the Emergency Department: EmbED study”. It is an interesting study that embedded various clinical decision rules into the electronic health record (EHR) in order to influence CT ordering. They did have a small impact, with a 6% (relative) decrease in overall CT use. You can hear all the details on the SGEM podcast.

I can hear the emergency medicine world screaming: “Please, no! Don’t add more clicks; I don’t need to spend more time at the computer to initiate necessary patient care.” Appropriate integration of these tools is incredibly important. They need to be easy to use. They need to save physicians time, not slow us down. But I have no doubt that they are the way of the future.

Instead of addressing this specific tool, or the headaches of alert fatigue, I wanted to take a moment to consider how we should be judging these tools as they make their way into emergency practice.

Before we can judge the value of interventions designed to alter the rate of CT ordering, we first need to consider what the appropriate rate of CT ordering actually is. It does no good for a computer program to decrease CT usage if less ordering results in worse patient care. In this study, they were using previously validated decision rules, which mitigates that problem to some extent, but not entirely. For example, when applying the Canadian CT head rule, should the computer program focus only on the criteria that predict the need for neurosurgical intervention, or should it also include the extra criteria that try to catch all clinically important brain injury? How should it handle the emphasis that these rules place on sensitivity over specificity, and how that influences the role of clinical judgment? (Not every 66-year-old who hits his head needs a CT scan, even though they happen to fail this specific tool.)
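To make that design question concrete, here is a minimal sketch of how the two tiers of the rule might be encoded. The criteria are paraphrased from the published rule, the function names and booleans are my own inventions, and any real EHR build would need to be checked against the original publication.

```python
# A simplified sketch of the two tiers of the Canadian CT Head Rule.
# Criteria are paraphrased for illustration only; the function names and
# inputs are invented, not taken from any real EHR implementation.

def high_risk_for_neurosurgery(gcs_at_2h: int, open_or_depressed_fracture: bool,
                               basal_skull_fracture_signs: bool,
                               vomiting_episodes: int, age: int) -> bool:
    """Criteria that predict the need for neurosurgical intervention."""
    return (gcs_at_2h < 15
            or open_or_depressed_fracture
            or basal_skull_fracture_signs
            or vomiting_episodes >= 2
            or age >= 65)

def medium_risk_for_brain_injury(amnesia_before_impact_min: int,
                                 dangerous_mechanism: bool) -> bool:
    """Extra criteria intended to catch clinically important brain injury on CT."""
    return amnesia_before_impact_min > 30 or dangerous_mechanism

# The design question from the text: does the EHR prompt fire on the first
# function alone, or on either? The answer changes how many CTs it recommends.
```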

The PERC rule, although much maligned, is an excellent example of a clinical decision tool. This isn’t because it can or should be applied in all patients. This isn’t because it necessarily decreases use of imaging. In fact, the PERC rule is frequently misused and in many settings probably increases imaging use. However, the PERC rule was derived based on a very important concept: the test threshold.

Although we don’t like to think about them, there are harms associated with every test we order. These can be direct harms, like radiation, but more often they are nebulous harms like overdiagnosis and unnecessary follow-up tests or treatment. There are harms from false positive tests. There are also harms from treatment if the test comes back positive. However, if we are testing for an important condition, such as PE, the benefits of finding and appropriately treating the PE should outweigh those harms.

The harms of testing and benefits of treating are relatively fixed. What changes for each patient is the chance that they actually have the disease. Every patient is exposed to the harms of testing, but only those with disease can see any benefit. If you tested me for PE right now, you would almost certainly be doing me harm, because I would experience the harms of the test, but as I have no symptoms at all, there is no hope of benefit.

The test threshold is a concept that tries to gather all those harms and benefits into a mathematical equation to determine the balance point. For PE, Dr. Kline calculated a test threshold of 2% when designing the PERC rule. This means that if you have a greater than 2% chance of having a PE, the potential benefit of finding that PE and treating it outweighs the various harms of testing. However, if you have less than a 2% chance of PE, the harms of testing are greater than any potential benefit you could get, even if you do have a PE. (There is a related concept, called the treatment threshold, that tells us that if you are at a high enough risk, we should just start treatment without confirmatory tests, because the harm of those tests will not be outweighed by any added benefit.)

[Figure: the test threshold]
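For readers who want to see the arithmetic, here is a minimal sketch of a Pauker–Kassirer-style testing threshold. This is not Kline's actual derivation for PE, and every number in the example is invented purely to show how the harms and benefits combine into a single pretest-probability cut-off.

```python
# A minimal sketch of a Pauker–Kassirer-style testing threshold.
# This is NOT Kline's actual derivation for PE; the inputs below are
# invented solely to illustrate how harms and benefits combine into a
# single pretest-probability cut-off.

def test_threshold(sensitivity: float, specificity: float,
                   benefit_rx: float,  # net benefit of treating a true case
                   harm_rx: float,     # net harm of treating a false positive
                   harm_test: float) -> float:
    """Pretest probability below which testing causes more expected harm than benefit."""
    false_positive_rate = 1 - specificity
    numerator = false_positive_rate * harm_rx + harm_test
    denominator = false_positive_rate * harm_rx + sensitivity * benefit_rx
    return numerator / denominator

# Illustrative (made-up) inputs on an arbitrary utility scale:
threshold = test_threshold(sensitivity=0.95, specificity=0.90,
                           benefit_rx=10.0, harm_rx=1.0, harm_test=0.1)
print(f"{threshold:.1%}")  # roughly 2%: below this, testing does net harm
```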

This has all been a very long-winded way of saying that I think embedding clinical decision rules into an EHR is a great idea, but only if we truly understand the clinical implications of testing. It is not good enough to claim that a rule is 100% sensitive. We also have to show that it is specific enough not to be causing harm. For all our decisions, we need to understand test thresholds.

I don’t want to imply that the process of making a diagnosis should be entirely mathematical. The danger of embedding clinical decision tools in the EHR is that it could turn clinicians into robots. Slavishly adhering to guidelines negates the wealth of clinical experience that the physician draws upon when making decisions. This judgment is important. To date, I am not aware of any clinical decision tool that has been demonstrated to routinely outperform clinical judgment. And even with the best of our rules, we have to understand that probability does not capture outliers in the way that an astute clinician can.

That being said, we too often use clinical judgment as an excuse to order more tests. We know that we are straying from well-justified guidelines, but we claim that is the art of medicine. It is important to be able to do this occasionally, but it is also essential not to stray routinely. The test threshold reminds us that tests can harm our patients as much as, or even more than, a missed diagnosis. The real question is how to design clinical decision systems that allow judgment to override statistics occasionally, without letting our flawed and biased human brains get too out of control.

I think a better understanding of the test threshold can help us here as well. When integrating clinical decision rules into the EHR, I would not display the result as a binary: based on this rule you should or shouldn’t perform a test. I would use the data to present the physician (and patient) with a pretest probability and display that right next to the calculated test threshold for the relevant disease process.

That way, if the probability of PE based on a validated rule is 2.1%, but the physician, based on years of clinical experience, thinks that this patient isn’t well represented by the statistics and actually has a lower chance of PE, there is room for clinical discussion and judgment. It makes sense to apply judgment when you are working with probabilities close to the test threshold. However, if a clinician thinks a patient is low risk, but a validated tool indicates a 30% chance of PE, that would provide the clinician with a chance to readjust her gestalt, realizing that there are times when statistics are more powerful than one individual’s grey matter.
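As a rough illustration of what that could look like, here is a hypothetical sketch of a decision-support prompt that shows the probability next to the threshold rather than a yes/no verdict. The class, wording, and numbers are all invented; nothing here comes from the EmbED study or any real EHR product.

```python
# A hypothetical sketch of a decision-support prompt that shows the pretest
# probability next to the test threshold instead of a binary recommendation.
# Everything here (class name, wording, numbers) is invented for illustration.

from dataclasses import dataclass

@dataclass
class DecisionSupportPrompt:
    condition: str
    pretest_probability: float  # e.g. from a validated rule such as Wells
    test_threshold: float       # e.g. roughly 0.02 for PE

    def render(self) -> str:
        side = "above" if self.pretest_probability > self.test_threshold else "below"
        return (f"Estimated probability of {self.condition}: "
                f"{self.pretest_probability:.1%}, which is {side} the "
                f"{self.test_threshold:.0%} test threshold. Values close to the "
                "threshold are a reasonable place for clinical judgment to weigh in.")

print(DecisionSupportPrompt("pulmonary embolism", 0.021, 0.02).render())
```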

Clearly, we need to address the issue of variation in medical practice. The care you receive should not vary drastically based on the physician you are randomly assigned that day. However, we need to be wary of overly simplistic tools that simply cannot compete with the expertise of an experienced clinician. I am sure we will see clinical decision tools integrated into EHRs more frequently in the future. To see this done appropriately, we need to consider patient important clinical outcomes, and we all need to understand the concept of the test threshold.


Although we like these rules because they seem to provide certainty, medicine is in fact full of diagnostic uncertainty. For a discussion of how we should communicate that diagnostic uncertainty, see this post.

Cite this article as:
Morgenstern, J. Embedding decision tools into the electronic record, First10EM, July 18, 2017. Available at:
https://doi.org/10.51684/FIRS.4817
