Why pretest probability is absolutely essential

You can’t interpret the results of a test without knowing the pretest probability. 

I am sure we have all heard the same lecture about screening tests. I am sure that we have all been surprised by the strange numbers that result from applying seemingly excellent tests in low risk populations. I am sure that we all know that we shouldn’t use pregnancy tests on men.

But those classroom examples are too easily forgotten during busy emergency shifts. We order hundreds of tests every hour (if you consider each lab test separately), and we simply don’t have time to struggle through Bayes’ formula for each one.

Most of the time we get by. The math works out without being acknowledged, or we ignore test results (such as erroneous white blood cell counts) without formally acknowledging the Bayesian explanation for them being wrong. But sometimes we get this incredibly wrong. Sometimes this hurts our patients.

So it is important to be reminded: you can’t interpret test results without knowing the pretest probability.

An example: Screening tests 

The most surprising results of this principle come from screening. I will use a theoretical example lifted from the excellent textbook “Cognitive Errors and Diagnostic Mistakes” by Jonathan Howard. (Howard 2019)

Imagine a new CT scan that never misses a case of breast cancer (it is 100% sensitive), but results in a false positive reading in 5% of healthy women (it is 95% specific). It is a fantastic test – more accurate than most that we use. We would like to use it to detect breast cancer early, as part of a screening protocol. In women under the age of 50, the rate of breast cancer is 1 in 1,000. If Robin, a 45-year-old woman, has a positive test, what is the likelihood that she has cancer? (Test yourself – make a guess now.)

It sounds like Robin has a pretty high chance of cancer. After all, a very accurate test says she has cancer. But let’s do the math. In a sample of 1,000 women, we expect 1 to have cancer. The CT is perfect and identifies the one woman with cancer. However, the 5% false positive rate means that, of the 999 women without cancer, about 50 will be given false positive results. There are 51 positive tests and only 1 true case of cancer. Therefore, Robin’s chance of having cancer, despite the positive CT, is 1/51, or about 2%.

A positive result on a very accurate test, and there is still only a 2% chance the patient has the disease?!

Test results, especially those from high tech tests like CTs and MRIs, are too often treated as perfect. We simply accept the results as “the diagnosis”, but the case of Robin is an excellent reminder of the fallibility of our tests. Even if the CT were 99% specific, the posttest probability would still be only about 10%. That is surprising. We don’t expect accurate tests to be wrong more often than they are right.
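If you want to check the arithmetic yourself, here is a minimal Python sketch of the same Bayesian calculation (the function name and numbers are just illustrations of the example above, not anything from the original paper):

```python
def positive_predictive_value(pretest, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_positives = pretest * sensitivity
    false_positives = (1 - pretest) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Robin: 1 in 1,000 pretest probability, 100% sensitive, 95% specific CT
print(positive_predictive_value(0.001, 1.00, 0.95))  # ~0.02 -> about a 2% chance of cancer

# The same pretest probability, but with a 99% specific CT
print(positive_predictive_value(0.001, 1.00, 0.99))  # ~0.09 -> still only about 10%
```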

You might argue that 1 in 1,000 is a very low pretest probability. In emergency medicine, we are looking after symptomatic patients with a higher baseline incidence of disease. (Unfortunately, if you consider our use of stress tests, I think you will find this assumption to be incorrect.) For that reason, I think the follow-up example is even more interesting. Suppose we apply the same CT to a 70-year-old woman, who has a 10% pretest probability of disease. In a group of 1,000 patients, 100 will now have breast cancer, and the CT will identify them all. Of the 900 healthy women, 45 will have positive CTs. So the results are much better. If you have a positive CT, you have a 69% (100/145) chance of having cancer. However, even in a scenario with a moderate pretest probability and a very accurate test (much better than most we use in emergency medicine), there is still about a 30% chance that this is a false positive!
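The same sketch, re-run with a 10% pretest probability (again, these are just the illustrative numbers from the example above):

```python
# 70-year-old with a 10% pretest probability; CT is 100% sensitive, 95% specific
pretest, sensitivity, specificity = 0.10, 1.00, 0.95

true_positives = pretest * sensitivity                # 0.100 (all 100 cancers per 1,000 women)
false_positives = (1 - pretest) * (1 - specificity)   # 0.045 (45 false positives per 1,000 women)

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # ~0.69 -> still roughly a 1 in 3 chance the positive CT is a false positive
```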

How does this apply to emergency medicine?

Tests need to be interpreted (or better yet ordered) after consideration of the pretest probability. 

I frequently hear stories of “great catches”: doctors who ordered a CTPA despite the patient being low risk for PE and PERC negative. Lo and behold, the CT is positive. The doctor brags widely about this great save, residents are taught about the fallibility of the PERC rule, and ultimately more CTs are ordered.

You can probably see where this is going. Let’s do the math. After an appropriate patient is ruled out by the PERC score, she has approximately a 1.4% chance of PE. (Kline 2004) A CT pulmonary angiogram is a pretty good test, although I have previously discussed data demonstrating that radiologists often disagree about the final read. (Miller 2015) The best data we have probably comes from the PIOPED II study, which found that CTPA has an 83% sensitivity and 96% specificity when compared to traditional pulmonary angiography. (Stein 2006) CT technology has changed since PIOPED II, so the sensitivity is almost certainly better (but I had a very hard time finding a modern estimate). For the sake of our calculations, I will just assume a 95% sensitivity.

Thus, for every 1000 low risk PERC negative patients seen in the ED, there will be 14 PEs. CT will catch 13 of these 14 patients. For the remaining 986 patients, CT will be falsely positive in 39. Therefore, the CT will be positive in 52 total patients, but only 13 of these patients (25%) actually have a PE.
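Here is the same back-of-the-envelope calculation in Python (remember that the 95% sensitivity is the assumption described above, not a measured number):

```python
# 1,000 low risk, PERC negative patients; CTPA assumed 95% sensitive, 96% specific (PIOPED II)
patients = 1000
pretest = 0.014        # post-PERC probability of PE (Kline 2004)
sensitivity = 0.95     # assumption, as discussed above
specificity = 0.96     # PIOPED II

with_pe = patients * pretest                                 # 14 patients actually have a PE
true_positives = with_pe * sensitivity                       # ~13 caught by CT
false_positives = (patients - with_pe) * (1 - specificity)   # ~39 false positive CTs

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # ~0.25 -> only about 1 in 4 positive CTs represents a true PE
```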

So when a colleague brags about finding a PE in a low risk, PERC negative patient, there is a 75% chance they are wrong. There is a 75% chance that the patient has been given unnecessary anticoagulation. A 75% chance that, although the CT was a false positive, the patient will rush to the emergency department for any chest pain or shortness of breath for the rest of her life, getting many more tests (and potentially more false positives). In other words, there is a 75% chance we are hurting this patient.

You can’t interpret the results of a test without knowing the pretest probability.

This is true of all of our tests, whether you are ordering a CT, a blood count, an x-ray, or an ECG. You can’t interpret the results of a test without knowing the pretest probability. Trying to do so will hurt your patients.

References

Howard J. Cognitive Errors and Diagnostic Mistakes. Springer International Publishing; 2019.

Kline JA, Mitchell AM, Kabrhel C, Richman PB, Courtney DM. Clinical criteria to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism. J Thromb Haemost. 2004;2(8):1247-55. [pubmed]

Miller WT, Marinari LA, Barbosa E, et al. Small Pulmonary Artery Defects Are Not Reliable Indicators of Pulmonary Embolism. Ann Am Thorac Soc. 2015. PMID: 25961445

Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006;354(22):2317-27. [pubmed]

Photo by Crissy Jarvis on Unsplash

Cite this article as:
Morgenstern, J. Why pretest probability is absolutely essential, First10EM, October 15, 2019. Available at:
https://doi.org/10.51684/FIRS.9601
