Research Round-Up – August 2020


It is already August! Even in COVID times, the days just seem to fly by. If you are looking for something boring to stretch those precious minutes out, I have another collection of evidence-based medicine for you…

HALT using TXA in GI bleeds

HALT-IT Trial Collaborators. Effects of a high-dose 24-h infusion of tranexamic acid on death and thromboembolic events in patients with acute gastrointestinal bleeding (HALT-IT): an international randomised, double-blind, placebo-controlled trial. Lancet. 2020;395(10241):1927-1936. doi:10.1016/S0140-6736(20)30848-5 PMID: 32563378

Very quick summary: TXA doesn’t help in GI bleeds. This is a massive (12,009 patient) multi-centre RCT comparing TXA to placebo in GI bleeding, and there was absolutely no difference in mortality, rebleeding, surgery, need for endoscopy, or transfusion. The only difference was a hint of harm, with a doubling of venous thromboembolism from 0.4% to 0.8% (NNH 250).
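
For anyone who wants to check that NNH figure, it falls straight out of the absolute risk increase. A quick back-of-the-envelope sketch in Python (the 0.4% and 0.8% event rates are from the trial; the arithmetic is mine):

```python
# Number needed to harm (NNH) = 1 / absolute risk increase (ARI)
vte_placebo = 0.004  # 0.4% venous thromboembolism with placebo
vte_txa = 0.008      # 0.8% with TXA

ari = vte_txa - vte_placebo  # 0.004, i.e. 0.4 percentage points
nnh = 1 / ari                # 250
print(f"ARI = {ari:.1%}, NNH = {nnh:.0f}")  # ARI = 0.4%, NNH = 250
```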

Bottom line: TXA should no longer be used for patients with GI bleeding, as there is no benefit, and potentially a small increase in significant harms.

Dexamethasone for the RECOVERY

RECOVERY Collaborative Group, Horby P, Lim WS, et al. Dexamethasone in Hospitalized Patients with Covid-19 – Preliminary Report. N Engl J Med. 2020;10.1056/NEJMoa2021436. doi:10.1056/NEJMoa2021436 PMID: 32678530

Steroid use in pneumonia has been one of those topics where the evidence yo-yos back and forth over time. I will be back soon with a deep dive into the evidence for steroids in community acquired pneumonia, but for now all anyone cares about is COVID. Among a lot of really bad science and false claims in the COVID era, this is the first trial that really demonstrates a truly important improvement in outcomes. It is a pragmatic, open-label, multi-centre trial that included 6425 suspected COVID patients, 2104 of whom were given dexamethasone 6 mg daily and 4321 of whom received standard care. Mortality was lower with dexamethasone (21.6% vs 24.6%). (Those numbers are nowhere close to what the general public envisions when they hear that we have a treatment for COVID, and we don’t even know the functional status of those improved survivors.) The benefit looks biggest in the sickest patients. Conversely, there doesn’t seem to be any benefit in patients not requiring oxygen therapy. This isn’t a perfect trial – the lack of blinding is probably the biggest issue – but I think it is enough to support prescribing dexamethasone to admitted COVID patients requiring oxygen therapy.
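
To put that mortality difference in perspective, the same sort of back-of-the-envelope math gives an NNT in the low 30s. A rough sketch (these are the crude, unadjusted percentages quoted above; the trial’s primary analysis used age-adjusted rate ratios, so treat this as ballpark only):

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT)
mortality_usual_care = 0.246  # 24.6% 28-day mortality with usual care
mortality_dex = 0.216         # 21.6% with dexamethasone

arr = mortality_usual_care - mortality_dex  # ~3 percentage points
nnt = 1 / arr                               # ~33 patients treated per death averted
print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")  # ARR = 3.0%, NNT = 33
```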

Bottom line: In this unblinded RCT, dexamethasone reduced mortality in admitted COVID patients. 

You can read more here.

What to believe and when to change

Carley S, Horner D, Body R, Mackway-Jones K. Evidence-based medicine and COVID-19: what to believe and when to change. Emerg Med J. 2020;emermed-2020-210098. doi:10.1136/emermed-2020-210098 PMID: 32651176

After mentioning all the bad science and false claims we have seen in COVID, I thought I had better briefly mention this amazing paper from some friends of mine discussing evidence-based medicine in the COVID era. In medicine, we always have the tendency to overestimate the benefits of our interventions and underestimate their harms. This seems to be magnified during a pandemic, when there are strong external pressures to act quickly and with very limited information. These authors argue that the pandemic is not a time to deviate from our usual scientific practices. In fact, the high stakes and limited information mean that we need to be even more rigorous in our appraisal of the available science. There are numerous harms in the over-zealous adoption of unproven therapies like hydroxychloroquine: direct harm to patients, distraction and suboptimal use of available resources, potential loss of equipoise, and probably even the generation of further distrust in science. The authors point to the RECOVERY trial, among others, as evidence that high quality medical research is still possible during a pandemic, and must be the standard we strive for.

Really – just stop with the pink lady already

Warren J, Cooper B, Jermakoff A, Knott JC. Antacid monotherapy is more effective in relieving epigastric pain than in combination with lidocaine: a randomized double-blind clinical trial. Acad Emerg Med. 2020;10.1111/acem.14069. doi:10.1111/acem.14069 PMID: 32602148

This is a single-centre, partially blinded RCT examining the practice of adding lidocaine to an antacid for patients with dyspepsia or epigastric pain. There were three groups: antacid alone, antacid plus viscous lidocaine, and antacid plus lidocaine solution. The lidocaine was only 2%, and some people use 4%, but I don’t think it matters at all. There were no statistically significant differences in pain control, and the lidocaine groups actually looked a little worse. There were more side effects in the lidocaine groups, and patients disliked the lidocaine because it tastes awful. There are prior RCTs saying the same thing.

Bottom line: Don’t prescribe the pink lady. Adding lidocaine provides no benefit. It just adds side effects and makes patients miserable.

You can read more here.

Osteoarthritis is such a pain

Lindblad AJ, McCormack J, Korownyk CS, et al. PEER simplified decision aid: osteoarthritis treatment options in primary care. Can Fam Physician. 2020 Mar; 66(3):191-193. Available at: https://www.cfp.ca/content/66/3/191

This is a family medicine article about osteoarthritis, so it might feel a little out of place. I include it for a few reasons. First, I had no idea that SNRIs might be effective for OA, but apparently there are multiple RCTs, and they show that “Duloxetine can meaningfully reduce osteoarthritis pain scores (by at least 30%) for ~60% of patients compared to ~40% on placebo. An average pain of ~6 (scale 0-10) will be reduced by ~2.5 points, compared to 1.7 on placebo. Duloxetine adverse effects lead to withdrawal in 12% of patients versus 6% on placebo.” That quote is from the excellent Tools for Practice publication. The other reason I include this specific publication is that I think it is a beautiful way to communicate data. I wish more guidelines could adopt formats like this, with clean design that aids understanding while maintaining a reasonable tie to good science. As the decision aid is available open access, I will just include the images here so you can see what I mean.
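
If you prefer those numbers as NNTs, the responder rates in that quote work out to an NNT of about 5 for benefit against an NNH of about 17 for withdrawal due to adverse effects. A quick sketch (the rates are from the quote; the arithmetic is mine):

```python
# Benefit: responders (>=30% reduction in pain score)
nnt_benefit = 1 / (0.60 - 0.40)     # 60% respond on duloxetine vs 40% on placebo
# Harm: withdrawal due to adverse effects
nnh_withdrawal = 1 / (0.12 - 0.06)  # 12% withdraw vs 6% on placebo
print(f"NNT = {nnt_benefit:.0f}, NNH = {nnh_withdrawal:.0f}")  # NNT = 5, NNH = 17
```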

Why can’t we just have droperidol?

McCoy JJ, Aldy K, Arnall E, Petersen J. Treatment of Headache in the Emergency Department: Haloperidol in the Acute Setting (THE-HA Study): A Randomized Clinical Trial. J Emerg Med. 2020;S0736-4679(20)30349-8. doi:10.1016/j.jemermed.2020.04.018 PMID: 32402480 NCT02747511

I’ll say right off the bat, I think it is unethical to run a placebo-controlled migraine trial. There are many remaining questions, but we have good evidence that drugs like droperidol work in relieving migraines. Patients shouldn’t be left in pain just for the sake of science. The authors do say that the IRB approved the trial, but don’t comment otherwise on the ethics.

This is a single-centre RCT that randomized 118 emergency department patients, aged 13-55, with a chief complaint of migraine or headache, to either haloperidol 2.5 mg IV pushed slowly over 1-2 minutes or matching placebo. There are some significant issues with the study. Exclusions were reasonable, but would have allowed clinicians to include non-migraine benign headache types, like tension-type headaches, which are generally treated quite differently than migraines. They stopped the trial early after an unplanned interim analysis, which significantly increases the chance of bias. The study was registered, but it appears to have been registered after enrolling the vast majority of patients, completely eliminating the value of the registration. The biggest problem is their use of placebo.

Unsurprisingly, the haloperidol worked. (We already knew haloperidol worked. See, for example, the RCT we covered in June 2015.) The haloperidol group reported a 4.8/10 decrease in their pain score at 60 minutes, as compared to only 1.9/10 with placebo. 78% of the placebo group required rescue analgesia, as compared to 31% of the haloperidol group. They did ECGs on everyone, and the QT intervals were not different between the groups. 9 patients (16%) in the haloperidol group developed restlessness, and it resolved with diphenhydramine in 8 of 9 patients. Fewer patients in the haloperidol group had their symptoms return at 24 hours (33% vs 51%), and fewer returned for additional care (7% vs 18%), but there were more side effects at 24 hours, primarily restlessness (15% vs 7%). This whole exercise likely seems irrelevant to anyone who has droperidol available, but makes sense to those of us who don’t. However, despite all their efforts, I think this study was a waste of time. It doesn’t give us any valuable information. No one was treating migraine with placebo. We need to know how haloperidol compares to metoclopramide or prochlorperazine, both in terms of efficacy and side effects. Without that information, it is unclear why anyone would change their practice based on this data.

Bottom line: We could have easily guessed that haloperidol is more effective than placebo for managing migraines. Unfortunately, this study fails to provide the information we really need, by comparing it to treatments we are already using.

Connecting with our patients

Zulman DM, Haverfield MC, Shaw JG, et al. Practices to Foster Physician Presence and Connection With Patients in the Clinical Encounter [published correction appears in JAMA. 2020 Mar 17;323(11):1098]. JAMA. 2020;323(1):70–81. doi:10.1001/jama.2019.19003 PMID: 31910284

I really like this paper. They use multiple sources of data to develop recommended practices to help physicians create better connections with their patients. A lot of the recommendations seem like common sense after you read them, but I would bet that many people aren’t using them routinely, and almost everyone could find at least one or two tips if they take the time to read the full paper. Their 5 main themes are: prepare with intention, listen intently and completely, agree on what matters most, connect with the patient’s story, and explore emotional cues. If you want to read a little more about those themes, I have a full blog post here, but I think you are better off just reading the original paper. 

The lever sign for ACL tears

McQuivey KS, Christopher ZK, Chung AS, Makovicka J, Guettler J, Levasseur K. Implementing the Lever Sign in the Emergency Department: Does it Assist in Acute Anterior Cruciate Ligament Rupture Diagnosis? A Pilot Study. J Emerg Med. 2019;57(6):805-811. doi:10.1016/j.jemermed.2019.09.003 PMID: 31708315

The lever sign is a physical exam maneuver used to diagnose ACL tears that prior studies have suggested has close to 100% sensitivity. It is performed by having the patient lie supine with the knee fully extended. The examiner places a closed fist under the tibial tuberosity, and then pushes down on the distal femur (see image). If the ACL is intact, the foot will rise off the stretcher, whereas if it is torn the heel will not rise.

This is a small (45 patient) pilot study looking at the performance of the lever sign in the ED. They used a before and after study design, but for some reason they looked at the lever sign in the first half of the study. Seeing as this is the new technique, I would think it would make more sense to do it second, to limit contamination between the groups. They included patients aged 12-55 being evaluated for an acute knee injury. They excluded anyone with other pathology or prior knee injuries, as well as those who didn’t get an MRI, so the results may not extrapolate perfectly to all the patients we see. 8 patients (18%) had an MRI confirmed ACL tear. The sensitivity of the lever test was 100%, as compared to 40% with the combination of the anterior drawer/Lachman tests (obviously with massive confidence intervals). The specificity looks good, but was lower than our usual maneuvers (94% vs 100%). This is clearly not a perfect test. Depending on the quality of your follow-up, you may not need a perfectly accurate physical exam (and physical exam is always harder in the ED while the patient is in acute pain). However, when trying to determine who needs orthopedics follow-up, high sensitivity is probably more important than high specificity, so this test may have some advantages.
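
To show just how massive those confidence intervals are with only 8 ACL tears, here is a quick sketch using a Wilson score interval. The 8 tears and the ~94% specificity are from the paper; the exact 35-of-37 split for specificity is my reconstruction from the 45 enrolled patients, so treat it as illustrative:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Sensitivity: the lever sign picked up all 8 MRI-confirmed tears
lo, hi = wilson_ci(8, 8)
print(f"Sensitivity 100%, 95% CI {lo:.0%} to {hi:.0%}")  # roughly 68% to 100%

# Specificity: reported ~94%, i.e. roughly 35 of 37 intact ACLs called negative
lo, hi = wilson_ci(35, 37)
print(f"Specificity ~94%, 95% CI {lo:.0%} to {hi:.0%}")  # roughly 82% to 99%
```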

Bottom line: I will add the lever test to my tool box, but I think the most important test for knee injuries is probably still a repeat exam in a week or so.

It is hard to prevent something that almost never occurs

Tran QK, Rehan MA, Haase DJ, Matta A, Pourmand A. Prophylactic antibiotics for anterior nasal packing in emergency department: A systematic review and meta-analysis of clinically-significant infections. Am J Emerg Med. 2020;38(5):983-989. doi:10.1016/j.ajem.2019.11.037 PMID: 31839514

This is a systematic review and meta-analysis asking the often asked but eminently boring question of whether prophylactic antibiotics are required after an anterior nasal pack for epistaxis in the ED. Their primary outcome was clinically significant infections such as sinusitis, otitis media, abscess or cellulitis of the face, or toxic shock syndrome. There are only 5 studies with a total of 383 patients, so take any conclusions with a grain of salt, especially when it comes to rare diseases like toxic shock syndrome. Overall, the rate of infection after packing was 0.8%. There was no difference in the rate of clinically significant infections between the two groups (0.5% with antibiotics and 0.6% without, p=0.90). (Yes, both of those numbers are lower than the overall infection rate, which I can’t explain based on how they lay out their numbers.) The rate of infection is very low overall. As a good example of the under-reporting of harms in the literature, none of the studies broke down adverse events by group, and 1 study didn’t mention them at all. These are small observational studies, so the data is far from perfect. A single RCT would be much stronger than anything we have currently. Ultimately, with such a low infection rate and no difference between the groups, it seems pretty likely that the harms will significantly outweigh any possible benefits.
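
For what it’s worth, with events this rare, a p value of 0.90 is about what you would expect. Here is a sketch of how such a comparison is typically run, using a Fisher exact test in scipy. The group sizes and event counts are purely hypothetical, invented to roughly match the quoted rates, since the review doesn’t report the data this way:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: [infections, no infections] in each group,
# invented to roughly match the quoted ~0.5% infection rates
antibiotics = [1, 199]     # 1/200 = 0.5%
no_antibiotics = [1, 182]  # 1/183 ≈ 0.5%

odds_ratio, p_value = fisher_exact([antibiotics, no_antibiotics])
print(f"p = {p_value:.2f}")  # nowhere near statistically significant
```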

Bottom line: You don’t need to prescribe antibiotics to patients with an anterior nasal pack.

Airway checklists don’t work?!

Turner JS, Bucca AW, Propst SL, et al. Association of Checklist Use in Endotracheal Intubation With Clinically Important Outcomes: A Systematic Review and Meta-analysis. JAMA Netw Open. 2020;3(7):e209278. Published 2020 Jul 1. doi:10.1001/jamanetworkopen.2020.9278 PMID: 32614424

This is a systematic review and meta-analysis looking at the value of airway checklists, and their conclusion is that “the findings suggest that use of intubation checklists is not associated with improved clinical outcomes during and after endotracheal intubation.” Looking at the raw numbers, there actually seem to be reductions in esophageal intubation, hypotension, and peri-intubation cardiac arrests when checklists are used, but the trials that demonstrate those benefits are at high risk of bias. Overall, I think this review tells us that the quality of the science is pretty low, but doesn’t tell us a lot about the value of checklists. My biggest problem with this research is that not all checklists are created equal. A study demonstrating no benefit from a poorly designed checklist doesn’t tell us anything about a well designed checklist. Unfortunately, we rarely consider these issues of design in medicine. However, I think it is also important to acknowledge that checklists could have harms, especially when poorly designed, so it is important to see research demonstrating that we are helping patients.

For a great talk on airway checklists, check out My Checklists Manifesto by Michael Lauria: https://www.youtube.com/watch?v=36vgGdxKkPQ&feature=youtu.be

The problem with medical checklists

Catchpole K, Russ S. The problem with checklists. BMJ Quality & Safety 2015;24:545-549. doi:10.1136/bmjqs-2015-004431

My big takeaway from the last paper was that not all checklists are created equal. This paper explores checklists in a little more detail and does a good job explaining why they may not always be helpful as currently employed, and why the evidence in medicine is mostly lacking to date. We often focus on the simplicity of the checklists themselves, and ignore the complexity of designing systems and tasks to ensure that checklists can help. Checklists are widely used in aviation, but their introduction was accompanied by significant cockpit redesigns so that critical controls could not be easily confused, and a change in culture that acknowledged the value of checklists and aligned itself to ensure their proper use.

Many medical tasks may be too complex for effective checklists (or we may need help redesigning our checklists). Every checklist for the operation of an Airbus A319 (both routine and emergency procedures) can fit on 4 normal-sized sheets of paper. The number of items on these checklists ranges from 2 to 17, and each task is described in no more than 3 words! Have you ever seen a medical checklist that had fewer than 4 words on each line?

Different types of checklists may be important in different settings. Some tasks require an exact series of steps to be followed precisely, and so compliance with a strict checklist makes sense. Other tasks require more flexibility or teamwork, in which case strictly ensuring that tasks are checked off a list may in fact be detrimental to performance. Likewise, some checklists are designed to be used as rules, so that everyone follows them rigorously, and others are designed to be used as aids, which may be used differently in different situations. Some checklists are designed to prompt behaviour, while others are designed to be used after the fact, as a final check that all tasks were completed. Confusing one for the other is likely to impair rather than improve performance.

Ultimately, checklists may be a necessary part of safe and effective practice, but they will never be sufficient. Checklists require a baseline of competent practice, good communication, and good teamwork to be effective. “A checklist is a complex socio-technical intervention that requires careful attention to design, implementation and basic skills required for the task. Understanding and specifying these mechanisms of effect with greater precision would enable us to move beyond the moot ‘checklists do/don’t work’ commentaries… There is indeed a science to checklists. But, unless we pay attention to the more complex narrative for how they emerged in other industries, including the other changes (to culture, teamwork and design) that accompanied them, we stand little chance of appreciating that science or realising similar benefits in healthcare.”

Cheesy Joke of the Month

A pirate goes to the doctor and says, “I have moles on me back, aaarrrgghh.”

The doctor: “It’s ok, they’re benign.”

Pirate: “Count again, I think there be ten!”

Thanks to Jon for contributing this joke. I am always happy to have my email box flooded with cheesy jokes. Feel free to contribute your best.

Cite this article as:
Morgenstern, J. Research Round-Up – August 2020, First10EM, August 17, 2020. Available at:
https://doi.org/10.51684/FIRS.39436
