Another month, another set of articles. Some clinically relevant. Some just thought provoking. One is more than 100 years old. Yes, I look everywhere for these papers.
The BroomeDocs podcast version can be found here: https://broomedocs.com/2024/10/first10em-journal-club-october-2024/
Clinically but not statistically significant: what do you do?
Turgeon AF, Fergusson DA, Clayton L, et al. Liberal or Restrictive Transfusion Strategy in Patients with Traumatic Brain Injury. N Engl J Med. 2024 Jun 13. doi: 10.1056/NEJMoa2404360. PMID: 38869931
This is a pragmatic, multicenter, open-label, blinded end-point, randomized trial that compared a liberal (triggered by a hemoglobin less than 100 g/L) to a restrictive (triggered at 70 g/L) transfusion strategy in 742 adult patients with traumatic brain injury. For their primary outcome, they did something I have never seen before, in that they adjusted the outcome for each patient based on their initial prognosis. I am not sure that makes sense, but ultimately it is not the main discussion point for this paper. The key point is that there was no statistical difference in patients with an unfavorable functional outcome at six months, but there was a pretty big absolute difference that would definitely be clinically relevant if real (68.4% in the liberal group and 73.5% in the restrictive group; absolute difference 5.4%, 95% CI -2.9% to 13.7%). The scientific interpretation here is easy: this is a negative trial with a hypothesis-generating result, and it clearly mandates a much bigger follow-up RCT. The hard question is what we should do while waiting for that follow-up study. On the one hand, I could argue that liberal transfusion might already be standard care. Furthermore, I don’t believe there are strictly positive and negative trials, and so this absolute benefit should push us toward liberal transfusion. On the other hand, statistics are only one aspect of a trial. This is an unblinded trial with a completely subjective outcome, which means it is at high risk of bias. We have a false positive problem in medical research, with trials tending to overestimate benefit and underestimate harms. Therefore, when a trial is statistically negative, we should probably believe it. Ultimately, I don’t think the clinical implications are as clear as many will make them out to be. I think we are going to have to use clinical judgment, carefully considering the potential harms of transfusion, as well as indicators of benefit. (Is this patient acutely or chronically anemic? Are there signs of hypoperfusion?) I think it is fair to err on the side of liberal transfusion, but I don’t think we should make it the standard of care.
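To see why a difference that big can still be “statistically negative,” here is a rough back-of-the-envelope calculation. It uses the unadjusted proportions above, assumes two roughly equal arms of the 742 patients, and uses a simple normal approximation, so it only approximates the paper’s prognosis-adjusted figures (5.4%, 95% CI -2.9% to 13.7%).

```python
from math import sqrt

# Rough, unadjusted recalculation of the trial's headline result.
# The ~371-per-arm split and the normal approximation are my assumptions,
# so this will not exactly match the paper's adjusted 5.4% (-2.9% to 13.7%).
n_liberal, n_restrictive = 371, 371      # assumed roughly equal split of 742 patients
p_liberal, p_restrictive = 0.684, 0.735  # unfavorable outcome at six months

diff = p_restrictive - p_liberal         # absolute difference favoring liberal transfusion
se = sqrt(p_liberal * (1 - p_liberal) / n_liberal +
          p_restrictive * (1 - p_restrictive) / n_restrictive)
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"Risk difference {diff:.1%}, 95% CI {low:.1%} to {high:.1%}")
# ~5.1%, CI roughly -1.4% to 11.6%: a clinically meaningful point estimate
# whose confidence interval still crosses zero.
```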
Bottom line: This large multicenter RCT comparing liberal and restrictive transfusion triggers in patients with traumatic brain injury was statistically negative, but may actually support more liberal transfusion.
Can we do better for smokers?
Pope I, Clark LV, Clark A, Ward E, Belderson P, Stirling S, Parrott S, Li J, Coats T, Bauld L, Holland R, Gentry S, Agrawal S, Bloom BM, Boyle AA, Gray AJ, Morris MG, Livingstone-Banks J, Notley C. Cessation of Smoking Trial in the Emergency Department (COSTED): a multicentre randomised controlled trial. Emerg Med J. 2024 Apr 22;41(5):276-282. doi: 10.1136/emermed-2023-213824. PMID: 38531658
I feel like emergency medicine spends a lot more time and energy on other substances. Of course, I still ask my patients about smoking, and briefly counsel everyone, but how much energy do you put into smoking as compared to something like fentanyl? (Perhaps I just feel this way because smoking seems to be far less prevalent these days.) This is a multicenter RCT of a smoking cessation program from 6 emergency departments in the United Kingdom. They looked at any adults in the emergency department (including visitors) who smoked cigarettes daily (and had a confirmed expired carbon monoxide level above 8 parts per million) and did not currently use an e-cigarette. They compared a cessation intervention undertaken face-to-face in the ED, comprising three elements – (1) brief smoking cessation advice (up to 15 min), (2) the provision of an e-cigarette starter kit plus advice on its use (up to 15 min), and (3) referral to local stop smoking services – to a control group that was simply given a pamphlet with smoking cessation resources. They enrolled 972 patients, and for their primary outcome of biochemically verified abstinence at 6 months the intervention resulted in a significant improvement (7.2% in the intervention group vs 4.1% in the control group; absolute difference 3.3%, 95% CI 0.3% to 6.3%). There were no harms noted, a claim I am often skeptical of, but one that probably makes sense in a harm reduction study. There are a few issues with the study. The control group isn’t a pure control group, as the enrollment process contained a pretty in-depth conversation about smoking, and the participants knew they would be contacted again, and re-tested, about their smoking. Feasibility in the average busy community emergency department is also a question. They had trained counselors, and the intervention group was allotted 30 minutes of counseling. I don’t have those resources, and 30 minutes is precious time in the emergency department. I think this is part of a broader question of how much public health and primary care can be accomplished in the emergency department. This very much might be more valuable than checking everyone’s tetanus status, or some of the many other non-emergent tasks we are asked to shoulder, but whenever non-urgent tasks are added it is very important to think about the opportunity cost, and whether we are actually being given appropriate resources to tackle the issue. That being said, a 3% absolute decrease in confirmed smoking at 6 months is an important difference.
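To put that difference in more familiar terms, the standard NNT = 1/ARR conversion (my arithmetic, not a number reported by the trial) gives a rough number needed to treat:

```python
# Number needed to treat implied by the reported absolute difference in
# biochemically verified abstinence (point estimate and 95% CI from above).
# This is just the standard NNT = 1/ARR conversion, not a number from the paper.
arr = 0.033
ci_low, ci_high = 0.003, 0.063

nnt = 1 / arr
nnt_best, nnt_worst = 1 / ci_high, 1 / ci_low
print(f"NNT ≈ {nnt:.0f} (anywhere from ~{nnt_best:.0f} to ~{nnt_worst:.0f} across the CI)")
# ≈ 30 people counseled per extra confirmed quitter, ranging from ~16 to ~333.
```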
Bottom line: If you have the time and resources, this RCT tells you that you probably can have a successful smoking cessation program in your emergency department.
You can read the full blog post here.
Doc, I think I’m dying
Mols EM, Haak H, Holland M, Schouten B, Ibsen S, Merten H, Christensen EF, Nanayakkara PWB, Nickel CH, Weichert I, Kellett J, Subbe CP, Kremers MNT; Safer@Home Research Consortium. Can acutely ill patients predict their outcomes? A scoping review. Emerg Med J. 2024 May 28;41(6):342-349. doi: 10.1136/emermed-2022-213000. PMID: 38238065
A commonly taught emergency medicine maxim is that “if a patient tells you that they feel like they are going to die, you should believe them.” Is that teaching evidence-based? Unfortunately, this study doesn’t quite answer that question. It is a scoping review looking at patient self-prognosis. They identified 10 relevant papers. They looked at 3 major categories of outcomes: hospital admission, general health, and life expectancy. Without reading the original papers, the first 2 categories seem somewhat ridiculous to me, as there is no real gold standard, and you are just comparing one subjective opinion against another. Hospital admission will inherently be biased, as patients have at least some degree of control (“I am too weak to go home”), but it isn’t like physicians know exactly which patients need to be in hospital and which are safe to go home. Patients agreed with physicians about admission decisions with a relatively high degree of accuracy in 2 of 3 studies, but could only predict the need for admission 64% of the time in the third. Length of stay prediction was worse, with most patients thinking they needed to stay in hospital longer than their doctors and nurses. The comparison between patient-perceived severity of injury and the injury severity score (ISS) seems even more ridiculous to me. They treat the ISS as the gold standard, and state that 68% of patients overestimate their injuries. Knowing the accuracy of decision tools in medicine, I think I would be more inclined to believe a patient about the severity and impact of their injuries than the ISS. (This is excluding the risk of mortality, which is the next question.) To really highlight how dumb these comparisons were, one study (Twibell 2015) concluded that patients were not good at predicting their risk of falling while admitted to hospital. In patients who nurses thought had a high risk of falls, 55% of patients thought they were not at high risk, and the authors have the gall to state the patients were wrong despite the fact that not a single patient fell during the entire study! Even the mortality studies were somewhat disappointing, because they compared patient prognosis to risk scores (APACHE or the Seattle Heart Failure Model) rather than, you know, just prospectively measuring death. There was one ICU study which showed that only 37% of patients who died in the ICU thought they were going to die, and that fits with lived experience, although the outcome is heavily skewed by how and when you ask the question. (Many patients are trying to remain optimistic, and will answer questions as such, but I expect they have some insight into their own mortality.)
Bottom line: I was excited to read this study, but unfortunately I don’t think it gives us any usable data. When it comes to subjective assessments, such as how badly are you hurt or how much help do you think you need, I would tend to trust patients over scores. The accuracy of patient predictions for future outcomes, like death, MI, or stroke, is an interesting research question, but really not addressed here.
Alternative bottom line: Your patients will have thoughts and expectations about their clinical course, and they might not match with yours, so you should try your best to set expectations and communicate your thoughts in clear and simple language.
Could IV antibiotics finally beat oral? (Of course not)
Nielsen AB, Holm M, Lindhard MS, et al. Oral versus intravenous empirical antibiotics in children and adolescents with uncomplicated bone and joint infections: a nationwide, randomised, controlled, non-inferiority trial in Denmark. Lancet Child Adolesc Health. 2024 Jul 15:S2352-4642(24)00133-0. doi: 10.1016/S2352-4642(24)00133-0. Epub ahead of print. PMID: 39025092
Some topics come up over and over again, and it seems somewhat repetitive or wasteful to spend so much time on them, but seeing as so many people are still using outpatient IV antibiotics despite overwhelming evidence that oral antibiotics are just as good, if not better, I will continue to cover papers as they arrive. This is an important, relatively large, multicentre RCT comparing IV and oral antibiotics in pediatric patients with bone and joint infections, but unfortunately it all sort of falls apart based on their chosen primary outcome. They wanted to see how many children had sequelae of infection at 6 months, and it turns out that nobody in either group had any ongoing sequelae. It is really hard to make any sort of scientific comment when the thing you are looking for literally never happens. That being said, nothing bad happened to these children at all, including short term outcomes like ICU use and septic shock. If the kids are going to be perfectly fine no matter what you do, why the hell would you use IV antibiotics? I will note that adverse events were higher in the oral group, but the adverse events they decided to measure were incredibly biased. They basically only looked at GI side effects, and made no mention of IV failure, needle sticks, or local skin reactions. It is a very good example of how harms data is often very biased in big clinical trials.
Bottom line: Once again, oral antibiotics are at least as good as IV, even in relatively severe infections, although this trial has more problems than previous trials showing the same thing.
Is early CT actually 100% sensitive for SAH? (Of course not)
Trainee Emergency Research Network (TERN). Subarachnoid haemorrhage in the emergency department (SHED): a prospective, observational, multicentre cohort study. Emerg Med J. 2024 Sep 12:emermed-2024-214068. doi: 10.1136/emermed-2024-214068. Epub ahead of print. PMID: 39266054
Although there were flaws with the initial study, and it was obvious that CT was never going to be 100% sensitive for subarachnoid hemorrhage, the preponderance of the data is that CT without LP was good enough for most patients (probably out to 24 hours). This is a big multicentre effort (led by trainees in the UK) to replicate the Perry data, enrolling 3663 consecutive patients from 88 emergency departments who had non-traumatic headaches that peaked in intensity within 1 hour. 89% had a CT performed, but only 24% had a CT done within 6 hours of headache onset. (That is odd to me, as almost all my potential SAH patients have their CT done within 6 hours, so it speaks to either systems issues in the UK delaying CT, or a healthier population presenting to the ED later than I am used to.) 237 patients were diagnosed with SAH, for a prevalence of 6.5%, in keeping with other studies. 88% of those SAH diagnoses were based on the initial CT, while the rest were based on LP or follow-up imaging. There were 183 patients with alternative significant pathology. Some of this was obvious on the plain CT, but at least half would require either CT angiogram or lumbar puncture to diagnose. Only 35% of patients had an LP performed. Sensitivity of plain CT for SAH was 97% within 6 hours, 95% between 6 and 18 hours, and only 75% between 18 and 24 hours. This somewhat contradicts other studies that suggest very high sensitivity all the way to 24 hours, although most have not had the additional cut point at 18 hours. The big problem with this study is the lack of a gold standard, and that is particularly true when most patients had no further testing at all (no LP or CT angiogram). They could have easily missed cases of SAH here, which would lower the sensitivity. They also comment on the specificity, but I don’t think it is appropriate to do so when you are essentially using the test under investigation as the gold standard for the diagnosis. (I.e., the CT was positive, therefore the patient has SAH, therefore the CT is very good at diagnosing SAH.) I think this data fits very well with the existing data, and suggests that CT is very sensitive out to at least 18 hours.
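To make the verification-bias point concrete, sensitivity is just TP / (TP + FN), so every missed SAH among the patients who never had an LP or CT angiogram quietly adds to the false negatives. The counts below are purely hypothetical, invented only to illustrate the direction and size of the effect:

```python
# Hypothetical illustration of verification bias: sensitivity = TP / (TP + FN).
# None of these counts come from the SHED paper.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

tp, fn = 57, 3          # hypothetical: CT-detected and CT-missed SAH in one time window
print(f"Apparent sensitivity: {sensitivity(tp, fn):.0%}")        # 95%

missed_untested = 4     # hypothetical SAH cases among patients with no LP or CT angiogram
print(f"Corrected sensitivity: {sensitivity(tp, fn + missed_untested):.0%}")  # 89%
```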
As an aside, one thing that bothers me about research is how seemingly incidental research definitions are sometimes translated into clinical practice as if they are scientifically sound. The original Perry paper looked at headaches that peaked within 1 hour, and so subsequent studies like this one also use that definition. That is good for research consistency, but it was actually a pretty arbitrary decision, and should not be taken to be the clinical definition of a ‘thunderclap’ headache.
Bottom line: In most patients, a negative CT is enough to preclude the need for further testing if you are only concerned about SAH. CT angiogram and LP remain important tests for alternative pathology, such as vascular disease, infections, and inflammatory headaches.
MagA: Making Airways Great Again?
Zouche I, Guermazi W, Grati F, Omrane M, Ketata S, Cheikhrouhou H. Intravenous magnesium sulfate improves orotracheal intubation conditions: A randomized clinical trial. Trends in Anaesthesia and Critical Care. 2024;57:101371.
It seems like, at least at some point in the history of medicine, we have tried to use magnesium as a miracle cure for basically every pathology known. If I were going to start a snake oil company, magnesium would definitely be an ingredient, and that might be enough to get the endorsement of most emergency doctors. That being said, even I had never considered using magnesium to improve intubating conditions. This study randomized 76 low risk, elective, adult patients in the operating room undergoing sedative-only intubations (with fentanyl and propofol, but no paralytic) to either 50 mg/kg of magnesium sulfate or normal saline placebo. Based on the way they write up their results, it seems like their primary outcome was intubation conditions, but they never actually declare a primary outcome, and there are many, many things that they say they measured in this study. I don’t think this trial protocol was pre-registered, so the lack of a clear primary outcome is even more concerning. I will also note that although it is a blinded trial, it was conducted entirely by 2 anesthesiologists, one of whom drew up the meds and then handed them to the other, so unblinding would have been very easy. For what it is worth, they claim that magnesium has miraculous effects on the intubating conditions. Laryngoscopy was “easy” for 97% of the magnesium group as compared to 66% of the placebo group. In fact, everything was dramatically better, with better vocal cord position (in these unparalyzed patients) and less coughing after intubation.
This trial has an absolutely fatal flaw, which should have been caught by anyone reading the protocol, before the trial even started. They are running a study looking at intubating conditions, but then allowed patients to be excluded because of an “unanticipated difficult intubation” (in addition to excluding anyone with an anticipated difficult airway). Think about how silly that is. My dowsing rod works every single time, as long as I am allowed to exclude scenarios of “unanticipated difficulty in water finding”. You can’t have the primary outcome of a trial also act as an exclusion criterion. That is crazy. Now, a patient was only declared an unanticipated difficult airway after a second attempt with a paralytic was made, and this only happened in 7 patients, so it isn’t clear that it impacted the results, but I am wary of any study with a fundamental flaw in their methodology.
Bottom line: This study really isn’t relevant, as it stands, to emergency medicine. We strongly advise against sedative-only intubations in essentially all emergency patients. Even outside of emergency medicine, the results aren’t ready for prime time, but it is an intriguing hypothesis that warrants further study.
You can read more on BroomeDocs: MagA: Making Airways Great Again?
Too tall, got cancer
Wang F, Xu X, Yang J, Min L, Liang S, Chen Y. Height and lung cancer risk: A meta-analysis of observational studies. PLoS One. 2017 Sep 26;12(9):e0185316. doi: 10.1371/journal.pone.0185316. PMID: 28949980
I am not sure why this never occurred to me before, but more cells in your body means more chance of cancer. That means that the taller you are, the higher your risk of cancer. Apparently this is very well known, and the association is seen consistently across almost all types of cancer. I included this paper as an example of many similar papers I found during a 30 minute PubMed rabbit hole. This systematic review found 16 observational studies, encompassing 4,709,101 patients, of whom 33,824 were diagnosed with lung cancer. Using a binary cut-off, taller patients had a 15% increased risk of lung cancer (RR 1.15, 95% CI 1.04-1.26). The relative risk increased by 6% (RR 1.06, 95% CI 1.03-1.09) for every 10 cm of extra height. Aside from added biomass, I can’t think of clear confounders that would explain an association between height and lung cancer. I can picture some inverse confounders, such as short people working in coal mines, or short people taking up smoking to try to look cooler than their taller counterparts. I am usually very skeptical of this kind of observational data, because confounders are so easy to overlook, which is why I spent so long on PubMed. This association is amazingly consistent across almost all types of cancer (I found papers for breast, colorectal, lung, prostate, melanoma, endometrial, ovarian, kidney, lymphoma, leukemia, and pancreatic cancer), so it seems that height is an independent risk factor for cancer. Of course, this has almost nothing to do with emergency medicine, and aside from voluntary amputations, there is not much you can do about your height, so I am not sure why anyone would want to know this, aside from the value in remaining generally curious about human biology.
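For a sense of scale, if the 6% per 10 cm figure compounds multiplicatively (a modeling assumption on my part, not something stated in the abstract), the arithmetic looks like this:

```python
# If lung cancer risk rises 6% per 10 cm of height and compounds multiplicatively
# (an assumption for illustration), larger height differences imply:
rr_per_10cm = 1.06
for delta_cm in (10, 20, 30):
    print(f"{delta_cm} cm taller: RR ≈ {rr_per_10cm ** (delta_cm / 10):.2f}")
# 10 cm ≈ 1.06, 20 cm ≈ 1.12, 30 cm ≈ 1.19
```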
Bottom line: I hope I include enough valuable clinical information in these write ups to justify my crazy tangents. (Just wait until I include the paper on the success rate of amputations among ants!) And that leads me to…
Yes, this paper is more than 100 years old
The Relationship Between Herpes Zoster, Syphilis and Chickenpox. JAMA. 2019 Nov 5;322(17):1722. doi: 10.1001/jama.2018.15583. PMID: 31688874
This, without a doubt, sets the record for the oldest paper I have ever discussed. Originally published in 1919, this is a discussion of herpes zoster and its unusual relationship with both syphilis and chickenpox, before the underlying pathophysiology was ever known. The entire article is only about 600 words long, and I think it is well worth the read, both as a marker of the logic and detective work required of physicians before our ubiquitous lab tests became available, and as a marker of how far we have come in just 100 years. It is easy to think of medicine as plateauing. It feels like we have picked all the low-hanging fruit. We have figured out antibiotics, surgical anaesthesia, and vaccines. We now expect most clinical trials to be negative. Drugs are more likely to be “me too” copies than something like aspirin. But that view is rather myopic, given that none of those things existed just 100 years ago. Medicine is still very much in its infancy. There are undoubtedly many world changing discoveries yet to come. I am fascinated and optimistic about what medicine might look like in another 100 years, let alone 250 or 500. However, to get there, I think we will need to rediscover some of the scientific curiosity displayed in this early publication. We can’t leave medical science in the hands of big corporations, whose only goal is to drive profit. We need to rediscover some of the fascination with basic pathophysiology, which drives questions like “why is zoster more common in patients with syphilis?” and “why does chickenpox often arrive in households shortly after a case of zoster?”
Bottom line: Curiosity is one of the core aptitudes of great medicine.
Morgenstern, J. The October 2024 Research Roundup, First10EM, October 14, 2024. Available at: https://doi.org/10.51684/FIRS.137838