This is bound to be a controversial topic. The conclusion of this systematic review and meta-analysis is that “the findings suggest that use of intubation checklists is not associated with improved clinical outcomes during and after endotracheal intubation.” The FOAMed world has a lot of checklist advocates (myself included, although perhaps not as fervent as some), so that is unlikely to be a popular conclusion. However, we don’t look to science to be popular. We look to science to ensure we are right. So has our adoption of checklists been overzealous? Let’s explore what this study really tells us…
Turner JS, Bucca AW, Propst SL, et al. Association of Checklist Use in Endotracheal Intubation With Clinically Important Outcomes: A Systematic Review and Meta-analysis. JAMA Netw Open. 2020;3(7):e209278. Published 2020 Jul 1. doi:10.1001/jamanetworkopen.2020.9278 PMID: 32614424
This is a systematic review and meta-analysis. They included any study that evaluated an airway checklist, regardless of the content of that checklist, and had a comparator group without checklist use. They excluded simulation studies. Their primary outcome of interest was mortality, but they also looked at hypoxia, hypotension, first-pass success rates, time to intubation, peri-intubation arrest, esophageal intubation, and hospital length of stay.
They identified 11 studies that fit their criteria, encompassing 3261 patients undergoing endotracheal intubation. 7 studies took place in the emergency department, 3 in the ICU, and 1 in both the ICU and OR. There was 1 RCT (rated at high risk of bias, primarily because it was unblinded), 8 before-and-after observational studies, and 2 case series. In 5 studies, there were significant co-interventions, such as equipment changes or new education, that took place at the same time as the checklist was introduced.
There was no change in mortality (RR 0.97, 95% CI 0.80-1.18). Overall mortality was pretty high, at 11.3%.
For the majority of the secondary outcomes, there were no statistical differences, but the point estimates were on the side of checklists being better:
- Esophageal intubation: RR 0.65, 95% CI 0.30-1.41
- Hypotension: RR 0.68, 95% CI 0.38-1.24
- Peri-intubation cardiac arrest: RR 0.65, 95% CI 0.31-1.36
However, any hint of an association disappears in a sensitivity analysis that only included studies with a low risk of bias.
The use of a checklist was associated with a decrease in hypoxia (RR 0.75, 95% CI 0.59-0.95). Once again, this difference disappears if you only look at the studies at low risk of bias.
There was no change in first pass success (RR 1.05, 95% CI 0.96-1.14).
When they broke it down by the location of the study, they did find a statistically significant association with decreased hypoxia and esophageal intubation in the emergency department, but no differences in the ICU.
So should we throw out our checklists? I don’t think this data tells us either way. This is certainly not strong evidence that checklists help, but it is also not strong enough evidence to abandon them.
The quality of evidence is low. The total numbers involved are bigger than I would have guessed, but the confidence intervals are still pretty huge. (A 35% reduction in peri-intubation cardiac arrest would certainly be clinically significant, despite being statistically insignificant here.) In more than half the studies, other changes were introduced at the same time as the checklist, which could have huge impacts. For example, the first time an intubation checklist was introduced at my emergency department, we were preparing for COVID. In that context, a before-and-after study would tell us almost nothing about checklists. Although other examples are likely to be less extreme, the introduction of new procedures and equipment concomitantly with the checklists adds a significant confounder.
Although you might be tempted to look at the point estimates and assume that checklists will be proven to help if we get more data, those differences all disappear when you exclude the most biased studies. Even without that sensitivity analysis, you have to assume this data set is significantly biased. Before-and-after studies are notorious for showing benefits that don’t truly exist, simply because the participants know they are being watched (the Hawthorne effect). Additionally, the people running intubation checklist studies are likely to be the biggest proponents of checklists, so I would expect these studies to be biased towards showing a benefit.
Although improving mortality (and neurologic outcomes) is our ultimate goal, some might argue that it is an inappropriate primary outcome when looking at intubation checklists. It is rare for an intubation to directly cause death, which means it will be even more rare that an intubation checklist will have the opportunity to save a life. The number of participants required to show a reduction in mortality from an intubation checklist would be absolutely astronomical – much bigger than our famous TXA studies like CRASH-2. Thus, I would be happy with a study that demonstrates an improvement in a surrogate outcome, like hypoxia, hypotension, or peri-intubation arrest, especially considering the low cost and minimal harms of the intervention.
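To put “astronomical” in rough numbers, here is a sketch of a standard two-proportion sample size calculation. The baseline mortality of 11.3% comes from this review; the 80% power, two-sided alpha of 0.05, and the candidate relative reductions are my own assumptions for illustration:

```python
import math

def n_per_group(p1, relative_reduction, z_alpha=1.96, z_beta=0.8416):
    """Approximate patients needed per arm to detect a relative reduction
    in mortality, using the standard two-proportion formula.
    Defaults: two-sided alpha = 0.05, power = 80%."""
    p2 = p1 * (1 - relative_reduction)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

baseline = 0.113  # overall mortality reported in the meta-analysis
for rr in (0.10, 0.05):
    n = n_per_group(baseline, rr)
    print(f"{rr:.0%} relative reduction: ~{n} per arm ({2 * n} total)")
```

With these assumptions, even a fairly generous 10% relative reduction would need roughly 12,000 patients per arm, and a more plausible 5% reduction would need on the order of 48,000 per arm – close to 100,000 patients total, several times the size of CRASH-2.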
That being said, I don’t think we should downplay the potential harms of checklists. They seem like they should be inherently good, and I have been a big advocate, but unintended consequences are common in medicine, so it makes sense to study their implementation. (They are not parachutes.) Most of the evidence for checklists comes from time-insensitive settings, like the OR before elective procedures. The added time it takes to run through a checklist may be detrimental with a critically ill patient, and a bad checklist may add unnecessary steps or suggest non-evidence-based interventions in the middle of a resuscitation.
Which leads me to my biggest issue: not all checklists are created equal. COVID has been an amazing example of this. I saw hundreds of resuscitation and intubation checklists circulated, but few that I would endorse. Many had too much information to be clinically useful. Many mixed evidence-based recommendations with questionable concepts. This variability in quality has a huge impact on the results of this review. A meta-analysis telling us that bad checklists don’t help tells us nothing about good checklists. To be clear: I don’t know that that happened here. This review doesn’t include any examples of the checklists studied, so you will have to go to the original studies if you want to assess their quality. The point is that you can’t just study one checklist and assume the results extrapolate to all checklists.
So at the end of the day, I don’t think this systematic review adds a lot. If you are already successfully using a checklist, there is nothing here to suggest that you should stop. If you aren’t, there doesn’t seem to be any evidence that you are causing harm. If you have tried to use a bad checklist, you probably hate them. If you have had the pleasure of using a good checklist, you probably wonder why this is even a question.
There isn’t strong evidence either way when it comes to intubation checklists. They still seem like a good idea to me, but not all checklists are created equal, and I still think it is a good idea to study them, as unintended consequences are common in medicine.
The best talk on checklists you will find:
The First10EM airway series
The kit dump / RSI setup checklist on BroomeDocs (I really like this visual representation in theory, but I have never had the chance to use it clinically):
Turner JS, Bucca AW, Propst SL, et al. Association of Checklist Use in Endotracheal Intubation With Clinically Important Outcomes: A Systematic Review and Meta-analysis. JAMA Netw Open. 2020;3(7):e209278. Published 2020 Jul 1.
Justin Morgenstern. Intubation checklists don’t work?, First10EM, 2020. Available at: