They said it was impossible. They said it would never be done. They said it was unethical. But in the BMJ Christmas edition, we finally have a randomized controlled trial of parachutes!
Yeh RW, Valsdottir LR, Yeh MW, et al. Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial. BMJ (Clinical research ed.). 2018; 363:k5094. PMID: 30545967 [free full text]
This is a randomized controlled trial.
Patients: A convenience sample of individuals 18 years or older, encountered while travelling on an aircraft, and eventually expanded to members of the investigative team, family, and friends. Individuals were asked if they would be willing to jump from the aircraft at its current velocity and altitude. Anyone willing to participate was enrolled in the trial.
Intervention: A parachute was worn while jumping.
Comparison: An empty backpack.
Outcome: A composite outcome of death and major traumatic injury, defined as an injury severity score greater than 15.
23 people participated (out of 92 screened).
There was no difference in the primary outcome. Mortality and major traumatic injury occurred in 0% of both groups, both at 5 minutes and 30 days.
The average height of the jump was 0.6 meters, with an average velocity of 0 km/h. None of the parachutes actually deployed.
Obviously, this trial was published as satire in the classic BMJ Christmas edition. It is a fantastic article. I wish I had been smart enough to get this done. However, the article’s impact on medicine will depend a lot on how it is used in coming years.
I think the authors’ commentary is spot on. They state that “the PARACHUTE trial satirically highlights some of the limitations of randomized controlled trials. Nevertheless, we believe that such trials remain the gold standard for the evaluation of most new treatments. The PARACHUTE trial does suggest, however, that their accurate interpretation requires more than a cursory reading of the abstract. Rather, interpretation requires a complete and critical appraisal of the study.”
I couldn’t have said it better myself. If this paper is used as a fun introduction to the nuances of research methodology, its impact can only be positive.
For example, this trial can teach us about selection bias, and the importance of paying attention to the study flow diagram. They screened 92 individuals, but 69 refused to participate. The group that refused to participate was systematically different from the group that did participate, introducing bias into the trial and limiting the generalizability of the results. (“Participants were less likely to be on a jetliner, and instead were on a biplane or helicopter [0% v 100%; P<0.001], were at a lower mean altitude [0.6 m, SD 0.1 v 9146 m, SD 2164; P<0.001], and were traveling at a slower velocity [0 km/h, SD 0 v 800 km/h, SD 124; P<0.001]”) This is why it is always important to consider the flow diagram in trials, and why I am so skeptical of trials that don’t include a flow diagram, such as every trial published to date on endovascular therapy for ischemic stroke.
This trial can also teach us about statistical power. Power calculations are based on assumptions about the patients to be recruited, and those assumptions can be wrong. Here, they assumed a 99% rate of death or major injury in the control arm, based on the assumption that participants would achieve terminal velocity after jumping from about 4000 meters. Instead, they saw a 0% rate of death or traumatic injury in the control group, and an average jump height of 0.6 meters. Clinical trials of truly effective interventions can fail when they study populations that aren’t sick enough to benefit. (This has come up a number of times in these article summaries, such as with antiemetics in emergency department patients.) On the other hand, low event rates lead to wider confidence intervals. In noninferiority trials, this can result in a false claim of “non-noninferiority” between two interventions that are actually identical, as we might have seen in the recent cricoid pressure trial.
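To see just how little a 0% observed event rate tells us in a trial this small, here is a quick illustrative sketch. It computes the exact upper confidence bound on the true event rate when zero events are seen, alongside the classic “rule of three” shortcut. The per-arm sample size of 11 is an assumption for illustration (the trial enrolled 23 participants total; I am not asserting the exact split here).

```python
import math

def zero_event_upper_bound(n, alpha=0.05):
    """Exact one-sided upper confidence bound on the true event rate
    when 0 events are observed in n participants.
    Solving (1 - p)**n = alpha for p gives p = 1 - alpha**(1/n)."""
    return 1 - alpha ** (1 / n)

# Assumed ~11 participants per arm (23 total in the trial;
# the exact per-arm split is an assumption for this sketch).
n = 11
upper = zero_event_upper_bound(n)
print(f"0 events in {n} participants: true event rate could still be up to {upper:.0%}")

# The "rule of three" shortcut approximates the same bound as 3/n.
print(f"Rule of three approximation: {3 / n:.0%}")
```

With zero events in roughly 11 participants, the data remain compatible with a true event rate of about 24% — which is exactly why a tiny trial with no events cannot distinguish a life-saving intervention from an empty backpack.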
The authors of this paper emphasize these important scientific learning points. However, I fear that this paper will be cited more often in ways that confuse or undermine science. I have written before about the parachute analogy in medicine, and the tremendous harms that come from eschewing science. The parachute analogy is frequently, and inappropriately, used to deny the need for science. I fear that this paper, although clearly satire, will be used in a similar way. It will be cited to illustrate that science doesn’t work. It will be cited to say that anecdote is better than randomized trials, because it is anecdotes and not RCTs that prove that parachutes work. Instead of a discussion of the nuance and difficulty of science, I fear this paper will be used as a blunt tool of science denial in a society that already seems to be turning its back on science.
Hopefully I am wrong. Hopefully this paper will engage clinicians in evidence based medicine. Hopefully we can learn about the complexities of science. Hopefully this paper will be used as an educational tool; as an example of how easy it is to manipulate trial results, and why it is so important that the people running trials do not have a vested interest in their results. I am hopeful, but I am also skeptical.
I am not jumping out of an airplane anytime soon. If I am forced to, I am using a parachute. I really hope this paper is used for good – as an evidence based medicine learning opportunity. However, I am very worried that this paper will be used to throw aside EBM and push unproven and potentially harmful therapies on our patients. Hopefully the scientifically literate out there will prove me wrong.