EBM bibliography updates – April 2023

Cite this article as:
Morgenstern, J. EBM bibliography updates – April 2023, First10EM, May 1, 2023. Available at:

Although incredibly nerdy and somewhat difficult to digest, I think the EBM bibliography might be the most valuable resource I have created, at least for those with any interest in learning about evidence based medicine. I continue to (slowly) update this resource. These are the new papers that I have added in the past few months:

Jureidini J, McHenry LB. The illusion of evidence based medicine. BMJ. 2022 Mar 16;376:o702. doi: 10.1136/bmj.o702. PMID: 35296456

  • A pretty scathing (but in my mind spot on) review of the corruption of science that occurs when we allow industry involvement.
  • “The release into the public domain of previously confidential pharmaceutical industry documents has given the medical community valuable insight into the degree to which industry sponsored clinical trials are misrepresented. Until this problem is corrected, evidence based medicine will remain an illusion.”
  • “Scientific progress is thwarted by the ownership of data and knowledge because industry suppresses negative trial results, fails to report adverse events, and does not share raw data with the academic research community. Patients die because of the adverse impact of commercial interests on the research agenda, universities, and regulators.”
  • “What confidence do we have in a system in which drug companies are permitted to “mark their own homework” rather than having their products tested by independent experts as part of a public regulatory system?”
  • “Our proposals for reforms include: liberation of regulators from drug company funding; taxation imposed on pharmaceutical companies to allow public funding of independent trials; and, perhaps most importantly, anonymised individual patient level trial data posted, along with study protocols, on suitably accessible websites so that third parties, self-nominated or commissioned by health technology agencies, could rigorously evaluate the methodology and trial results.”

Kennedy AG. Evaluating the Effectiveness of Diagnostic Tests. JAMA. 2022 Mar 18. doi: 10.1001/jama.2022.4463. Epub ahead of print. PMID: 35302590

  • When evaluating clinical tests, there are 3 major considerations (and we often forget the last 2):
    • Accuracy: A test must be accurate (often measured by sensitivity and specificity, although I think there are better measures). Accuracy alone is not enough to warrant a test. Many accurate tests actually lead to patient harm. Accuracy is a necessary but not sufficient criterion. 
    • Clinical utility: The test must have a measurable net positive effect on a patient’s clinical outcomes (many of our tests, like stress tests and BNP, fail at this level).
    • Patient benefit: This last criterion is very questionable, and probably should just be wrapped up into clinical utility. The authors try to distinguish a benefit to patients even when a test does not directly influence clinical treatment decisions or prognosis. The example would be identifying a cancer that has no treatment, and therefore cannot impact “clinical decisions”, but might impact a patient’s life choices. This is a bit of a slippery slope. I firmly believe that tests that will not change a patient’s management should not be ordered. However, I believe that important life decisions, such as choices about end of life care, finances, and general well being, are firmly within the clinical realm, and represent potential clinical benefit. Therefore, in my mind, clinical utility and patient benefit are probably best thought of as the same thing.
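The accuracy point can be made concrete with a little arithmetic: a test can be highly sensitive and specific and still mislead, because predictive value depends on prevalence. A minimal sketch (all numbers here are hypothetical, chosen only for illustration):

```python
# Why accuracy alone is not enough: even an excellent test applied in a
# low-prevalence population produces mostly false positives.
# All numbers are hypothetical, for illustration only.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A "95% sensitive, 95% specific" test where 1% of patients have the disease:
print(round(ppv(0.95, 0.95, 0.01), 2))  # 0.16: most positives are false
```

In other words, a seemingly excellent test applied to a low-prevalence population yields mostly false positives, which is one reason accuracy alone cannot justify ordering a test.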

Clarke M. The true meaning of DICE: don’t ignore chance effects. J R Soc Med. 2021 Dec;114(12):575-577. doi: 10.1177/01410768211064102. PMID: 34935558

  • “Chance findings affect every perfectly designed RCT. The traditional threshold of p = 0.05 will lead to ‘statistically significant’ differences with almost the same frequency as people rolling 11 with a pair of dice. The problem becomes even worse if multiple analyses are done and the one with the most striking difference, or lowest p-value, is elevated to become a key result of the trial.”
  • This paper describes the “DICE trials”, which are incredible demonstrations of problems with medical science.
  • In DICE 1, participants decided whether patients lived or died by rolling a normal 6 sided die. Obviously, the results should be the same in both the ‘treatment’ and the ‘control’ groups, but using techniques common in many meta-analyses (such as retrospectively eliminating some trials that were negative), they ended up with a conclusion that the ‘intervention’ (there wasn’t one) “showed a statistically significant decrease in the odds of death of 39% (95% CI: 60% decrease to 8% decrease, p = 0.02).”
  • DICE 2 and 3 show similar things with computer simulated models. Essentially, there are lots of statistically positive meta-analyses, even when the data are purely random.
  • They hint at a form of bias that I have not seen described before. The decision to perform a meta-analysis is not random. They are often undertaken after positive or exciting results. This simulated data shows that even a single positive study significantly increases the risk of a meta-analysis being a false positive by random chance.
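The dice analogy is easy to verify: the chance of rolling 11 with two fair dice is 2/36 ≈ 0.056, almost exactly the conventional p = 0.05 threshold, and simulating DICE-style null trials shows chance alone producing “significant” results at about that rate. A rough sketch (trial size, death rate, and simulation counts are arbitrary choices, not taken from the paper):

```python
import math
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Exact probability of rolling 11 with two fair dice: only (5,6) and (6,5)
p_eleven = 2 / 36  # about 0.056, essentially the p = 0.05 threshold

# Simulate DICE-style null "trials": both arms get identical coin-flip
# mortality, so any "significant" difference is pure chance.
# (Two-proportion z-test written out by hand to stay dependency-free.)
def null_trial(n=200, death_rate=0.5):
    a = sum(random.random() < death_rate for _ in range(n))
    b = sum(random.random() < death_rate for _ in range(n))
    pooled = (a + b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    z = abs(a - b) / n / se if se else 0.0
    return z > 1.96  # "statistically significant" at p < 0.05

false_positives = sum(null_trial() for _ in range(2000)) / 2000
print(p_eleven, false_positives)  # both hover around 0.05
```

Roughly 1 in 20 of these purely random “trials” comes out significant, before anyone starts cherry picking analyses.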

Baraldi JH, Picozzo SA, Arnold JC, Volarich K, Gionfriddo MR, Piper BJ. A cross-sectional examination of conflict-of-interest disclosures of physician-authors publishing in high-impact US medical journals. BMJ Open. 2022 Apr 11;12(4):e057598. doi: 10.1136/bmjopen-2021-057598. PMID: 35410932

  • Both JAMA and the New England Journal have clear rules on conflict of interest: you are supposed to report them. Unfortunately, these rules are not followed.
  • This study looked at 31 RCTs from each journal. The 118 total authors received a total of 7.5 million dollars from industry over the 3 year study period.
  • Of the 118 authors, 106 (90%) failed to disclose at least some of that money. 51 (48%) left more than half of their received funds undisclosed. (This is likely an underestimate, because it only accounts for funds openly reported in the Open Payments system.)
  • Even if these disclosures worked (they don’t), you can’t trust them because the authors failed to disclose significant conflicts. This is also a pathetic failure of these two supposed top tier journals, as all these conflicts are openly reported, and could be checked in seconds with a simple search.

Taheri C, Kirubarajan A, Li X, et al. Discrepancies in self-reported financial conflicts of interest disclosures by physicians: a systematic review. BMJ Open 2021;11:e045306. doi: 10.1136/bmjopen-2020-045306

  • This is a systematic review and meta-analysis looking at discrepancies in financial conflict of interest reporting. They found 40 studies that looked at this issue (each of which individually examined larger numbers of clinical guidelines, published papers, or academic meetings).
  • Discrepancies between reported conflicts and those identified through objective payment databases were very common. It depends exactly how you sort the data (one author can be on multiple papers, so it is unclear how many times you should count them), but between 80 and 90 percent of reported financial conflicts of interest were discrepant when compared to an objective database!
  • Studies with discrepancies were much more likely to report positive outcomes (odds ratio 3.21)
  • Bottom line: don’t trust declared fCOIs. They are almost always wrong. We need to get rid of this system where we allow people with financial conflicts to play such a prominent role in research.

Kataoka Y, Banno M, Tsujimoto Y, Ariie T, Taito S, Suzuki T, Oide S, Furukawa TA. Retracted randomized controlled trials were cited and not corrected in systematic reviews and clinical practice guidelines. J Clin Epidemiol. 2022 Oct;150:90-97. doi: 10.1016/j.jclinepi.2022.06.015. Epub 2022 Jun 30. PMID: 35779825.

  • There are many reasons to be cautious in your interpretation of systematic reviews and clinical practice guidelines. This paper tackles the issue of how these documents handle papers that have been retracted.
  • They identified 587 systematic reviews or guidelines that cited a study that had been retracted. 252 of these reviews were published after the retraction, meaning the authors of the review should have known the paper they were citing had been retracted. Not one of these reviews / guidelines corrected themselves after publication.
  • 335 were published before the retraction. This is a more difficult situation, as I don’t expect researchers to constantly fact check prior publications – but journals probably do have some responsibility. 11 (5%) of these publications corrected or retracted their results based on the retraction. 
  • Bad, or even fraudulent, science can make its way into systematic reviews. If you are thinking about changing practice, I think you should always read the base literature.

Haslberger M, Gestrich S, Strech D. Reporting of retrospective registration in clinical trial publications: a cross-sectional study of German trials. BMJ Open. 2023 Apr 18;13(4):e069553. doi: 10.1136/bmjopen-2022-069553. PMID: 37072362

  • Although trial registries are clearly good in theory, we have pretty good evidence that they are mostly failing in practice.
  • This study looked at a German trial registry, and found that more than half of the trials were registered retrospectively (completely eliminating the value of registration). Less than 5% mention this retrospective registration in the published manuscript.
  • The one positive finding in this study is that retrospective registration is trending down with time, from 100% in the 1990s, to only about 25% in 2017. Unfortunately, 25% is still far too high, and there are still many other problems with these registries (such as the fact that journals apparently never look at them).

Dash K, Goodacre S, Sutton L. Composite Outcomes in Clinical Prediction Modeling: Are We Trying to Predict Apples and Oranges? Ann Emerg Med. 2022 Jul;80(1):12-19. doi: 10.1016/j.annemergmed.2022.01.046. Epub 2022 Mar 24. PMID: 35339284.

  • This paper provides a nice discussion of both the benefits and problems with composite outcomes.
  • Some potential benefits:
    • Increased statistical efficiency
    • The ability to increase event rates when individual event rates are low
    • Improved research efficiency
    • You might notice that the benefits are all about getting research done cheaper or faster, but not about validity or scientific soundness.
  • Some potential harms:
    • “The construction of composite outcomes often lacks logic and is susceptible to post hoc choosing, or “cherry picking”, of favorable combinations of outcomes”. (See also section on p-hacking).
    • Benefit might be driven by the less important part of the composite. (For example, we see many studies claiming a decrease in major adverse cardiac events where there is no change in death or MI, and the only change is in revascularization.)
    • Composite outcomes often assume uniform directionality, even though the effects observed on separate components of a composite outcome may not be in the same direction.
      • Particularly bad if the qualitative value of the outcomes is different. Ie, a treatment that reduces symptoms but increases mortality might look good on composite outcomes, because the symptom signal outweighs the mortality signal.
      • Can also under-estimate benefits if the composite combines an outcome with no effect with one with a real effect.
    • Outcomes are often not patient oriented, or irrelevant outcomes are combined with important outcomes.
    • There is something called competing hazards bias, in which one outcome can influence the others. For example, if you die, you can’t possibly later develop coronary artery disease. 
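The directionality problem is easy to demonstrate numerically: a hypothetical treatment that increases death but sharply reduces a common soft endpoint like revascularization still “wins” on the composite. All event counts below are invented purely for illustration:

```python
# Hypothetical per-1000-patient event counts (invented for illustration).
# The treatment looks good on the composite even though more patients die,
# because the benefit is driven entirely by the soft endpoint.

control = {"death": 20, "mi": 30, "revascularization": 100}
treatment = {"death": 25, "mi": 30, "revascularization": 50}

composite_control = sum(control.values())      # 150 events per 1000
composite_treatment = sum(treatment.values())  # 105 events per 1000

print(composite_treatment < composite_control)  # True: composite "improved"
print(treatment["death"] > control["death"])    # True: ...but more deaths
```

A headline result of “45 fewer composite events per 1000” would hide 5 extra deaths per 1000, which is exactly why the components of a composite always need to be reported and read separately.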
