This is a collection of papers I found useful when learning how to critically appraise the medical literature.
The key questions to ask when a trial's primary outcome is positive:
- Does a P value of <0.05 provide strong enough evidence?
- What is the magnitude of the treatment benefit?
- Is the primary outcome clinically important (and internally consistent)?
- Are secondary outcomes supportive?
- Are the principal findings consistent across important subgroups?
- Is the trial large enough to be convincing?
- Was the trial stopped early?
- Do concerns about safety counterbalance positive efficacy?
- Is the efficacy–safety balance patient-specific?
- Are there flaws in trial design and conduct?
- Do the findings apply to my patients?
The sister article to the one above. Here, the authors address the key questions to ask when the primary outcome is negative:
- Is there some indication of potential benefit?
- Was the trial underpowered?
- Was the primary outcome appropriate (or accurately defined)?
- Was the population appropriate?
- Was the treatment regimen appropriate?
- Were there deficiencies in trial conduct?
- Is a claim of noninferiority of value?
- Do subgroup findings elicit positive signals?
- Do secondary outcomes reveal positive findings?
- Can alternative analyses help?
- Does more positive external evidence exist?
- Is there a strong biologic rationale that favors the treatment?
Ridgeon EE, Young PJ, Bellomo R, Mucchetti M, Lembo R, Landoni G. The Fragility Index in Multicenter Randomized Controlled Critical Care Trials. Critical Care Medicine. 44(7):1278-84. 2016. PMID: 26963326
This paper covers the fragility index, a useful metric when a trial's significant result rests on a small number of events. The fragility index is the minimum number of patients whose status would have to change from a nonevent to an event to turn a statistically significant result into a nonsignificant one.
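As a rough illustration (my sketch, not code from the paper), the fragility index can be computed by repeatedly reclassifying one patient in the arm with fewer events from "no event" to "event" and recomputing Fisher's exact test until p rises to 0.05 or above. A minimal stdlib-only Python version:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]] (events / non-events in each arm): sum the
    probabilities of all tables with the same margins that are no
    more probable than the observed table."""
    row1, n = a + b, a + b + c + d
    col1 = a + c
    def p_table(x):  # hypergeometric probability that cell 'a' equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - (c + d)), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))  # tolerance for float ties

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Minimum number of patients switched from non-event to event
    (in the arm with fewer events) before significance is lost.
    Returns 0 if the result is already non-significant."""
    if e1 > e2:                      # make arm 1 the arm with fewer events
        e1, n1, e2, n2 = e2, n2, e1, n1
    flips = 0
    while e1 < n1 and fisher_exact_p(e1, n1 - e1, e2, n2 - e2) < alpha:
        e1 += 1                      # reclassify one non-event as an event
        flips += 1
    return flips
```

For example, 1/100 versus 10/100 deaths is significant by Fisher's exact test (p ≈ 0.01), yet reclassifying just two patients erases the significance, so the fragility index is 2.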
Wasserstein RL, Lazar NA. The ASA’s Statement on P-Values: Context, Process, and Purpose. The American Statistician. 70(2):129-133. 2016. [article]
This is the American Statistical Association’s statement on p values. Some important points they make:
- P values do not prove or disprove your hypothesis. They only tell you how incompatible your data are with a specified statistical model.
- The P value does not tell you the probability that your hypothesis is true.
- Scientific conclusions and policy decisions SHOULD NOT be based solely on whether p values pass a specific threshold.
- To interpret a p value, you need complete transparency about the research: how many statistical analyses were run and how many comparisons were made. This gets at the major problem of p-hacking.
- A p value does not give you any sense of the effect size. In other words, you can have statistically significant results that are tiny and completely clinically insignificant.
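Two of these points can be made concrete with a little arithmetic (my illustration, not from the ASA statement): with enough patients, a clinically trivial 1-percentage-point difference becomes "statistically significant", and running many comparisons makes at least one false positive almost inevitable. A stdlib-only Python sketch:

```python
from math import erf, sqrt

def two_prop_p(p1, p2, n):
    """Two-sided p-value from a pooled two-proportion z-test,
    with n patients per arm (normal approximation)."""
    pooled = (p1 + p2) / 2
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(p2 - p1) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # 2 * upper tail area

# Significance without clinical importance: a 1-point difference
# (50% vs 51%) is "significant" once each arm has 100,000 patients,
# but not at 100 patients per arm.
print(f"p = {two_prop_p(0.50, 0.51, 100_000):.4f}")  # under 0.05
print(f"p = {two_prop_p(0.50, 0.51, 100):.4f}")      # nowhere near 0.05

# Multiplicity: the chance of at least one false positive among
# m independent comparisons at alpha = 0.05 is 1 - 0.95**m.
for m in (1, 5, 20):
    print(f"{m:2d} comparisons -> {1 - 0.95**m:.0%} chance of a false positive")
```

With 20 independent comparisons, the chance of at least one spurious "significant" finding is already about 64%, which is why knowing every analysis that was run matters as much as the p value itself.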