
Rules for bullshit detection

Manure can vary from a lot of straw with a little ordure to a lot of ordure with a little straw. Bandolier offers this evolving checklist for sniffing out the good from the bad in single trials of effectiveness.

  1. Is the trial randomised and double-blind? If not, why am I reading it?
  2. How many patients in each treatment group? The smaller the number, the less credible the result.
  3. How big is the effect? If the differences are small and merely statistically significant, forget it; the first sketch after this list shows why.
  4. Context. Are the patients in the trial like yours?
  5. Is there any pre-existing biology to explain the effect? This is the Bayesian drift, worked through in the second sketch after this list.
  6. Connoisseurs of manure will always avoid post-hoc sub-group analysis; the simulation after this list shows why.
  7. Goal post moving should worry you. Examples are failure of treatment being explained by not treating the disease early enough, or protagonists of an intervention with lots of adverse effects arguing that of course they have now improved their technique.
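
Points 2 and 3 are easy to feel with a little arithmetic. The sketch below, in Python with hypothetical numbers (no particular trial is meant), shows how the 95% confidence interval around a 60% response rate narrows as a trial grows, and how a clinically trivial two-point difference becomes "statistically significant" once the trial is big enough.

    # A minimal sketch for points 2 and 3; all figures are hypothetical.
    from math import sqrt
    from scipy.stats import norm

    def wald_ci(successes, n, z=1.96):
        # 95% Wald confidence interval for a response rate
        p = successes / n
        half = z * sqrt(p * (1 - p) / n)
        return p - half, p + half

    # Point 2: the same 60% response rate at three trial sizes.
    for n in (10, 50, 500):
        lo, hi = wald_ci(round(0.6 * n), n)
        print(f"n={n:3d}: 60% response, 95% CI {lo:.2f} to {hi:.2f}")

    # Point 3: a trivial 52% vs 50% difference, tested at growing sizes.
    def two_prop_p(p1, p2, n_per_group):
        pooled = (p1 + p2) / 2
        se = sqrt(2 * pooled * (1 - pooled) / n_per_group)
        return 2 * norm.sf(abs(p1 - p2) / se)

    for n in (100, 1000, 10000):
        print(f"n={n:5d} per group: p = {two_prop_p(0.52, 0.50, n):.3f}")

Ten patients put the true response rate anywhere between 30% and 90%; and the same two-point difference that is unremarkable at 100 per group sails under p < 0.05 at 10,000 per group without becoming any more worth having.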

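Point 5, the Bayesian drift, is just Bayes' theorem at work. Treating a single trial as a diagnostic test with, say, 80% power and a 5% false-positive rate (assumed numbers), a low prior plausibility leaves the posterior surprisingly modest even after a "positive" result:

    # Hypothetical power and alpha; the priors are illustrative.
    def posterior(prior, power=0.8, alpha=0.05):
        # P(effect is real | positive trial), by Bayes' theorem
        true_pos = prior * power
        false_pos = (1 - prior) * alpha
        return true_pos / (true_pos + false_pos)

    for prior in (0.05, 0.50):
        print(f"prior plausibility {prior:.0%} -> posterior {posterior(prior):.0%}")
    # prior 5%  -> posterior ~46%: a coin toss, despite "p < 0.05"
    # prior 50% -> posterior ~94%: the same result is far more convincing

With no biology behind the effect, one positive trial takes you from 5% to roughly even odds, which is why implausible claims need more than a single significant result.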

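Point 6 can be demonstrated by simulation. Slice a trial in which the treatment truly does nothing into enough post-hoc subgroups and some will cross p < 0.05 by chance alone. A minimal sketch with made-up data, where random halves of the patients stand in for "men over 60", "non-smokers", and so on:

    # Post-hoc subgroup analysis of a null trial; all data are random.
    import random
    from scipy.stats import ttest_ind

    random.seed(1)
    n = 200  # patients per arm; the treatment effect is exactly zero
    treatment = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]

    print("whole trial: p =", round(ttest_ind(treatment, control).pvalue, 3))

    # Twenty arbitrary post-hoc subgroups: random halves of the patients.
    hits = 0
    for i in range(20):
        idx = random.sample(range(n), n // 2)
        p = ttest_ind([treatment[j] for j in idx],
                      [control[j] for j in idx]).pvalue
        if p < 0.05:
            hits += 1
            print(f"subgroup {i:2d}: p = {p:.3f}  <- 'significant' by chance")
    print(hits, "of 20 null subgroups crossed p < 0.05")

At a 5% false-positive rate, about one in twenty null subgroups should come up "significant", which is exactly what the connoisseur expects to find in the straw.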
