
Random lottery and placebo bounce

People rightly spend a lot of time calling for higher quality in both original research and systematic reviews. The issue of quality is not just academic babbling - it can really affect our decision-making.

A review of breast cancer screening came down to whether the trials were properly randomised or not. Bandolier thought the authors made their case that six of eight trials could not have been properly randomised. Based on the two studies that were properly randomised, screening was shown to be ineffective. Just imagine, then, seeing another review, on pneumococcal vaccination, which specifically includes both improperly randomised studies and clinically irrelevant ones to make the point that pneumococcal vaccines work.

People writing reviews should be free to include or exclude studies, perhaps for legitimate reasons, but they must say what they are doing. There should be a bottom line drawn from unbiased, relevant studies to support the conclusion; otherwise what we are left with is a lottery. As Bandolier has pointed out before, not all systematic reviews are equal. Caveat lector again.

Placebo bounce

When a patient is given a placebo and we measure an outcome, we frequently call this a placebo effect. The "placebo effect" is really just a shorthand way of saying that this is the size of the response we had with placebo. The trouble is that the shorthand is often turned around, so that causality is implied: we gave placebo, we had this effect, so placebo caused it.

Systematic reviews of placebo responses are just arriving. One, in this month's Bandolier, examines placebo responses in studies of reflux oesophagitis (though curiously only up to 1990). There is much to be learned from the variability and extent of responses when the treatment is actually doing nothing. We will look out for more.
