
Published versus unpublished (Editorial)

Good studies that illuminate the dark, small corners of evidence-based methods are pretty rare, but this month three crackers have come along at the same time. So there are no excuses for not waking up a few neurones to grasp some points that will be important not just for evidence-based medicine, but also for survival-based medicine.

The simple messages are these: poor quality studies can give the wrong result; adding unpublished studies makes no difference to the result of a review if those studies are of adequate quality; and indirect comparisons are legitimate when the studies are of high quality and there is sensible clinical homogeneity. It is all very sensible, and while these points have been obvious to many for some time, confirmation is really important.

Published versus unpublished


At issue is a philosophical point. The reason why unpublished studies were often thought to be different from published studies was that the examples chosen had unpublished studies that were small, of low quality, of low validity, or all three. Most of them would never have been included in a sensible systematic review in any case. Two good examples from our world of high quality clinical trials show that the results of unpublished and published studies are the same.

Direct versus indirect


How do we compare different treatments for the same condition? The ideal is randomised trials making a direct comparison, but these are frequently in short supply, and are often too small anyway to give us any confidence in the result. Indirect comparison with placebo or a common treatment is the answer, and a new analysis confirms that this is a legitimate way to proceed, and tells us when it is legitimate. This is going to begin to change the way evidence is presented.
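A minimal sketch of how such an indirect comparison is commonly calculated, assuming the adjusted indirect comparison approach (taking the difference of log odds ratios against the common comparator and summing their variances); the treatments, odds ratios and standard errors below are entirely hypothetical and not taken from the analysis discussed here.

import math

def indirect_comparison(or_a_vs_placebo, se_log_or_a, or_b_vs_placebo, se_log_or_b):
    # Estimate the odds ratio of treatment A versus treatment B from two
    # placebo-controlled comparisons, with an approximate 95% confidence interval.
    log_or_ab = math.log(or_a_vs_placebo) - math.log(or_b_vs_placebo)
    se_ab = math.sqrt(se_log_or_a ** 2 + se_log_or_b ** 2)
    ci_low = math.exp(log_or_ab - 1.96 * se_ab)
    ci_high = math.exp(log_or_ab + 1.96 * se_ab)
    return math.exp(log_or_ab), (ci_low, ci_high)

# Hypothetical inputs: A versus placebo OR 0.60 (SE of log OR 0.15),
# B versus placebo OR 0.75 (SE of log OR 0.20).
or_ab, ci = indirect_comparison(0.60, 0.15, 0.75, 0.20)
print(f"Indirect OR, A versus B: {or_ab:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")

The calculation only makes sense when the trials feeding each side of the comparison are of high quality and clinically similar enough to share a common comparator, which is exactly the condition the new analysis sets out.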
