
Mindstretcher - Checking out systematic reviews

Who guards the guards? Well, actually, we have to when it comes to systematic reviews. We can have good trials, and bad trials, and good reviews and bad reviews. We can also have any combination of these (Figure 1), with different consequences. A bad review of good trials may tell us where the literature is, or at least be a start. A good review of bad trials may tell us what characteristics a good trial should have.

Figure 1: Good and bad trials and reviews



But for the rest it is confusing, and the only defence is a little knowledge, a healthy dose of suspicion, and enough energy to rub a few neurones together. A recent HTA report [1] helps to refine some of the factors we know or suspect can give rise to bias in systematic reviews.

Study

The raw materials were meta-analyses based on comprehensive literature searches that provided sufficient data, and enough information on the techniques used, to allow the meta-analysis to be replicated. A comprehensive literature search was one that was not confined to English, that used the Cochrane Library or at least two other databases, and that gave some indication that unpublished trials had been sought.
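
That inclusion rule can be pictured as a simple predicate. The sketch below is illustrative only: the field names are invented, and the report does not describe its screening in this form.

    # Rough sketch of the 'comprehensive search' rule described above.
    # Field names and structure are invented for illustration.
    def is_comprehensive_search(languages, databases, sought_unpublished):
        beyond_english = any(lang.lower() != "english" for lang in languages)
        other_databases = [db for db in databases if db != "Cochrane Library"]
        enough_sources = "Cochrane Library" in databases or len(other_databases) >= 2
        return beyond_english and enough_sources and sought_unpublished

    # Example: searched MEDLINE and EMBASE in any language, and sought unpublished trials
    print(is_comprehensive_search(["English", "German"], ["MEDLINE", "EMBASE"], True))  # True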

Each of 159 meta-analyses was recalculated to produce a statistical outcome such as a relative risk or odds ratio. A variety of sensitivity analyses were then performed for different trial characteristics. The results were expressed as the ratio of the pooled estimate (an odds ratio, for example) in trials with one characteristic to that in trials with the other: a ratio of estimates.
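
The report gives its statistical detail elsewhere, but the core idea can be sketched: pool the trials with each characteristic separately, then divide one pooled estimate by the other. The illustration below assumes fixed-effect, inverse-variance pooling of log odds ratios for an adverse outcome (so an odds ratio below 1 favours treatment); the trial data and the pooling method are assumptions for the sake of the example, not the HTA report's own analysis.

    import math

    def pooled_odds_ratio(trials):
        # Fixed-effect, inverse-variance pooling of log odds ratios.
        # Each trial is (events on treatment, n treated, events on control, n control).
        weights, log_ors = [], []
        for a, n1, c, n2 in trials:
            b, d = n1 - a, n2 - c
            log_or = math.log((a * d) / (b * c))   # log odds ratio for one trial
            var = 1/a + 1/b + 1/c + 1/d            # approximate variance of the log odds ratio
            log_ors.append(log_or)
            weights.append(1 / var)
        pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
        return math.exp(pooled)

    # Hypothetical trials: (events treated, n treated, events control, n control)
    open_trials         = [(12, 50, 20, 50), (8, 40, 15, 40)]   # not double blind
    double_blind_trials = [(15, 50, 20, 50), (12, 40, 15, 40)]

    ratio_of_estimates = pooled_odds_ratio(open_trials) / pooled_odds_ratio(double_blind_trials)
    print(f"Ratio of estimates: {ratio_of_estimates:.2f}")
    # A value below 1 means the open trials show the more beneficial effect.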

Results

One interesting and important result was that the correlation between the statistical results in the original reports and the recalculated results was close to perfect. The analysis then went on to examine the impact of a number of important factors, summarised in Table 1.

Table 1: Ratio of estimates (95% CI) for each comparison

Inadequate or unclear concealment versus adequate concealment of allocation: 0.79 (0.70 to 0.89)
Non-English versus English: 0.84 (0.74 to 0.97)
Open versus double-blind trials: 0.88 (0.75 to 1.04)
Non-MEDLINE versus MEDLINE: 0.94 (0.82 to 1.07)
Unpublished versus published: 1.07 (0.98 to 1.15)
A ratio of estimates below 1 indicates that trials with the first-mentioned characteristic show a more beneficial treatment effect than trials with the second characteristic; a ratio above 1 indicates a less beneficial treatment effect.
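
As a worked illustration with invented numbers (not figures from the report): if trials with inadequate or unclear concealment pooled to an odds ratio of 0.60 for an adverse outcome, and adequately concealed trials pooled to 0.76, then

    \[
    \text{ratio of estimates} \;=\; \frac{\mathrm{OR}_{\text{inadequate}}}{\mathrm{OR}_{\text{adequate}}}
    \;=\; \frac{0.60}{0.76} \;\approx\; 0.79 \;<\; 1,
    \]

so, under these invented numbers, the inadequately concealed trials appear to show the more beneficial treatment effect, by about a fifth.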

Unpublished reports

There were 630 published trials and 153 unpublished trials. Unpublished reports were less likely to be double blind. Overall there was no significant difference between published and unpublished trials: the ratio of estimates was 1.07 (95% confidence interval 0.98 to 1.15). If anything, unpublished trials tended to show less beneficial treatment effects than published trials.

Language of publication

There were 485 English-language trials for analysis and 115 in other languages. Reports in languages other than English were less likely to be double blind. Overall, non-English trials showed a more beneficial effect than English-language trials, with a ratio of estimates of 0.84 (0.74 to 0.97).

Publication in non-MEDLINE journals

There were 580 trials published in journals indexed by MEDLINE, and 161 published in journals not so indexed. There was no difference in the proportion properly randomised and blinded. Overall there was no significant difference between trials from indexed and non-indexed journals, with a ratio of estimates of 0.94 (0.82 to 1.07).

Concealment of allocation

Concealment of allocation here means that the trial is not only randomised, but also that the researchers enrolling patients cannot know which treatment the next patient will receive. Adequately concealed trials will usually use central randomisation, coded drug packs or assignment envelopes, while inadequately concealed trials use alternation, date of birth or open random number tables, for instance. It combines randomisation with some elements of blinding.
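
As a rough illustration of how such categories could be coded (the scheme below simply paraphrases the examples in the paragraph above, and is not the report's own coding):

    # Illustrative classifier for allocation concealment, paraphrasing the examples above.
    ADEQUATE   = {"central randomisation", "coded drug packs", "assignment envelopes"}
    INADEQUATE = {"alternation", "date of birth", "open random number tables"}

    def concealment(method):
        method = method.strip().lower()
        if method in ADEQUATE:
            return "adequate"
        if method in INADEQUATE:
            return "inadequate"
        return "unclear"    # anything not clearly reported counts as unclear

    print(concealment("Central randomisation"))   # adequate
    print(concealment("alternation"))             # inadequate
    print(concealment("not reported"))            # unclear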

There were 118 trials with adequate concealment, and 186 in which concealment was inadequate or unclear. Those trials with inadequate or unclear concealment had a more beneficial effect (ratio of estimates of 0.79; 0.70 to 0.89).

Double blinding

There were 237 double blind trials, and 162 not double blind. Overall those trials not double blind tended to have a more beneficial effect, but the ratio of estimates of 0.88 (0.75 to 1.04) did not reach statistical significance.

Comment

There's not much here that we didn't know, but this is perhaps a larger examination of some of these issues than has been done before. It is complicated by questions such as whether the differences persist when only high-quality trials are used. The authors examined this and, by and large, the differences did persist. But the exercise is a bit circular, and it applies only to publication status, language and journal of publication.

Any analysis like this is a melange of trials of different interventions in different conditions, and sometimes the validity of the trials included in a meta-analysis can be poor. The amount of data available is also an issue. So we always need to consider a number of issues when looking at a meta-analysis (a rough version in code follows the list):
  • How good is the searching? If only English-language papers are included, some very good material could be missed. Any attempt to find unpublished material is a bonus, but only with stringent inclusion criteria.
  • Are the trials randomised, and are they randomised properly? Many reviews do not accept improperly randomised trials. If you read a review that includes poorly randomised or non-randomised studies, see whether there is a sensitivity analysis, and if there is not, put it in the bin.
  • Are the studies blind, and properly double blind to both patient and observer? If unblinded or open studies are included when blinded ones are available, don't read on.
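The checklist might be captured in code as follows. This is a sketch only; the fields and rules are our paraphrase of the bullets above, not anything prescribed by the HTA report.

    # A rough screen for a systematic review, mirroring the checklist above.
    # Field names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Review:
        searched_beyond_english: bool
        sought_unpublished: bool                   # a bonus, not a requirement
        includes_poorly_randomised_trials: bool
        has_sensitivity_analysis: bool
        open_trials_despite_blinded_ones: bool

    def worth_reading(review):
        if not review.searched_beyond_english:
            return False                           # searching too narrow
        if review.includes_poorly_randomised_trials and not review.has_sensitivity_analysis:
            return False                           # put it in the bin
        if review.open_trials_despite_blinded_ones:
            return False                           # don't read on
        return True

    print(worth_reading(Review(True, True, False, False, False)))   # True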
It really isn't that difficult, but it is always comforting that someone has burnt the midnight oil to confirm what we thought we knew.

References:

  1. M Egger et al. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technology Assessment 2003; 7(1).