
Mindstretcher - estimating relative efficacy

There has been a sort of received wisdom that it is impossible to say anything about the relative efficacy of two different interventions for the same condition unless they are compared directly in head-to-head randomised controlled trials. There will be circumstances where that may well be true, but an important new study [1] indicates that indirect comparisons are likely to be just as good in most cases.

The problem


If we have trials of treatment A versus placebo and of treatment B versus placebo, can we say anything about how good treatment A is compared with treatment B, without a trial that directly compares the two?
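
One way to state the question is in symbols. If the placebo groups behave comparably across the two trials, the placebo risk cancels on the relative scale (a sketch of the underlying assumption, not a result from the study itself):

\[
\mathrm{RR}_{A\,\mathrm{vs}\,B} \;\approx\; \frac{\mathrm{RR}_{A\,\mathrm{vs}\,\mathrm{placebo}}}{\mathrm{RR}_{B\,\mathrm{vs}\,\mathrm{placebo}}}
\]

Everything then turns on whether that cancellation is justified.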

The argument against might well take the form that these were different trials, with different randomisation, perhaps in patients with different severity of disease at baseline, conducted over different periods of time, and in which different outcomes may have been measured. There may also be reservations about the amount of information available: with smaller numbers, the possibility that the random play of chance affects the results is inevitably greater. Any or all of these, the argument goes, might invalidate indirect comparisons.

These are important and valid arguments. But if we have trials in which we know the severity is the same or very similar, conducted over the same time, using the same outcome reported in the same way, and we have large enough amounts of information, might we not then be allowed to draw some conclusions?

The study


The authors identified published meta-analyses in which two interventions had each been trialled against a common comparator and also directly against one another. There were 44 direct and indirect comparisons available for analysis from 28 meta-analyses. The relative risk for the direct comparison (A versus B) was compared with a relative risk of A versus B imputed from studies of A and of B versus the common comparator. For some comparisons an odds ratio was available, and for others a mean difference.
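
A minimal sketch of how such an imputed relative risk can be computed, using the standard adjusted indirect comparison (working on the log scale, where the variances of the two independent trials add). The function and the numbers below are illustrative, not taken from the paper:

```python
import math

def indirect_rr(rr_a_placebo, ci_a, rr_b_placebo, ci_b):
    """Adjusted indirect comparison of A versus B from trials of
    A versus placebo and B versus placebo.

    rr_*_placebo : point estimate of the relative risk
    ci_*         : (lower, upper) 95% confidence interval
    """
    # Work on the log scale, where the estimates are roughly normal
    log_rr = math.log(rr_a_placebo) - math.log(rr_b_placebo)
    # Recover each standard error from the width of its 95% CI
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * 1.96)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * 1.96)
    # Variances add because the two trials are independent
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    lower = math.exp(log_rr - 1.96 * se)
    upper = math.exp(log_rr + 1.96 * se)
    return math.exp(log_rr), (lower, upper)

# Hypothetical numbers: A halves events versus placebo, B cuts them by 30%
rr, ci = indirect_rr(0.50, (0.40, 0.62), 0.70, (0.55, 0.89))
print(f"Indirect RR of A vs B: {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

Because the two standard errors add on the log scale, the indirect confidence interval is always wider than either trial's own interval, which is why the amount of information available matters so much.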

Results


In general, relative risks were much the same whether comparisons were made directly or indirectly (Figure 1). Most results agreed in direction and statistical significance (positive, negative, or non-significant), with 32 of 44 indirect comparisons giving the same result as the direct comparison.

Figure 1: Indirect and direct comparisons


Of the 12 that were discrepant:
  • eight involved sample sizes so small as to make any conclusion suspect,
  • two involved a confidence interval moving marginally from one side of 1 to the other, changing a statistical rather than a clinical conclusion,
  • one was related to analysis of different doses,
  • one may have represented a genuinely different conclusion.

Using another calculation of discrepancy, the authors indicate that three comparisons may give different answers; of these, one related to the analysis of different doses, and dose differences could also have complicated the other two.

Comment


Here we have a clear answer to our problem: indirect comparisons usually agree with direct comparisons. The dangers come from doing something daft, which means using trials of poor quality, trials in different sorts of patients with different entry criteria, trials measuring different outcomes over different periods of time, or trials comparing different doses.

The critical factor for many will be the dose or intensity of an intervention. Many meta-analyses seem to assume that different doses of drugs, different intensities of intervention, or different clinical situations can be combined with impunity. That defies experience and logic, and is stupid.

The bottom line here is that, in the absence of very large direct comparisons, well performed meta-analyses of indirect comparisons are perfectly acceptable, but only when we compare similar interventions, in similar patients, with similar outcomes, measured over similar periods of time. If we don't have that, then mistakes could be made.

References:

  1. F Song et al. Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. BMJ 2003;326:472-476.