Mindstretcher: being indirect


Indirect method?
Back in 1997 a group from McMaster came up with a method that allows the calculation of odds ratios or relative risks for A versus B when we have only A versus C and B versus C trials [1]. Essentially, the indirect odds ratio for A versus B is the ratio of the odds ratios from the A versus C and B versus C studies (equivalently, the difference of their log odds ratios), with the variances of the log odds ratios added to give the confidence interval.
There are lots of equations, and it is not easy to get a simple brain around them. Even though the method looks sensible, much still depends on the data sets to which it might be applied. Statistics can't rescue us from inadequate or insufficient evidence. So here are three examples to expand our thinking on indirect comparisons.
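The core arithmetic is simpler than the pages of equations suggest. A minimal sketch of a Bucher-style adjusted indirect comparison (the function name and the example odds ratios and standard errors here are hypothetical, purely for illustration):

```python
import math

def indirect_or(or_ac, se_ac, or_bc, se_bc):
    """Adjusted indirect comparison of A vs B through common comparator C.

    Takes the A-vs-C and B-vs-C odds ratios with the standard errors of
    their logs; the indirect log odds ratio is the difference of the two
    log odds ratios, and their variances add.
    Returns (odds ratio, lower 95% CI, upper 95% CI).
    """
    log_or = math.log(or_ac) - math.log(or_bc)
    se = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se),
            math.exp(log_or + 1.96 * se))

# Hypothetical inputs: OR 0.5 (SE of log OR 0.2) for A vs C,
# OR 0.8 (SE 0.25) for B vs C
print(indirect_or(0.5, 0.2, 0.8, 0.25))
```

Note that the confidence interval for the indirect estimate is always wider than either of the two direct ones, because the uncertainties add.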
1 P carinii pneumonia [1]
This was the original data set on which the indirect calculation of odds ratios was based. The setting was a systematic review of antibiotic regimens for the prevention of P carinii pneumonia in patients with HIV infection. There were two experimental therapies, trimethoprim-sulphamethoxazole (TMP) and dapsone/pyrimethamine (DP), and a standard therapy of aerosolised pentamidine (AP).
Results of the trials in direct comparisons are shown in Table 1. TMP was better than AP, DP was no different from AP, and TMP was better than DP. Odds ratios calculated using the new method from indirect comparisons were close to those from direct trials. Figure 1 shows an abacus plot of the single treatment arms for TMP, DP and AP. Overall, P carinii pneumonia occurred in 5.5% (95% CI 4.4 to 6.7%) of 1484 patients taking TMP, 9.0% (7.6 to 10.4%) of 1547 patients taking DP, and 9.9% (8.3 to 11.5%) of 1331 patients taking AP.
Table 1: Information on P carinii pneumonia in direct and indirect randomised comparisons of TMP, DP and AP

Comparison | Number of trials | Treatment number/total (%) | Comparator number/total (%) | Relative risk (95% CI) | NNT (95% CI)
TMP vs AP | 9 | 26/681 (4.0) | 74/613 (12.3) | 0.35 (0.23 to 0.53) | 12 (9 to 19)
DP vs AP | 5 | 51/732 (7.0) | 58/718 (8.1) | 0.89 (0.62 to 1.27) | N/A
TMP vs DP | 8 | 56/803 (7.1) | 88/815 (10.9) | 0.66 (0.48 to 0.90) | 26 (15 to 96)
N/A = not applicable
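To see where such numbers come from, here is a sketch using the crude pooled counts from Table 1. This is crude pooling across trials, so the results will not exactly reproduce the published trial-stratified estimates; it only illustrates the mechanics of the indirect calculation.

```python
import math

def odds_ratio_se(a, n1, c, n2):
    """Crude odds ratio and SE of its log from pooled counts:
    a events of n1 in treatment, c events of n2 in comparator."""
    b, d = n1 - a, n2 - c
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return odds_ratio, se

# Crude pooled counts from Table 1
or_tmp_ap, se1 = odds_ratio_se(26, 681, 74, 613)   # TMP vs AP
or_dp_ap, se2 = odds_ratio_se(51, 732, 58, 718)    # DP vs AP

# Indirect comparison: ratio of the two odds ratios; the variance
# of the log indirect OR is the sum of the two variances
or_indirect = or_tmp_ap / or_dp_ap
se_ind = math.sqrt(se1 ** 2 + se2 ** 2)
lo = math.exp(math.log(or_indirect) - 1.96 * se_ind)
hi = math.exp(math.log(or_indirect) + 1.96 * se_ind)
print(f"Indirect TMP vs DP OR = {or_indirect:.2f} ({lo:.2f} to {hi:.2f})")
```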
Figure 1: Percent with P carinii pneumonia with AP, TMP and DP in direct and indirect randomised trials

Table: Discontinuation for lack of efficacy in direct randomised comparisons of risperidone, olanzapine and haloperidol

Comparison | Number of trials | Treatment number/total (%) | Comparator number/total (%) | Relative risk (95% CI) | NNT (95% CI)
Risperidone vs haloperidol | 7 | 225/1573 (14) | 68/404 (17) | 0.81 (0.63 to 1.03) | N/A
Olanzapine vs haloperidol | 3 | 375/1860 (20) | 239/786 (30) | 0.68 (0.59 to 0.78) | 10 (7 to 15)
Risperidone vs olanzapine | 1 | 24/172 (14) | 28/167 (17) | 0.83 (0.50 to 1.37) | N/A
N/A = not applicable
In the direct comparisons with haloperidol for this outcome, risperidone failed to beat haloperidol, while olanzapine did beat it. There was no difference in the direct comparison of risperidone with olanzapine. A difficulty was that the rate of discontinuation for lack of efficacy with haloperidol in the olanzapine trials was quite a lot higher than in the risperidone trials. An abacus plot of data from all treatment arms (Figure 2) emphasises the dependency on some large trials. Overall, withdrawal for lack of efficacy occurred in 14% (13 to 16%) of 1745 patients on risperidone, 20% (18 to 22%) of 2027 patients on olanzapine and 26% (23 to 28%) of patients on haloperidol.

Figure 2: Percent discontinued because of lack of efficacy in randomised comparisons of risperidone, olanzapine and haloperidol
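A naive indirect estimate through the common haloperidol comparator shows how the heterogeneity in the haloperidol arms can mislead. This sketch uses the published relative risks; note that the adjusted indirect method is defined for odds ratios, and with event rates of 14-30% the ratio of relative risks is only a rough approximation.

```python
# Naive indirect comparison of risperidone vs olanzapine through the
# common haloperidol comparator, using the published relative risks
rr_risp_halo = 0.81   # risperidone vs haloperidol
rr_olan_halo = 0.68   # olanzapine vs haloperidol
rr_indirect = rr_risp_halo / rr_olan_halo
print(f"Indirect risperidone vs olanzapine RR = {rr_indirect:.2f}")
# The single direct trial gave 0.83 (0.50 to 1.37); the indirect
# estimate points the other way, which fits with the higher
# haloperidol discontinuation rates in the olanzapine trials.
```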

Table: At least 50% pain relief with paracetamol plus codeine in randomised trials

Paracetamol + codeine dose | Number of trials | Treatment number/total (%) | Comparator number/total (%) | Relative risk (95% CI) | NNT (95% CI)
1000 mg + 60 mg | 3 | 65/114 (57) | 9/83 (11) | 4.8 (2.6 to 8.8) | 2.2 (1.7 to 2.9)
600/650 mg + 60 mg | 13 | 191/398 (48) | 78/418 (19) | 2.5 (2.0 to 3.1) | 3.4 (2.8 to 4.3)
300 mg + 30 mg | 4 | 56/215 (26) | 14/164 (9) | 3.2 (1.8 to 5.6) | 5.6 (4.0 to 9.8)
Figure 3: Percent with outcome of at least 50% pain relief over 4 to 6 hours for placebo, paracetamol 300 mg + codeine 30 mg, paracetamol 600 mg + codeine 60 mg and paracetamol 1000 mg + codeine 60 mg
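The NNTs in the table are just the reciprocal of the difference in event rates, and claims about how confident we can be in an NNT can be checked by simulation. A sketch using the crude pooled counts (so figures differ slightly from the trial-stratified table values), taking the highest-dose row as the example and assuming, purely for the simulation, that the observed event rates are the true ones:

```python
import random

def nnt(events_t, n_t, events_c, n_c):
    """Number needed to treat = 1 / absolute risk difference."""
    return 1.0 / (events_t / n_t - events_c / n_c)

# Crude NNTs from the pooled counts in the table above; the table's
# trial-stratified values are 2.2, 3.4 and 5.6
print(round(nnt(65, 114, 9, 83), 1))     # 1000 mg + 60 mg
print(round(nnt(191, 398, 78, 418), 1))  # 600/650 mg + 60 mg
print(round(nnt(56, 215, 14, 164), 1))   # 300 mg + 30 mg

# How often would an estimated NNT land within +/-0.5 of the true
# value with this many patients? Simulate binomial sampling with
# the observed event rates taken as the truth.
random.seed(1)
p_t, p_c, n_t, n_c = 65 / 114, 9 / 83, 114, 83
true_nnt = 1.0 / (p_t - p_c)
sims, hits = 20000, 0
for _ in range(sims):
    e_t = sum(random.random() < p_t for _ in range(n_t))
    e_c = sum(random.random() < p_c for _ in range(n_c))
    diff = e_t / n_t - e_c / n_c
    if diff > 0 and abs(1.0 / diff - true_nnt) <= 0.5:
        hits += 1
print(f"Within +/-0.5 of true NNT in {100 * hits / sims:.0f}% of simulations")
```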

Other things that could be done would include assessing how accurate we can be with this level of efficacy and amount of data (92% confident that we are within ±0.5 of the true NNT). We'd need data from 100 more patients to be at least 95% confident. In three trials with active controls there was information on another 117 patients given paracetamol/codeine at the doses interesting to us, and they had a similar rate of pain relief. In another six trials, omitted from the meta-analysis because of technical problems with measurement scales rather than design issues that could lead to bias, the combination of paracetamol and codeine was better than placebo or comparator on at least one measure. So the sparse data in indirect studies can be supplemented with considerable amounts of information from other high-quality trials.

Comment
Indirect comparisons make for the biggest problems and arguments. The bottom line is that there is no doubt that the best information will come from large, properly constructed randomised trials, using valid outcomes, and done in a way that is meaningful to clinical practice. In most cases this is nothing more than baying for the moon. When we need to make decisions now, based on the information we have, we will be forced to look at indirect comparisons.

The simple rule is that quality cannot be compromised. Data from trials prone to bias because of faulty design won't help us, and may drive us to an incorrect conclusion. Then we have to use outcomes that make sense. And we need sufficient numbers of patients and events to overcome any random effects. After that, we're probably on our own, though indirect odds ratio calculations may be useful [1] in some circumstances. Abacus plots of single trial arms can be useful backups, but they lose the advantage of randomisation unless there is excellent clinical homogeneity to begin with (and even then they should be used with caution until we know more).

References:
