
Reporting RCTs


Randomised trials are the gold standard for evaluating treatment efficacy. How the results are reported may alter how we judge the magnitude or importance of the results of a particular trial. We know that relative risk reduction tends to make us use less conservative judgements than absolute risk reduction, or the number needed to treat. So how do trials report their results? Not great, but getting better seems to be the answer [1].
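The arithmetic behind these three ways of presenting the same result is simple, and a worked example shows why relative figures sound more impressive. This is a minimal sketch with made-up event rates (not figures from any trial discussed here):

```python
# Hypothetical trial: treatment cuts the event rate from 4% (control) to 3% (treated).
control_event_rate = 0.04   # proportion of control patients with the event
treated_event_rate = 0.03   # proportion of treated patients with the event

arr = control_event_rate - treated_event_rate   # absolute risk reduction
rrr = arr / control_event_rate                  # relative risk reduction
nnt = 1 / arr                                   # number needed to treat

print(f"ARR = {arr:.1%}")   # 1.0% -- sounds modest
print(f"RRR = {rrr:.0%}")   # 25% -- sounds impressive
print(f"NNT = {nnt:.0f}")   # treat about 100 patients to prevent one event
```

All three numbers describe exactly the same treatment effect; only the framing changes, which is why presentation can shift judgements of effectiveness.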

Study

Five major English-language general medical journals (Annals of Internal Medicine, BMJ, Lancet, JAMA and New England Journal of Medicine) were examined for the years 1989, 1992, 1995 and 1998. The start year of 1989 was chosen because it was the year after the original publication suggesting that NNTs may be preferable to other ways of describing the results of research.

All issues of each journal were examined for studies reporting a randomisation process, with binary outcomes, and with a statistically significant treatment effect.

Results

Three hundred and fifty-nine articles met the inclusion criteria. NNT was reported in eight articles and absolute risk reduction in 18 (Table 1); most of these appeared in papers published in 1998.


Table 1: RCTs reporting results as NNT or ARR


Year    Total RCTs    NNT    ARR
1989        55          0      0
1992        91          1      3
1995        93          1      5
1998        96          6     10


Background

We know that reporting outcomes as relative risk reduction (or increase) can mislead [2]. David Naylor and colleagues [2] compared clinicians' ratings of therapeutic effectiveness when the same end-points were presented as percent reductions in relative risk, as reductions in absolute risk, and as numbers-needed-to-treat. Questionnaires presenting either relative or absolute data, each alongside NNT, were randomly allocated among doctors of various grades at Toronto teaching hospitals. Ratings were made on an 11-point scale running from -5 ('harmful') through 0 ('no effect') to +5 ('very effective').

Relative presentation consistently produced higher scores - that is, the intervention was judged to be more effective (Figure 1). For a single end point, any myocardial infarction, both the relative and absolute presentations were scored consistently higher than NNT presentation of the same data. NNT reporting of the same information reduced scores by about two points on the effectiveness scale, shifting the judgement from quite effective to only slightly effective. Subsequent studies generally confirm this [3, 4].


Figure 1: Scoring effectiveness on 'any myocardial infarction' by method of presentation




Comment

Efforts to improve the reporting of randomised trials include the CONSORT statement, published in 1996 and updated subsequently. It is not alone. We also have the QUOROM statement about reporting systematic reviews and meta-analyses, MOOSE for meta-analysis of observational studies, and STARD for reporting studies of diagnostic accuracy. These can all be found in one place, the CONSORT website (http://www.consort-statement.org/). A CONSORT revision in 2001 encouraged the reporting of absolute values and NNT.

Bandolier is awed by the hard work and thoughtfulness of the good folk who prepare these guidelines. Bandolier may not be in complete agreement with all the points in all the statements, but those are pointy-headed academic quibbles. There is little point in publishing reports of trials if ordinary folk like us cannot understand the results. As it is, clinical trials tend to be read less often than narrative reviews [5]. There is a hint that things are getting better (Table 1), and anecdotally NNTs seem to be appearing more often. In the meantime we will have to learn to make the results of research useful, understandable, and meaningful ourselves.

References:

  1. J Nuovo et al. Reporting number needed to treat and absolute risk reduction in randomised controlled trials. JAMA 2002 287: 2813-2814.
  2. CD Naylor et al. Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness? Annals of Internal Medicine 1992 117: 916-921.
  3. M Bobbio et al. Completeness of reporting trial results: effect on physicians' willingness to prescribe. Lancet 1994 343: 1209-1211.
  4. T Fahey et al. Evidence-based purchasing: understanding results of clinical trials and systematic reviews. BMJ 1995 311: 1056-1060.
  5. YK Loke, S Derry. Does anybody read 'evidence-based' articles? BMC Medical Research Methodology 2003 3:14.
