
Adverse events: can we trust the data?

Establishing how good a new intervention really is is relatively straightforward, but measuring harm from treatments has been approached more casually. It is appropriate that the way harm is measured and reported is now getting more attention, both in published randomised trials [1] and in clinical trial design.

Study


The study [1] set out to examine adverse event reporting in seven medical areas: HIV therapy, antibiotics for acute sinusitis, thrombolysis for acute myocardial infarction, NSAIDs for rheumatoid arthritis, hypertension in the elderly, antibiotic treatment of Helicobacter pylori, and selective decontamination of the gastrointestinal tract. Trials for each topic were identified from systematic reviews, meta-analyses, and comprehensive databases of randomised trials. The meta-analyses were not updated.

Reporting adverse events


Adverse event reporting was examined both qualitatively and quantitatively, based on criteria previously defined for HIV trials [2]. From these criteria, two components were selected:

  1. Whether the number of withdrawals and discontinuations because of adverse events were reported, and whether the number was given for each specific type of adverse event leading to withdrawal.
  2. Whether the severity of the described adverse events (or abnormalities of laboratory tests) was adequately defined, partially defined, or inadequately defined.

To be adequate, a report needed a detailed description of severity or a reference to a known severity scale, with separate reporting of at least the severe or life-threatening events, and with at least two adverse events defined in this way with numbers for each study arm.
To be partially adequate, reports combined moderate with severe or life-threatening events, or reported separately only one of many adverse events.
To be inadequate, reports gave the total number of severe adverse events without details of specific types, lumped all grades together, gave only generic statements, or had no information at all on adverse events.
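These grading rules amount to a simple decision procedure. The sketch below (Python) is one way the criteria might be encoded; the field names (detailed_or_known_scale, events_with_per_arm_numbers, and so on) are hypothetical illustrations drawn from the wording above, not data items defined in the original study.

from dataclasses import dataclass


@dataclass
class SeverityReporting:
    """Features of how one trial report describes adverse event severity."""
    detailed_or_known_scale: bool       # detailed description, or reference to a known severity scale
    severe_reported_separately: bool    # severe or life-threatening events reported on their own
    events_with_per_arm_numbers: int    # adverse events graded this way, with numbers for each arm
    moderate_lumped_with_severe: bool   # moderate combined with severe or life-threatening
    one_of_many_reported_separately: bool  # only one of many reported adverse events given separately


def grade_reporting(r: SeverityReporting) -> str:
    """Return 'adequate', 'partially adequate' or 'inadequate' per the criteria above."""
    if (r.detailed_or_known_scale
            and r.severe_reported_separately
            and r.events_with_per_arm_numbers >= 2):
        return "adequate"
    if r.moderate_lumped_with_severe or r.one_of_many_reported_separately:
        return "partially adequate"
    # Totals without specifics, all grades lumped, generic statements,
    # or no adverse event information at all.
    return "inadequate"


if __name__ == "__main__":
    example = SeverityReporting(
        detailed_or_known_scale=True,
        severe_reported_separately=False,
        events_with_per_arm_numbers=1,
        moderate_lumped_with_severe=True,
        one_of_many_reported_separately=False,
    )
    print(grade_reporting(example))  # prints "partially adequate"

The example classifies a report that lumps moderate with severe events, and grades only one event separately, as partially adequate.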

Results


There were 192 randomised trials in the analysis, 61% of which were double-blind. The total number of patients was 130,000. Most of the trials were published before the 1990s, though some were published as recently as 1999. About a third were published in journals with impact factors of 7 or more, so they were by no means all published in obscure places.

The number of discontinuations per study arm was reported in 75% of trials, though the reason for the discontinuations per study arm was given in only 46% of trials (Table 1). The best clinical areas for reporting the number and reason for adverse event discontinuations were antibiotics for acute sinusitis, and arthritis.

Table 1: Percent of trials with different adverse event reporting outcomes

Reporting of safety                Percent of trials   Range
Discontinuations because of harm
  Number per arm given                    75           30-100
  Reasons per arm given                   46           20-68
Clinical adverse events
  Adequate reporting                      39           0-62
  Partially adequate reporting            11           0-20
  Inadequate reporting                    50           22-100
Laboratory-defined toxicity
  Adequate reporting                      29           0-62
  Partially adequate reporting             8           0-20
  Inadequate reporting                    63           25-100

Range refers to the limits found in each of the seven clinical areas.

Adequate reporting of clinical adverse events was found in 39% of trials, partially adequate reporting in 11%, and inadequate reporting in 50% of trials (Table 1). The best areas for reporting clinical adverse events (adequate plus partially adequate) were thrombolysis for myocardial infarction, and arthritis.

Adequate reporting of laboratory adverse events was found in 29% of trials, partially adequate reporting in 8%, and inadequate reporting in 63% of trials (Table 1). The best areas for reporting laboratory adverse events (adequate plus partially adequate) were treatments for HIV, and arthritis.

Comment


This business of adverse event reporting is both difficult and important. Importance is obvious: patients and professionals need to know the likelihood of a treatment not only being effective, but also producing harm. Harm can be common, mild, and reversible. It could be rare, major and irreversible. Individuals will view their importance differently. A flautist may view with dismay a treatment causing dry mouth, while others of us will simply drink more. A man or woman in their 30s with a family depending on them will think differently about a risk of death of 1 in 1000 than will a single person in their 70s, even if the benefit is the same. Truly one man's meat is another man's poison.

But as this report [1] shows, the chances of our being well informed, even by randomised trials of good quality for efficacy, are slight, because of deficiencies in recording or reporting adverse events. There were some obvious results. That HIV treatments did well on laboratory-defined adverse events was not surprising, because surrogate laboratory measures are important in HIV. That arthritis trials did well on clinical adverse event reporting (and discontinuations) was not surprising, because NSAIDs are known to cause gastrointestinal bleeding.

What is not surprising to those who undertake systematic reviews, and therefore read many trial reports, is that overall adverse event reporting was poor. One important lesson, though, is that one measure, the number of discontinuations per study arm, is the best reported, and should therefore feature prominently in systematic reviews as a useful marker of the overall toxicity of a treatment.

Further thoughts


It isn't just reporting adverse events that is important. This paper [1] and others [3] suggest improvements to the way adverse events are reported in clinical trials, and even propose amendments to CONSORT [4], a set of guidelines about reporting clinical trials that was a bit light on how it treated adverse events.

But there may be even more fundamental ways in which our knowledge is deficient. For instance, the way in which adverse events are recorded can give rise to significantly different rates [3]. Which is right? Difficult to know. Then there's the problem that adverse event recording for long-term treatments can include things like chest infections and other common minor ailments that have nothing to do with treatment. Unless there is some sifting, the results merely serve to confuse. For some conditions like migraine, we measure efficacy over one day and harm over one week. Why? What relevance does this have?

One thing is certain. Although important lessons about harm can be gleaned from systematic reviews of the literature (references 5 and 6 are examples), there is much we do not yet know. And we need that knowledge, especially if we are to inform professionals and the people they treat so that truly informed decisions can be made.

References:

  1. JP Ioannidis, J Lau. Completeness of safety reporting in randomized trials. An evaluation of 7 medical areas. JAMA 2001 285: 437-443.
  2. JP Ioannidis, DG Contopoulos-Ioannidis. Reporting of safety data from randomized trials. Lancet 1998 352: 1752-1753.
  3. JE Edwards et al. Reporting of adverse events in clinical trials should be improved: lessons from acute postoperative pain. Journal of Pain and Symptom Management 1999 18: 427-437.
  4. CB Begg et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996 276: 637-639.
  5. MR Tramèr et al. Propofol and bradycardia - causation, frequency and severity. British Journal of Anaesthesia 1997 78: 642-651.
  6. MR Tramèr et al. Quantitative estimation of rare adverse effects which follow a biological progression - a new model applied to chronic NSAID use. Pain 2000 85: 169-182.