
Mindstretcher - does unpublished information make a difference?

A major problem with the old chestnut of publication bias is knowing what information is actually unpublished. The argument runs that systematic reviews come to positive conclusions only because a host of negative reports remain unpublished precisely because they are negative. A further complication is that much of what goes unpublished is of inferior quality. How, then, is it possible to know whether unpublished data are likely to change the magnitude or direction of any result?

Unless we have published and unpublished studies of equal quality, examining the same outcome, the same intervention in the same population, we have little hope of coming to a satisfactory conclusion. A new analysis of the association of dyspepsia with NSAIDs [1] addresses this issue by comparing published data with unpublished data submitted to the FDA.

Study

Randomised trials of NSAIDs and gastrointestinal toxicity were sought through searches of four electronic databases up to the end of 1997. Any randomised trial of an oral NSAID given to adults for at least four days and compared with placebo was considered if it reported gastrointestinal side effects. The analysis was limited to dyspepsia because this outcome is frequently reported in randomised trials.

All FDA reviews of new drug applications and supplements for naproxen, ibuprofen, diclofenac, etodolac and nabumetone were examined because they were the most frequently prescribed NSAIDs in the USA. Each was searched for randomised trials with the same inclusion and exclusion criteria as for published trials. The possibility of studies being in both domains was examined.

After a literature review, a working definition of dyspepsia was made. This was any outcome term relating to epigastric or upper abdominal pain or discomfort, including the term dyspepsia itself, but specifically excluding nausea, vomiting or heartburn.

Treatment and control group percentages and risk ratios were pooled using a random effects model. A meta-regression analysis was also conducted to determine whether the effect of an NSAID differed between published studies and FDA submissions, and to examine the influence of age, patient type, exclusion criteria, study reporting quality, and dose and duration.
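Random effects pooling of risk ratios is usually done on the log scale with inverse-variance weights plus a between-trial variance term. The sketch below uses the common DerSimonian-Laird estimator with invented trial counts purely for illustration; the actual trial data and software used by the authors are not given in this summary.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log risk ratios.
# The trial counts below are invented for illustration only - they are
# NOT the data from the MacLean analysis.
import math

# (events_treatment, n_treatment, events_placebo, n_placebo) per trial
trials = [
    (12, 200, 8, 195),
    (5, 120, 4, 118),
    (20, 310, 15, 305),
]

# Log risk ratio and its approximate variance for each trial
log_rr, var = [], []
for a, n1, c, n2 in trials:
    log_rr.append(math.log((a / n1) / (c / n2)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)

# Fixed-effect (inverse-variance) pooled estimate and Q heterogeneity statistic
w = [1 / v for v in var]
pooled_fe = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, log_rr))

# Between-trial variance tau^2 (DerSimonian-Laird), truncated at zero
k = len(trials)
tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

# Random-effects weights add tau^2 to each trial's variance
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"Pooled RR {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

With wider random-effects weights, trials contribute more equally than under a fixed-effect model, which is why this method is the usual choice when trial populations and designs differ.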

Results

After excluding studies for various reasons, there were 15 reports (1,455 patients) comparing NSAIDs to placebo and with dyspepsia as an outcome in the published literature. There were 11 comparable reports (2,368 patients) in the FDA reviews. All were randomised, almost all were blinded, and 90% of the published and 80% of the FDA studies had quality reporting scores of 3 out of a possible 5, a level known to minimise bias.

The association of NSAID with dyspepsia in published and unpublished studies was the same (Table 1). This analysis included all doses (high, medium or low, which for ibuprofen was more than 3,200 mg for high, 1,600-3,200 mg for medium, and below 1,600 mg a day). In the meta-regression only dose was related to the rate of dyspepsia, with high dose associated with more than twice the rate of dyspepsia. Being a published or FDA study made no significant difference.

Table 1: Dyspepsia and NSAIDs - published and unpublished studies

                     Number of            Percent with dyspepsia (95% CI)
                     Studies   Patients   Treatment          Placebo            Risk ratio (95% CI)
Published studies       15       1455     5.5 (3.1 to 7.9)   3.0 (1.0 to 5.0)   1.1 (0.8 to 1.8)
FDA studies             11       2368     4.1 (2.3 to 5.8)   3.2 (1.5 to 4.9)   1.1 (0.7 to 1.6)
All studies             26       3823     4.7 (3.3 to 6.2)   3.1 (1.8 to 4.3)   1.1 (0.9 to 1.5)

Comment

What is really interesting here is that 62% of the information was not published, except in FDA reviews. This is not a little bit of unpublished information, but most of it. Yet the addition of unpublished information made no difference to the conclusion.
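The 62% figure can be checked directly from the patient counts given in the results: 2,368 of the 3,823 patients (2,368 + 1,455) appeared only in FDA reviews.

```python
# Share of patients whose data appeared only in FDA reviews,
# using the patient counts quoted in the results section
fda_patients = 2368
published_patients = 1455
share = fda_patients / (fda_patients + published_patients)
print(f"{share:.0%}")  # → 62%
```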

This is powerful evidence against the publication bias argument, though it is only one example. That is so often the problem: we take a single, often poor, example and extrapolate it to every eventuality. Even here there is a weakness, in that different doses have been lumped together, and perhaps they should not have been. The additional meta-regression analysis helps protect against a false conclusion on that score.

The take-home message is this: unthinking bleating about some unknown unpublished data set that must perforce come to an opposite conclusion is probably wrong. Undoubtedly, though, pharmaceutical companies and others should be encouraged to make public that which is unpublished, so that any doubt can be removed.

References:

  1. MacLean CH et al. How useful are unpublished data from the Food and Drug Administration in meta-analysis? Journal of Clinical Epidemiology 2003; 56: 44-51.