
How good is peer review?


Bandolier has been struck recently by an upsurge in questions about peer review and the importance of this process in "guaranteeing" quality. That will produce a wry smile in many who are reviewers, or have been the subject of review. Too often it seems to be a pretty haphazard process.

Reviewers are usually busy people, who try to help editors, their professional colleagues, and authors by giving freely of their time to judge manuscripts and to improve them. Many of us are grateful to reviewers who have helped improve our papers. But just as often unthinking, ignorant or insulting remarks by reviewers drive us to fury. What about the reviewer from a journal at the leading edge of evidence who dismissed a negative systematic review of a procedure because "they tried it once and it seemed to work"? And the editor accepted it!

All too often, accepting or rejecting submitted papers seems to be little more than the random play of chance. A new study in neuroscience confirms just that [1].

Study


Two journals that routinely sent manuscripts to two reviewers allowed access to the assessments of those manuscripts. One journal provided information on all 179 manuscripts received over a six-month period, and the other on 116 consecutive manuscripts. Both journals used a structured assessment, and reviewers were asked to make two judgements: whether the manuscript should be accepted, revised or rejected, and whether its priority was high, medium or low.

Agreement between reviewers was assessed using the kappa statistic. A value of 0 represents chance agreement, and a value of 1 perfect agreement. Scores of 0 to 0.2 are considered very poor, those between 0.2 and 0.4 poor, between 0.4 and 0.6 moderate, between 0.6 and 0.8 good and between 0.8 and 1 excellent.
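
To make the arithmetic concrete, here is a minimal sketch (in Python, using invented reviewer decisions that are purely illustrative and not data from the study) of Cohen's kappa: the observed proportion of agreements, minus the agreement expected by chance from each reviewer's own accept/revise/reject rates, divided by the maximum possible improvement over chance. A negative value, as appears in Table 1, means agreement worse than chance.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making categorical judgements."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed proportion of manuscripts on which the two reviewers agree.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Agreement expected by chance alone, from each reviewer's marginal rates.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2

    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical accept/revise/reject decisions for ten manuscripts
# (illustrative only -- not data from the study).
reviewer_1 = ["accept", "reject", "revise", "accept", "reject",
              "revise", "accept", "reject", "accept", "revise"]
reviewer_2 = ["reject", "reject", "accept", "accept", "revise",
              "revise", "reject", "accept", "accept", "reject"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")

For these ten hypothetical manuscripts the reviewers agree four times out of ten, but chance alone would account for about a third of that, so kappa works out at roughly 0.09, squarely in "very poor" territory.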

Results


Agreement was not good (Table 1). For neither journal was it convincingly better than chance, either for the decision to accept, revise or reject, or for rating priority as high, medium or low.

Table 1: Reviewers for two neuroscience journals failed to agree on quality and priority of manuscripts

                 Interobserver agreement (kappa)
Journal    Accept or reject          Priority
A          0.08 (-0.04 to 0.20)      -0.12 (-0.30 to 0.11)
B          0.28 (0.12 to 0.40)       0.27 (0.01 to 0.53)

Kappa values of 0 to 0.2 show very poor agreement, 0.2 to 0.4 poor, 0.4 to 0.6 moderate, 0.6 to 0.8 good, and 0.8 to 1.0 excellent.

Comment


Problems with peer review are not new. The paper has a lively discussion of similar findings in other areas of science and medicine, and of some of the attempts that have been made to improve matters. We can take some comfort from the fact that such work is in progress, but miracles are unlikely and peer review will remain a flawed process for some time to come. That means we have to accept that publication, and often grant applications, will remain something of a lottery.

It is a shame. It explains why complete rubbish appears in the best of journals, and why superb and important research can be hard to publish. The lesson is to keep submitting, because eventually by chance two reviewers will love it.

Reference:

  1. Rothwell PM, Martyn CN. Reproducibility of peer review in clinical neuroscience. Is agreement between reviewers any greater than would be expected by chance alone? Brain 2000; 123: 1964-1969.