Thursday, October 01, 2009

Peer Reviewers Get Worse, Not Better, Over Time

Almost all peer reviewers get worse, not better, over time.

So suggests a study presented at the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver, Canada, and reported by Nicola Jones in the October 2009 issue of Nature. In his paper "The Natural History of Peer Reviewers: The Decay of Quality," Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, reported his analysis of the quality scores that 84 editors at the journal had given to nearly 1,500 reviewers between 1994 and 2008.

The journal routinely has its editors rate reviews on a scale of one (unsatisfactory) to five (exceptional). The average score stayed at roughly 3.6 throughout the entire period. The surprising result, however, was how individual reviewers' scores changed over time: 93% of them declined, and only the steady influx of fresh reviewers kept the overall average up. The average decline was 0.04 points per year.
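To get a feel for what that rate implies, here is a quick back-of-the-envelope sketch. It is my own illustration, not part of the study, and it assumes the decline is linear, starting from the journal's overall average of 3.6:

    # Project a reviewer's rating under the reported average decline
    # of 0.04 points per year. The linear model and the 3.6 starting
    # point are illustrative assumptions, not the study's model.
    def projected_score(start=3.6, decline_per_year=0.04, years=0):
        return start - decline_per_year * years

    # Over the 14-year study window (1994-2008):
    print(projected_score(years=14))  # 3.6 - 0.56 = 3.04

In other words, a decline that sounds tiny per year adds up to more than half a point on a five-point scale over the span of the study.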

As quoted by Jones, Callaham said "I was hoping some would get better, and I could home in on them. But there weren't enough to study." According to Callaham, fewer than 1% of reviewers improved at any significant rate, and even at that rate it would take 25 years for the improvement to become valuable to the journal.

Jones also notes that Callaham agrees a select few senior advisers remain very useful, but that in his own observation older reviewers do tend to cut corners. Young reviewers assigned a mentor typically scored half a point higher than their non-mentored colleagues, yet once the mentor's oversight ended after a year or so, the advantage evaporated.