Peer Review: Reviewed

Jamie Baross asks: Has our belief in science been misplaced, and can we place our trust in the peer-review process?

Image: AJ Cann

More than ever, our civilisation has become dependent on complex technologies. Politicians navigating international conflicts over chemical weapons or nuclear proliferation rest their credibility on the scientific claims of UN weapons inspectors. Most importantly, we face a changing climate that poses an existential risk to us all. How we respond depends on our knowledge of the environment, and on the technologies available.

Nonetheless, science today faces vicious attacks on its legitimacy. Climate scientists (even students on occasion) are regularly subjected to a slurry of hate mail, and have been targeted by mail bombs after speaking publicly about their work.

Across the whole political spectrum, anti-science narratives are on the rise. Some seek to discredit scientists themselves, with accusations of institutionalised corruption, while others go as far as to challenge the accepted notions of human knowledge or civilisation itself.

The world needs credible scientific advice more than ever. Yet, the institutions of science stand on increasingly shaky ground. The critics of science are not just uninformed outsiders; there are rising numbers of scientists dissatisfied with the state of the discipline’s most sacred institutions.

The credibility of science rests on two pillars: the scientific method by which discoveries are pursued, and peer review – the process that weeds out bias, self-interest, and simple bad science.

The traditional role of peer review is to restrict which papers make it to journals. Before a scientist can publish their work, it must be submitted to a panel of their (anonymous) peers – those with the expertise to critique the work and weigh it against other submitted works. Theoretically, only the best and most interesting work is published.

Image: Juliacdrd (https://pixabay.com/en/research-laboratory-scientific-853474/)

Unfortunately, the reality looks increasingly far from this ideal. The contemporary ‘panel of your peers’ is an illusion. Our knowledge has become so vast, and our fields of study so specialised, that reviewers (mostly volunteers) inevitably face piles of papers of which they can claim true peerage to only a small fraction. The response has been to lean on seemingly objective statistical measures: an author’s previous citation counts, and journal impact factors. These numbers, however, distort the assessment of papers more than they aid it.

The number of citations a scientist’s work acquires is a proxy for neither its quality nor its importance. Citation counts cannot distinguish between a citation that forms the central starting point of another scientist’s investigation and a footnote whose removal would take nothing of significance from the work.

Citation numbers are shaped as much by passing trends and historical accident as by lasting contributions to human understanding. When a sub-field suddenly becomes fashionable, even trivial contributions gain large numbers of citations as part of a continual, self-referential chatter. Conversely, a dense mathematical paper in a niche field might make a significant analytical contribution to fundamental science yet garner few citations, even if it later forms the vital foundation of a transformative technology or technique.

Impact factors attempt to measure journal quality by taking the average number of citations per paper that a journal has received over the previous two years. Many see this as a flawed measure; the respected biologist Stephen Curry has proclaimed, “if you use impact factors you are statistically illiterate.” First, the measure is being misapplied. It was originally conceived to help libraries decide which expensive journal subscriptions to spend their meagre budgets on. Today, researchers and their work are judged on the journals they have previously published in.
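For the statistically inclined, the standard two-year calculation runs as follows (this is the widely used definition; the worked figure beneath it is purely illustrative):

\[
\mathrm{IF}_{y} \;=\; \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

A journal whose 250 articles from the previous two years picked up 500 citations this year would therefore score an impact factor of 2.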

Image: Delphinmedia (https://pixabay.com/en/books-science-shelf-library-book-408220/)

Furthermore, this average has very little connection to any given paper published in a journal. The impact factor is massively skewed by the small number of ‘top’ articles: for Nature, 89% of the citations in 2004 came from just 25% of the papers published. Even if counting citations were not itself problematic, the impact factor of a specialist journal tells a reviewer outside the field little about its quality, or about the prestige to be attributed to publishing within it.
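A quick illustration of that skew (the numbers are invented for the example, not Nature’s actual figures): imagine a journal whose 100 papers attract 1,000 citations between them, with 25 papers accounting for 890 of those. Then

\[
\text{mean} = \frac{1000}{100} = 10 \text{ citations per paper}, \qquad \text{but the remaining 75 papers average } \frac{1000-890}{75} \approx 1.5 .
\]

The headline figure says ten; the typical paper sees one or two.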

The result of all this is a system which tends to reinforce and prolong whatever happens to be the latest fashion in research. This damages the connection between quality of science and its publication more generally.

These problems undermine not only the fairness of publishing, but also the funding of research and the careers of scientists. Academic departments are no longer managed by academics but, increasingly, by a new class of professional managers.

With squeezed budgets, managers try to extract the most valuable research they can. With minimal scientific knowledge of their own, however, they need some other metric by which to judge where to direct funds or whom to employ. Managers might not know much about magnetohydrodynamics, but they feel confident working with targets and statistics. The result is obvious: a researcher’s worth is now measured by their citation counts, and their work is constrained by what appeals to high-impact-factor journals.

Distrust of impact factors is rising, even amongst the prominent journals that benefit from them. Yet the problem of how to conduct and assess research in so specialised a world remains, and some are proclaiming radical new directions. Post-publication peer review would turn the role of peer review on its head: advocates suggest that anything meeting basic technical standards should be published freely, with the best sorted from the rest afterwards, allowing the opinions of actual ‘peers’ to come to the fore. Regardless, the future of scientific practice looks turbulent.

Jamie Baross is a Physics student at the University of Warwick, and the former Editor of the Warwick Globalist’s Science & Technology section.

The Warwick Globalist is launching a month-long crowdfunding campaign to raise £1,200 in four weeks. We are doing it primarily to reduce our reliance on corporate sponsorship. For more information, and to donate, please go to: http://bit.ly/warwickglobalist
