science20.com

The Problem With Peer Review

In a world where misinformation, deliberate or accidental, reigns supreme; in a world where lies become truth if they are broadcast for long enough; in a world where we have unlimited access to superintelligent machines, yet we prefer to remain ignorant; in this world we unfortunately live in today, the approach taken by scientists to accumulate knowledge - peer review - is something we should hold dear and preserve with care. And yet... the peer review system is crumbling. While it remains science's immune system -- a decentralized, volunteer-driven process that filters bad data, strengthens good ideas, and pushes researchers to improve their work before it is declared part of the record -- it has recently become overloaded, is under hacking attacks, and is even starting to attack itself, like an autoimmune response.

There are multiple causes for concern. First, the system is based on academics doing _pro bono,_ unpaid work for the sake of the advancement of good science. They are asked to review the work of colleagues they do not know, and receive not even a pat on the back as a reward. But reviewing papers is a time-intensive job, and researchers are overcommitted: they already work 60-hour weeks or more to push their own research forward while also attending to a long list of mandatory tasks (teaching, grant application writing, institutional responsibilities, mentoring, et cetera). So when scientists receive a request to review a manuscript, the odds that they accept are very low.

While the availability of reviewers to screen proposed new publications is decreasing, the number of submissions keeps increasing. How is that possible in a closed system? There are several reasons. One is pressure from universities, which need large numbers of publications by their researchers to improve their rankings. Another is the intrinsic connection between a researcher's publication record and their perceived worth, where article and citation counts are the unavoidable metrics. And then there are hacking attempts, now more and more common, in which inconsequential papers are generated by AI or other automatic tools - not to mention fraudulent "paper mill" activities. To the above one should of course add the push from publishers, who operate in an ever more competitive environment full of "predatory" journals: they press their editors to accept a higher fraction of submissions and to speed up the review cycle.

**My own experience**

I can also report here on my own experience as an author, as a reviewer, and as an editor of scientific publications. I have significant experience as an author - I have authored over 1800 articles (but I have not read them all!); I can also claim some experience as a reviewer - having produced probably around 100 external reviews plus 200-300 internal ones (inter-collaboration, pre-submission); and I am building some experience as an editor too - of five journals so far, one as editor in chief. So what can I add to the above?

As an author, I feel that there are fewer and fewer good reviews of the works I submit. This is not surprising: reviewers are not subjected to any evaluation of their work, and when pressed for time, they reduce their effort. I also see this when I handle submissions as an editor: many reviews are so shallow that I have to ignore them. And as a reviewer, I have significantly reduced the number of papers I accept to read, for all the reasons already discussed (and, on top of that, a little bit because I am often bored by the content-free articles I am asked to screen). Perhaps the most interesting thing I can report here is what happens under my hat as an editor. I am presently handling manuscripts submitted to four Elsevier journals. These are high-profile, respectable journals, but their reputation does not significantly increase the odds that a colleague will accept my requests for review. So what happens is that I find myself having to ask 8, 10, or even 20 academics to get a single review back. This is frustrating, but it is worse than that, because it blows out of proportion the amount of time I have to spend organizing the peer review of each submitted article. What happens is that instead of investing time in reading the article, I just skim through its contents for obvious signs of crackpottery, and then rely massively on automated suggestions to identify the academics I request reviews from. Automated means?
Yes - I am handling papers that are sometimes a far cry from my specific field of expertise, so I basically have no idea who the scientists best placed to judge the scientific validity of a manuscript would be. Fortunately there is a system that suggests reviewers based on their publication record, and classifies them by their reviewing activity history, country, institution, editorial roles, connections with the manuscript authors, et cetera. It is very helpful, but I am pushed to overuse it.

What this means is that, by relying on automated means to pick the targets of my reviewing requests, I am able to annoy a lot of people. By just ticking a few boxes I can send 10 emails at once, when I can expect to net one or two reviews back. But it is easy to understand that this makes the scientists at the receiving end of these continuous review requests less and less sympathetic toward the system...
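The arithmetic behind this frustration is easy to make concrete. As a rough sketch (the acceptance probabilities below are illustrative assumptions, chosen to bracket the 8-to-20-requests-per-review anecdote above, not figures from any journal's data), if each request is accepted independently with probability p, an editor needs on average 1/p requests per review:

```python
# Rough model of the editor's workload: if each review request is accepted
# independently with probability p, the number of requests needed per review
# follows a geometric distribution, with expectation 1/p. The values of p
# below are assumptions, consistent with the 8-20 asks-per-review anecdote.

def expected_requests(p: float, reviews_needed: int = 2) -> float:
    """Expected number of review requests to collect `reviews_needed` reviews."""
    return reviews_needed / p

for p in (0.05, 0.10, 0.20):
    n = expected_requests(p)
    print(f"acceptance rate {p:.0%}: ~{n:.0f} requests to secure 2 reviews")
```

Under these assumptions, collecting the customary two reviews at a 5% acceptance rate means sending on the order of 40 emails per manuscript.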

**How to fix it?**

How to fix this compounded problem? Perhaps a few things might help. Here is a short list.

**Recognition and incentives**: Peer review needs to count — in CVs, evaluations, and funding metrics. Platforms that measure reviewing activities exist and may help, but more institutional support is needed.

**Transparency**: More use of open peer review (where possible) should be made, as this increases accountability and builds trust within the community.

**Editorial triage**: Journals should take more responsibility to filter obvious junk before it hits reviewers' desks. However, journals operate to maximize profit, and filtering is expensive. AI can help here, but the final responsibility should be left in the hands of humans.

**Cultural repair**: Perhaps what we most direly need to do is to revive the idea that reviewing is not just a chore but a scholarly act: part of the shared responsibility to keep the scientific record strong.

I think the above list (which I have used ChatGPT to help me compile) can be useful, but I have come to the conclusion that the only real response to a system based on the profit of private publishers is to migrate to a situation where reviewers are **paid for their time**. Maybe just $100 per review would suffice. It would be sustainable, as it corresponds to less than 5% of the typical publishing costs.
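The sustainability claim can be checked with a back-of-the-envelope calculation. The article processing charge (APC) figure below is my assumption, not a number from the article: typical open-access APCs run roughly $2000-3000 per paper.

```python
# Back-of-envelope check of the claim that $100 per review is under 5% of
# typical publishing costs. The APC value is an assumption: open-access
# article processing charges commonly run in the $2000-3000 range.

def review_cost_fraction(fee_per_review: float, reviews: int, apc: float) -> float:
    """Fraction of the publishing charge absorbed by paying reviewers."""
    return fee_per_review * reviews / apc

# One $100 review fee against an assumed $2500 APC:
frac = review_cost_fraction(100, 1, 2500)
print(f"{frac:.0%} of the APC per review")  # 4% under these assumptions
```

Even with two paid reviewers per paper, the cost stays below a tenth of the assumed APC, which is the sense in which the scheme looks sustainable.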

"Paid peer review? Anathema!" I can hear some of you say. Yes, it sounds bad. But think about it: publishers are exploiting working hours of academics to increase their profits. The system is not working - reviews are less and less careful, and garbage gets published. If a part of the profits were given to reviewers, like what is already happening with editors, the tendency could be reversed in no time. What is the downside (if you are not a publisher, that is, otherwise the answer is trivial)? We could find out we need to avert a situation where editors would assign reviews to the same people (or even create a circle of "review mill"), but that would be easy to do, as reviews are commissioned within software interfaces that could prevent the concentration of reviews to the same scientists. We might also need to install a system of review evaluation, to incentivize meaningful reviews. It might be tricky, but it seems a much easier challenge than the one we are currently facing.As is the case with similar "non-adiabatic", "through the potential-barrier" transitions, such a change will not happen by itself - it can only be the result of a coherent push by the scientific community. I remain pessimistic... But I cannot help thinking that it would asymptotically be the correct solution to a problem that has become very serious. Time will tell!
