This week is Peer Review Week, with a theme of quality in peer review, and we are sharing some experiences of peer review from people in the AuthorAID network. In this post, biologist-turned-development-practitioner Haseeb Md. Irfanullah discusses how peer review can be an opportunity for co-learning.
I still remember my first peer-review job. It was back in 2005. I was honoured and excited to be a reviewer for a world-class journal on botany. As a fresh PhD graduate, I considered it an amazing recognition of my scholarship. I can recall how much I enjoyed reading a brand-new, unpublished piece of research before anyone else (except the authors, of course!). And I was fascinated by the manuscript – a very interesting piece of research, very well written.
Since then, I have reviewed numerous manuscripts for many journals from a wide range of disciplines. The sentiment of 2005, however, changed over time. In the early days, I was super-thorough; I often edited the manuscript in addition to reviewing it! Gradually, I started focusing on the ‘research’ and presentation while reviewing. I limited myself to asking simple questions – what is the ‘new’ that the manuscript is talking about, is the research design logical and acceptable, and is the overall presentation reader-friendly and does it make sense?
Self-motivated volunteers like peer reviewers are great assets to our scholarly domain. But I want to call peer reviewing ‘professional volunteerism’. We must draw a line on how much time, energy, and expertise we spend on it before it becomes overwhelming self-exploitation.
Peer reviewers and quality
When we discuss the peer-review process, and its quality for that matter, we often focus on the reviewers. From the editor’s or journal’s perspective, we talk about how difficult it is to get good reviewers and how they vary in terms of commitment and motivation. We praise reviewers’ altruism and their being part of a large scholarly community where peers help each other out, often without even knowing each other’s names (I call it a fascinating example of 'good karma'!).
We also talk about the downside of the peer-review system, full of over-burdened, over-exploited, under-appreciated reviewers. For journals we have impact factors and other metrics, for authors we have the h-index, and for journal articles we have citations. But for a key element of the publishing process, the reviewers, we have nothing of the kind. Like their anonymity, a reviewer’s quality remains unmeasured, or becomes part of the editor’s nightmare or pleasant dream. (The F3-index, however, has recently been proposed to identify and recognize efficient reviewers.) Nevertheless, we do discuss, even if on a limited scale, how to build reviewers' capacity, offer them incentives, and attract new reviewers.
As far as peer-review quality is concerned, reviewers who do a poor job can fall into four groups. The first group is honestly committed to the process, but is engaged with other priorities and responsibilities, and thus often rushes to return the manuscript with generic comments. The second group also shows genuine commitment to peer review, but does not know how to do it properly; quite possibly, this second group does not even realize it has done a bad job. The third group accepts review assignments as a scholarly norm, but does not see any value in giving serious time and input to a manuscript. The fourth group creates a compromised peer-review system: when the expert pool of a discipline or a country is so small that the authors and reviewers are the same people playing either role on different occasions, and where scholarly criticism is often not seen positively for egotistical or cultural reasons, a closed, compromised system evolves and is sustained.
These four groups present different challenges, and it is difficult to work on each of them separately to improve peer-review quality. We need to see the challenge of reviewer quality from a different perspective.
Peer reviewers as co-learners
There is no doubt that good reviewers make manuscripts better. But I want to bring another angle to this. I argue that good manuscripts can make reviewers better, in the same way that bad manuscripts demotivate reviewers. This is because peer review is essentially a co-learning process, where both authors and reviewers work on the same document and can learn from their interactions, facilitated by the journal office, resulting in a better manuscript.
To improve peer-review quality, we therefore need to change the role and persona of reviewers: from assessor or advisor to ‘co-learner’. This would give peer reviewers a new outlook, a new purpose. But to make the co-learning work, reviewers need to receive ‘good’ manuscripts, prepared following scholarly publishing standards and norms, to work on with the authors.
Here research institutions and scholarly journals have critical roles to play. Institutions need to invest in improving the quality of research and the research communication skills of their researchers so that they can produce good manuscripts. In this way, the burden of the advisory or educator’s role that reviewers often play while reviewing manuscripts will be shared by the respective institutions.
Journals, on the other hand, need to efficiently sieve out weak manuscripts so that only those of a good standard reach the reviewers. This would reduce the reviewer’s role as an assessor and allow more time to be a co-learner. These may seem obvious responsibilities for these two stakeholders, but institutions and journals often struggle to fulfil them due to lack of resources, competing priorities, and inadequate understanding of the scholarly publishing system. This is particularly true in developing economies.
How to learn co-learning
Nevertheless, reviewers also bear some responsibility for becoming co-learners through the peer-review exercise. We learn to review by seeing how our professors and supervisors, and even our own reviewers, offer suggestions on our work. This approach has changed in recent years: there are now many online resources (e.g. AuthorAID) and training opportunities (e.g. Publons Academy) on peer review to get support from. But human involvement in learning is always useful.
Now, to co-learn with the authors, we need to open up our minds as reviewers. We need to be ready even to unlearn things we know. If research is a journey, a manuscript is a travel log. A review job helps us to learn from researchers’ journeys: what motivated them to start, why they took certain paths, what they saw, and why they interpreted their observations the way they did. We are reviewers because we have already made similar trips in the past. Our past journeys will help the authors to see things from a different perspective and, at the same time, will help us to make our future journeys better. As co-learning reviewers, we need to believe that we are not quality controllers appointed by journals, or a sieve between the authors and their readers, but genuine peers – learning together from the authors’ research journey and helping it reach the readers, who will take similar trips of their own.
When a paper is published in a journal, everybody involved gets their due credit by name: authors, editors, lab assistants, colleagues, and research funders. Everyone except the reviewers. Since the quality of our mainstream peer-review process depends essentially on the reviewers, the proposed co-learning approach would reduce the undue burden on them, make reviewing a less stressful job, and turn it into a more positively engaging learning process. This is the least we could do for these unnamed scholarly soldiers!
Dr. Haseeb Md. Irfanullah is a biologist-turned-development-practitioner with a keen interest in research and its communication. He is an independent consultant working on environment, climate change, and research system; a visiting research fellow of the Center for Sustainable Development (CSD) of the University of Liberal Arts Bangladesh; and a mentor on AuthorAID. Haseeb tweets as @hmirfanullah