Insights from marking a peer review assignment
As part of my course on writing and publishing, my postgraduate students do a workshop on conducting peer review (here), attend a lecture (here), and are set a peer review assignment for their course mark. They are asked to apply the workshop approach to a specific unpublished manuscript that I select from bioRxiv. This gives them an opportunity to write, putting into practice some of the writing skills taught during the course, and lets them act on what they are taught about the spirit of peer review.
During the lecture on peer review, I emphasise that peer review should always be about helping authors to improve their manuscript, and how to avoid making judgements and instead place the emphasis on critiquing the manuscript – including positive comments where they are warranted.
It should be stressed that the manuscript did not fall into the specialist subject of any of the students. Some may have had more insight into the molecular methods used than others, but for most of them the manuscript fell firmly within the biological sciences yet outside anything familiar.
Before I start marking the assignments, I re-read the manuscript (having already read it once when I selected it) and carry out my own peer review exercise. This gives me familiarity with the manuscript and some ideas of the merits and problems therein. It also means that when I read the student reviews, I can assess them against my own viewpoints and comments.
The points that are well addressed, common mistakes and insights that follow are illustrative of general problems when dealing with peer review:
Well addressed
Many students spotted some very minor typos and errors made by the authors. For example, a single missing % sign was spotted by about a third of students, and those same students also noticed that one figure was incorrectly referred to in the text. Similarly, a poorly labelled figure and colours too close together to distinguish were regularly picked up.
Some students noticed that the authors failed to discuss any caveats relating to their study in the discussion. This had been a part of the coursework on writing a discussion, which is gratifying. However, no students noticed the failure of the authors to suggest new fruitful directions for their research.
Common mistakes
When summing up the manuscript contents, many of the students repeated a claim of novelty made by the authors even though this was false. This was interesting, as there was no real need to repeat the claim, but the prominent claim of novelty in the abstract and the introduction was clearly very attractive and was picked out by many students in the class for repetition. I couldn’t have expected the students to know whether or not the claim was correct (it wasn’t), but it was very interesting that they were prepared to repeat it even though it was unsubstantiated and they had no knowledge of their own to contribute. This may go partway to explaining why false claims of novelty are regularly published in many journals, and more commonly in high-ranking journals where editors are looking for novelty.
Although the manuscript clearly stated that it aimed to study one species, most students suggested that the study should be widened to increase the number of species studied. This mistake in review, expecting authors to go beyond the scope of their stated aims, was common in the class last year. The importance of the aims of a study was repeated continually throughout the writing course, yet students failed to recognise how the stated aims of a manuscript set the bounds of its contents. No students suggested that widening the experiment to further taxa, and which taxa to prioritise, could instead have been raised as a discussion point.
Students called for more citations in the introduction and discussion, but did so without making any reference to specific statements that lacked citations.
Insights
Many of the students asked for experiments to be repeated, often adding that more replicates were needed. This request came regardless of the results or effect size, but they regularly drew attention to a lack of significance and high variance. Many implied a Type II error, adding that nothing should be reported unless repeating the experiments produced significant results. This was also interesting, as a good part of the course stressed the importance of transparency and the realities of publication bias. Nonetheless, the idea of rejecting publication of data that did not show significance was very strong, suggesting that this is a deeply held belief, and not one that can be swayed by a course that stresses how this approach is bad science.
Students often requested that the field experiments in the study be repeated with more controls on fluctuating environmental factors, although they never stated which factors they wanted controlled. This lack of insight into the difficulty of field experimentation from a class of biology postgraduates was particularly disappointing, perhaps because few of the students have sufficient experience of conducting field experiments. Nonetheless, the lack of empathy for authors who had successfully carried out a field experiment and presented the results, limited though they may be, was surprising.
I had asked students to draw attention to the good in the manuscript as well as the negative aspects. However, those who wrote much of their review as praise did so without drawing attention to any specific points in the manuscript, much in the same way that other students were highly critical without being able to say why. I think that this “arm-waving” approach to peer review is indicative of someone who doesn’t really understand what they are reading but feels the need to write something (in this case because the assignment required it). Again, this felt familiar from many reviews I have received, where reviewers try to set the tone of their impression but then fail to find any specifics to back up that feeling.
Although these students were not experts on the contents of the manuscript, and are unlikely ever to be called upon to review something similar, many of the comments they made felt familiar and could even be considered generic features of a bad review. This was despite their being coached, immediately prior to the assignment, in how to conduct a good review.
In conclusion
I think postgraduate students conducting peer review is a great learning exercise, and it is also very insightful for me to read what they decide to pull out. Peer review is difficult. It appears to bring out prejudices even when we know that these should be suppressed. These insights should be useful for editors when assessing manuscripts on the basis of peer review.