Tuesday, December 24, 2013

The blinders of peer review

Does pre-publication peer review isolate a finding from the rest of the field while it's under review? Sure, and that's partly the point, but it can cause real inconvenience when two related papers from separate groups pass through peer review at the same time.

Earlier this year I published a bioinformatic analysis of the rhoptry kinases (ROPKs), a lineage-specific family of signaling proteins involved in the invasion mechanisms of Toxoplasma gondii, Eimeria tenella and related eukaryotic parasites. During this study I found four T. gondii proteins (and their orthologs in other species) that bear the hallmarks of ROPKs: a predicted signal peptide, a protein kinase domain more similar to other ROPKs than to any other known kinases, and mRNA expression patterns matching those of other ROPKs. I named these genes numerically, starting after the highest-numbered previously published ROPK (ROP46).
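In case it helps to see the screening logic spelled out, here is a minimal sketch in Python. It is purely illustrative: the gene IDs, field names and example records are hypothetical placeholders, and the real analysis involved signal-peptide prediction, sequence comparison against known kinase families and published expression data, not a hand-written filter like this.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        """Summary of the evidence gathered for one candidate gene (hypothetical fields)."""
        gene_id: str
        has_signal_peptide: bool    # predicted secretion signal
        best_kinase_family: str     # kinase family whose members the domain most resembles
        expression_like_ropk: bool  # mRNA expression pattern matches known ROPKs

    def looks_like_ropk(c: Candidate) -> bool:
        """A candidate qualifies only if all three ROPK hallmarks are present."""
        return (c.has_signal_peptide
                and c.best_kinase_family == "ROPK"
                and c.expression_like_ropk)

    # Hypothetical records, purely for illustration
    candidates = [
        Candidate("TGME49_A", True, "ROPK", True),   # all three hallmarks: qualifies
        Candidate("TGME49_B", True, "CMGC", True),   # kinase domain matches another family
        Candidate("TGME49_C", False, "ROPK", True),  # no predicted signal peptide
    ]

    novel_ropks = [c.gene_id for c in candidates if looks_like_ropk(c)]
    print(novel_ropks)  # ['TGME49_A']

The point of the conjunction is that any one line of evidence alone is weak; it's the combination of secretion signal, ROPK-like kinase domain and ROPK-like expression that makes the case.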

To informally reserve the names ahead of my own article's publication, I posted notes on the corresponding ToxoDB gene pages: ROP47, ROP48, ROP49 and ROP50. My professor and I also asked other T. gondii researchers whether it would be possible to confirm that these proteins localize to the rhoptry organelle, which would have solidified our argument. Without a peer-reviewed publication to point to, though, this seemed to be the most we could do to promote the new gene names.

In parallel, another well-regarded lab that specializes in T. gondii rhoptry proteins, including but not limited to ROPKs, investigated the localization and function of three other proteins whose mRNA expression patterns suggested an association with other rhoptry proteins. It's great work. However, their paper and ours passed through peer review at roughly the same time (earlier this year); both groups followed the same numerical naming scheme for rhoptry proteins, starting after ROP46; and unfortunately, we ended up assigning the names ROP47 and ROP48 to different T. gondii proteins.

Crud.

How could this confusing situation have been avoided? EuPathDB is widely used, but it's not the primary source for gene names and accessions, and a user-submitted comment alone has fairly limited visibility. I presented a poster at the 2012 Molecular Parasitology Meeting, where many of the active Toxo enthusiasts gather each year, but the choice of new gene names was a minor detail on the poster. Heck, I even had breakfast with the other group's PI, but we only talked about curious features of established rhoptry proteins, not the novel ROPs we were each about to propose.

The only way to really claim a gene name is with a peer-reviewed publication.


* * *

Until now I hadn't really grasped the importance of public preprint servers like arXiv, bioRxiv and PeerJ PrePrints, at least in the life sciences, where a good article can be published outside a glamor mag within a few months. (In physics and mathematics, peer review and publication typically take much longer.) It was hard enough to get people I knew to review my articles before submitting them to a journal; would anyone really leave useful comments out of the blue if I posted an unreviewed paper on a preprint server? Answer: Maybe, but there's more to preprints than that.

"Competitors" have their own projects, usually planned around their own grants. They could drop everything and copy your idea if they saw it. More likely, they will do the same thing they'll do when they see your final published paper, which is to take this new information into account as they pursue their own projects. You do want to make an impact on the field, don't you?

Pre-publication peer review is a well-established system for gathering detailed suggestions from uninvolved colleagues, a useful stick to force authors to improve their manuscripts, and sometimes a filter for junk. F1000 has an innovative process: submissions are published after only a cursory screening, then peer reviews are collected and, apparently, authors may revise the manuscript at their leisure. Once a manuscript has been reviewed, revised and approved, it receives a tag indicating that it has been properly peer-reviewed. PeerJ takes a more conservative approach, hosting a preprint server alongside but separate from its peer-reviewed articles. Is either of these the way forward?

F1000 is new on the scene, and it may be too soon to tell whether it will be a success. For one thing, will authors be motivated enough to correct their manuscripts promptly? PLoS One once fought a mighty battle against the perception that it wasn't peer-reviewed. That stigma came out of thin air, and it has been overcome; but will F1000 have to fight the same battle again, given that its articles really are in various states of peer review? I hope not, because many scientists could benefit from having a few holes poked in the wall of pre-publication peer review.