Peer review is f***ed up – let’s fix it

Peer review is ostensibly one of the central pillars of modern science. A paper is not taken seriously by other scientists unless it is published in a “peer reviewed” journal. Jobs, grants and tenure are parceled out, in no small part, on the basis of lists of “peer reviewed” papers. The public has been trained to accept as established truth any science that has gone through the gauntlet of “peer review”. And any attempt to upend, reform or even tinker with it is regarded as an apostasy.

But the truth is that peer review as practiced in 21st-century biomedical research poisons science. It is conservative, cumbersome, capricious and intrusive. It slows down the communication of new ideas and discoveries, while failing to accomplish most of what it purports to do. And, worst of all, the mythical veneer of peer review has created the perception that a handful of journals stand as gatekeepers of success in science, ceding undue power to them, and thereby stifling innovation in scientific communication.

This has to stop. In honor of Open Access Week, I am going to lay out what is wrong with peer review, how its persistence in its current form harms science, scientists and the public, and how we can restructure peer review to everyone’s benefit. [These ideas have emerged from over a decade’s worth of conspiring on this topic with Pat Brown, as well as myriad discussions with Harold Varmus, David Lipman, Vitek Tracz, my brother Jonathan, Gerry Rubin, Sean Eddy, other board members and staff at PLoS, and various and sundry people at meeting bars].

Peer review and its problems

To understand what’s wrong with peer review, you have to understand at least the basics of how it works. When a scientist has a result they want to share with their colleagues, they write a paper and submit it to one of nearly 10,000 biomedical research journals.

The choice of journal is governed by many factors, but most scientists try to get their papers into the highest-profile journal that covers their field and will accept it. Authors with the highest aspirations for their work send it to one of the wide-circulation general science journals, Science and Nature, or to a handful of high-impact field-specific journals. In my field, molecular genetics/genomics, this would be Cell and PLoS Biology (a journal we started in 2003 to provide an open access alternative to these other three). In more clinical fields this would be something like the New England Journal of Medicine. [I want to make it clear that I am not endorsing these choices, just describing what people do].

When any of these top-tier journals receives a paper, it is evaluated by a professional editor (usually a Ph.D. scientist) who makes an initial judgment as to its suitability for their journal. They’re not trying to determine if the paper is technically sound – they are trying to figure out if the work described represents a sufficiently significant advance to warrant one of the coveted spots in their journal. If they think it might, they send the paper to 3 or 4 scientists – usually, but not always, lab heads – who are knowledgeable about the subject at hand, and ask them to read and comment on the manuscript.

The reviewers are asked to comment on several things:

  • The technical merits of the paper: are the methods sound, the experiments reproducible, the data believable, the proper controls included, the conclusions justified – that is, is it a valid work of science.
  • The presentation: is the writing understandable, are the figures clear, is relevant earlier work properly cited.
  • The importance: are the results and conclusions of the paper sufficiently important for the journal for which it is being reviewed.

For most journals, the reviewers address these questions in a freeform review, which they send to the editor, who weighs their various comments to arrive at a decision. Decisions come in essentially three flavors: outright acceptance (rare), outright rejection (common at top-tier journals), and rejection with the option to address the reviewers’ objections and resubmit. Often the editors and reviewers demand a series of additional experiments that might lead them to accept an otherwise unacceptable paper. Papers that are rejected have to go through the whole process again at another journal.

There are many things wrong with this process, but I want to focus on two here:

1) The process takes a really long time. In my experience, the first round of reviews rarely takes less than a month, and often takes a lot longer, with papers sitting on reviewers’ desks being the primary rate-limiting step. But even more time-consuming is what happens after the initial round of review, when papers have to be rewritten, often with new data collected and new analyses done. For typical papers from my lab, it takes 6 to 9 months from initial submission to publication.

The scientific enterprise is all about building on the results of others – but this can’t be done if the results of others are languishing in the hands of reviewers, or suffering through multiple rounds of peer review. There can be little doubt that this delay slows down scientific discovery and the introduction to the public of new ways to diagnose and treat disease [this is something Pat Brown and I have talked about trying to quantify, but I don't have anything yet].

Of course this might be worth it if this manifestation of peer review were an essential part of the scientific enterprise that somehow made the ultimate product better, in spite of – or even because of – the delays. But this leads to:

2) The system is not very good at what it purports to do. The values that people primarily ascribe to peer review are maintaining the integrity of the scientific literature by preventing the publication of flawed science; filtering the mass of papers to identify those one should read; and providing a system for evaluating the contribution of individual scientists for hiring, funding and promotion. But it doesn’t actually do any of these things effectively.

The kind of flawed science that people are most worried about is deceptive or fraudulent papers, especially those dealing with clinical topics. And while I am sure that some egregious papers are prevented from being published by peer review, the reality is that with 10,000 or so journals out there, most papers that are not obviously flawed will ultimately get published if the authors are sufficiently persistent. The peer-reviewed literature is filled with all manner of crappy papers – especially in more clinical fields. And even the supposedly more rigorous standards of the elite journals fail to prevent flawed papers from being published (witness the recent arsenic paper published by Science). So, while it might be a nice idea to imagine peer review as some kind of defender of scientific integrity – it isn’t.

And even if you believed that peer review could do this – several aspects of the current system make it more difficult. First, the focus on the importance of a paper in the publishing decision often deemphasizes technical issues. And, more importantly, the current system relies on three reviewers judging the technical merits of a paper under a fairly strict time constraint – conditions that are not ideally suited to recognize anything but the most obvious flaws. In my experience the most important technical flaws are uncovered after papers are published. And yet, because we have a system that places so much emphasis on where a paper is published, we have no effective way to annotate previously published papers that turn out to be wrong: once a Nature paper, always a Nature paper.

And as for classification, does anyone really think that assigning every paper to one journal, organized in a loose and chaotic hierarchy of topics and importance, is the best way to help people browse the literature? It made some sense when journals had to be printed and mailed – but with virtually all dissemination of the literature now done electronically, this system no longer makes any sense whatsoever. While some people still read journals cover to cover, most people now find papers by searching for them in PubMed, Google Scholar or the equivalent. While the classification into journals has some value, it certainly doesn’t justify the delays in publication that it currently requires.

I could go on about the problems with our current peer review system, but I’m 1,500 words into this thing and I want to stop kvetching about the problem and get to the solution.

The way forward: decoupling publication and assessment

Despite the impression I may have left in the previous section, I am not opposed to the entire concept of peer review. I think there is tremendous value generated when scientists read their colleagues’ papers, and I think science needs efficient and effective ways to capture and utilize this information. We could do this without the absurd time-wasting and frivolity of the current system, by decoupling publication from assessment.

The outlines of the system are simple. Papers are submitted to a journal and assigned to an editor, who makes an initial judgment of the suitability of the paper – rejecting things that manifestly do not belong in the scientific literature. If it passes this initial screen, the paper is sent out to peer reviewers (with the authors given the option of having their paper published immediately in a preliminary form).

Reviewers are given two separate tasks. First, to assess the technical validity of the paper, commenting on any areas where it falls short. Second, and completely independently, they are asked to judge the importance of the paper in several dimensions (methodological innovation, conceptual advance, significant discovery, etc.) and to determine who should be interested in the paper (all biologists; geneticists; Drosophila developmental biologists; etc.). This assessment of importance and audience would be recorded in a highly structured (and therefore searchable and computable) way – and would, in its simplest manifestation, amount to reviewers saying “this paper is good enough to have been published in Nature” or “this is a typical Genetics paper”.
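To make this concrete, here is a minimal sketch of what one such structured review record might look like – the field names, the 1–5 scale and the placeholder identifier are all hypothetical, just one way the structure could be encoded:

```python
from dataclasses import dataclass, field

@dataclass
class StructuredReview:
    """One reviewer's structured assessment of one paper (hypothetical schema)."""
    paper_id: str            # e.g. a DOI; placeholder value used below
    technically_sound: bool  # the only thing the publication decision hinges on
    technical_comments: str  # freeform notes on any shortcomings
    importance: dict[str, int] = field(default_factory=dict)  # dimension -> score, say 1-5
    audience: list[str] = field(default_factory=list)         # who should read it

# A reviewer saying, in effect, "this paper is good enough for Nature":
review = StructuredReview(
    paper_id="10.9999/placeholder.0001",  # hypothetical identifier
    technically_sound=True,
    technical_comments="Controls in Fig. 2 adequate; statistics appropriate.",
    importance={"conceptual advance": 5, "significant discovery": 4},
    audience=["all biologists"],
)
```

Because every review would share the same fields, assessments become searchable and computable across the whole literature – one could query, say, for every paper scored 5 on methodological innovation by reviewers who flagged it for geneticists.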

The reviews would go back to the editor (whose main job would be to resolve any disagreement among the reviewers about the technical merits of the paper, and perhaps lead a discussion of its importance), who would pass the decision whether or not to publish (based here entirely on the technical merits) on to the authors, along with the reviewers’ structured assessment of importance and any comments they may have. If the technical review were positive, and the authors were happy with the assessment of importance and audience, they could have the paper published immediately. Or they could choose to modify it according to the reviewers’ comments and seek a different verdict.

This system – pieces of which are already implemented in PLoS One and its mimics – has several immediate and obvious advantages.

First, it would be much faster. Most papers would go through only a single round of review, after which they would be published. No ping-ponging from one journal to another. And this dramatic increase in speed of publication would not come at the price of assessment – after all, the main result of the existing peer review system, the journal in which a paper is published, is really just an assessment of the likely importance and audience for a paper – which is exactly the judgment reviewers would make in the new system.

Second, by replacing the current journal hierarchy with a structured classification of research areas and levels of interest, this new system would undermine the generally poisonous “winner take all” attitude associated with publication in Science, Nature and their ilk. This new system for encoding the likely impact of a paper at the time of publication could easily replace the existing system (journal titles).

Third, by devaluing assessment made at the time of publication, this new system might facilitate the development of a robust system of post-publication peer review, in which individuals or groups would submit their own assessments of papers at any point after they were published. These assessments could be reviewed by an editor or not, depending on what type of validation readers and other users of these assessments want. One could imagine editorial boards that choose editors with good judgment and select their own readers to assess papers in the field, the results of which would bear the board’s imprimatur.

Finally, this system would be extremely easy to create. We already have journals (PLoS One is the biggest) that make publication decisions purely on technical merits. We need to put some more thought into exactly what the structured review form would look like, what types of questions it would ask, and how we would record and transmit it. But once we do this, such a system would be relatively easy to build. We are moving towards such a system at PLoS One and PLoS Currents, and I’m optimistic that it will be built at PLoS. And with your ideas and support, we can – with remarkably little pain – fix peer review.

[Update] This is not just a problem with elite journals

In the comments DrugMonkey suggests that this problem is restricted to “Glamour Mags” like Science and Nature. While they are particularly bad practitioners of the dark art, virtually all existing journals impose a significance standard on their submissions and end up rejecting a large number of technically sound papers because they are not deemed by the reviewers and editors to be important enough for their journals. All of the statistics that I’ve seen show that most of the mainstream society journals reject a majority of the papers submitted to them – most, I would bet, because they do not meet the journal’s standards for significance. In my experience as an author of almost a hundred papers, and a reviewer and editor for many more, reviewers view determining the significance of a paper as their primary job, and often prioritize this over assessing its technical merits. One of the funny (i.e. tragic) things I’ve noticed is that reviewers don’t actually modify their behavior very much when they review for different journals – they have one way of reviewing papers, and they do basically the same thing for every journal. Indeed, I’ve had the absurd experience of getting reviews from PLoS One – a journal that explicitly tells reviewers only to assess technical merits – saying that the paper was technically sound, but did not rise to the significance of PLoS One.




38 Comments

  1. DrugMonkey
    Posted October 28, 2011 at 7:37 am | Permalink

    I don’t disagree with your analysis, however you are well off the rails in the targeting. Your complaint is with *GlamourMag Science*. Not with peer review per se. You should be clear about that. In fact your very conflation of the GlamourGame with “peer review” puts you straight in front of your own gunsights.

  2. Michael Eisen
    Posted October 28, 2011 at 7:45 am | Permalink

The same thing happens at all levels – virtually every journal I’ve submitted to or reviewed for has some standard of significance – so much so that the idea of peer review in most reviewers’ minds inherently elevates the question of significance over rigor. I’ve even seen things reviewed at PLoS One – a journal whose explicit policy is to accept all technically sound papers – where the reviewers say “This is a perfectly fine paper, but it’s not a PLoS One paper”.

  3. DrugMonkey
    Posted October 28, 2011 at 8:20 am | Permalink

    Three observations-

    IME, editors for real journals are way more active in shutting down these kinds of reviewer observations.

    Once you are in the lower IF bands, there are so many “lateral” options that your chances of getting it in somewhere of similar rank are pretty good.

The costs of dropping down from the 4-6 to the 2-3 IF range are not that great. So even if you have to “dump” it, the relative impact on the scientists is lower. Makes it easier to blow off requests for a whole R01’s worth of data.

  4. Ian Holmes
    Posted October 28, 2011 at 8:28 am | Permalink

    This is OK as far as it goes, but IMO it doesn’t go nearly far enough. (You clearly did not spend enough time in meeting bars with me!) The best part is the idea of publishing reviews as structured metadata. It’s also good to encourage reviewers to separate technical correctness from impact, as PLoS One does. However, the rest is (if I understand it correctly) essentially just the same-old same-old “3 reviewer” system, but allowing an option for pre-review publication. What we urgently need is a transparent, open market in post-publication peer reviews that temporally decouples publication and review. IOW, scientific publication needs to be more like web publishing and less like print publishing. This is not going to happen with an incremental modification of 20th century peer review models. It needs to be more like web 2.0 (slashdot, digg, reddit, etc) but with firm incentives for providing reviews. You say that your proposal “might facilitate the development of a robust system of post-publication peer review” but as far as I can see, it does not specifically address this point at all. Physics has already mostly moved to a system of (a) pre-publication, (b) informal review by multiple reviewers, and eventually (c) journal publication, with the initial pre-publication typically happening on arXiv. This drift is gradually happening in biomed too. The missing piece, however, is the “honor system” under which scientists should review 3 papers for every one they publish (with suitable corrections for multi-author papers). Currently there is no “price signaling” mechanism for verifying such voluntary efforts. Without this, we cannot move beyond the current archaic system where an editor has a deadline to trawl his/her social network for 3 willing reviewers. One reason papers hang in the review process for a long time is because reviewers can be hard to find. But if an author has contributed 3 reviews into the system, they deserve to get 3 reviews back out. It would be eminently possible for publishers to enforce this with a public points system, but none do (to my knowledge). My beef is that this is simply an incremental modification of journal peer review. I suppose such incremental changes may be easier to implement, but IMO the future looks more like this: http://hypothes.is/
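    [To make Ian’s “public points system” concrete: below is a minimal sketch of a review-credit ledger, assuming his 3-reviews-per-paper ratio and an even split among co-authors as the multi-author correction. The class, names and split rule are hypothetical illustrations – no publisher implements this.]

```python
from collections import defaultdict

# Hypothetical sketch of the "public points system" described above:
# each published paper debits its authors 3 review credits (split evenly
# among co-authors, a crude multi-author correction); each completed
# review earns 1 credit. Names and the split rule are illustrative only.

REVIEWS_OWED_PER_PAPER = 3

class ReviewLedger:
    def __init__(self) -> None:
        self.balance = defaultdict(float)  # scientist -> net review credits

    def record_review(self, reviewer: str) -> None:
        """A completed review earns one credit."""
        self.balance[reviewer] += 1.0

    def record_publication(self, authors: list[str]) -> None:
        """A published paper costs 3 credits, shared among its co-authors."""
        debit = REVIEWS_OWED_PER_PAPER / len(authors)
        for author in authors:
            self.balance[author] -= debit

    def in_good_standing(self, scientist: str) -> bool:
        """Authors with a non-negative balance 'deserve to get reviews back out'."""
        return self.balance[scientist] >= 0

ledger = ReviewLedger()
ledger.record_review("alice")
ledger.record_review("alice")
ledger.record_publication(["alice", "bob"])  # each co-author owes 1.5 credits
print(ledger.in_good_standing("alice"))      # True  (balance = +0.5)
print(ledger.in_good_standing("bob"))        # False (balance = -1.5)
```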

  5. Posted October 28, 2011 at 8:29 am | Permalink

    This system assumes that there is in fact a robust mechanism for post-publication review. Ironically, although scientists spend much of their time in front of computer screens, there is an unwillingness for people to go on the record and critique published papers. On the other hand, articles about politics, technology, and every other topic under the sun can start a raucous discussion in minutes on various online forums, some of which include contributions by the same scientists who are silent online about their area of interest!
    My feeling is that building this online community of scientific discussion would greatly improve the quality of science. Once this community is in place and accepted by scientists as being reliable and important – the relevant arbiter – the monopoly the journals have will evaporate. Fortunately, building this community would be independent from building a journal. It can co-exist with the existing system until it is ready to replace it.
    The key issue to deal with, however, is how people can feel comfortable honestly and openly commenting on others’ work without concern about retribution – intentional or not – in grant panels etc.

  6. Michael Eisen
    Posted October 28, 2011 at 8:36 am | Permalink

    @Ian – I should have been clearer (I was writing under the influence of the World Series). The system I proposed (which is what Pat and I have been trying to get implemented at PLoS for a decade) is purely transitional. The idea is to undermine the link between publication and assessment first – removing the major obstacle to building a robust and continuous system of peer assessment – which is the unreasonable weight placed on the title of the journal in which something initially appears.

    The system I would like to ultimately see is one in which papers are published as soon as the authors are ready, and they accumulate assessment over time. That is, we should do what physicists have been doing for 25 years.

  7. Posted October 28, 2011 at 8:49 am | Permalink

    Great – then we are in accord – though I still think we should spend more time in conference bars (if only as a general principle). From the point of view of “what can PLoS do to facilitate the transition to continuous assessment?” it does make sense. I think the fundamental transition is that PLoS et al need to shift from being “publishers of papers” to “publishers of reviews”.

    One further suggestion would be to explicitly “gamify” the process of providing reviews. Reviewers could opt to have their review stats published (“XXX has contributed N reviews”), there could be extra points for being the first reviewer on a paper, readers could up-vote reviews (via “Like” or “Thumbs-up” buttons), and so on.

  8. Posted October 28, 2011 at 9:40 am | Permalink

    Just a quick +1 for Ian’s idea about providing credit for reviews. There is currently no formal mechanism, and very little in the way of informal means, for getting credit (money, tenure, etc) for being a good reviewer, or even for being willing to review. It’s actually a waste of time, career-wise, to take it seriously and put real effort into the job (which might explain why so many seem to do the opposite, but that’s a different rant).

    It’s an important part of the job, and the career structure should acknowledge that.

  9. Posted October 28, 2011 at 9:47 am | Permalink

    @DrugMonkey

    Moving laterally from one journal to another can still delay publication by months. The paper has to go through the initial peer review at each journal, and generally it seems editors at lower IF journals have a harder time recruiting reviewers and/or putting pressure on them to deliver their reviews in a timely fashion.

  10. Posted October 28, 2011 at 10:27 am | Permalink

    PLoS Biology and PLoS Genetics are as egregiously bad at this as Science, Cell, Nature, and the like: “We have made the editorial decision that the findings presented in the submitted manuscript do not rise to the level of significant scientific advance we consider necessary for publication…”

  11. Abdallah Al-Hakim
    Posted October 28, 2011 at 10:32 am | Permalink

    In my opinion, it might be important to have paid professional scientists who review the papers and submit a response in a pre-determined time frame (maybe two weeks). One of the problems of the current system is the volunteer nature of the review process, which in many cases leads to papers sitting on a principal investigator’s desk for many weeks before they are even looked at. However, if a paid professional service were provided, then the authors of a submitted paper could demand to have reviews back in a pre-agreed time frame. Also, this option could potentially employ post-doctoral researchers on a full-time or part-time basis and would provide much-needed supplementary income for many of them.

  12. Posted October 28, 2011 at 11:29 am | Permalink

    One difficulty: how can readers sort through the zillions of papers published in their field? One advantage of the current system is that, even though there is a lot of crap published in top-tier journals, you can still get a good idea of the directions of the field and read a good chunk of the best papers published. If there is only open access with no hierarchy of the papers, you might get totally lost (and you might also lose some of the serendipity you get when reading generalist journals like Nature or Science). arXiv is interesting from that standpoint: there are a lot of crackpots submitting papers there, and I am not that sure that the signal-to-noise ratio is any better.

  13. JJ
    Posted October 28, 2011 at 11:56 am | Permalink

    The whole problem resides in your friends and enemies. If you can like reviews, papers, etc., then your friends will do it for your papers. If it is anonymous, then your enemies will trash your papers. Look at F1000: most of the reviews are made by friends of the authors. It is rarely independent.

    Credit for reviews would be great, especially for young researchers, but an editor needs to be there to guarantee independent reviewers.

  14. Michael Eisen
    Posted October 28, 2011 at 12:13 pm | Permalink

    @ComradePhysioProf: I agree completely. I think PLoS Genetics does an outstanding job, but they still reject far more papers than they accept. Many of these get referred directly to PLoS One, with the reviews passed on, but it’s still an inefficient way of doing things that ultimately has to change. I still see a role for editorial boards to act both to coordinate reviewers in a field and to select papers they feel are worthy of special recognition and are of high interest in their field. But that decision should be unlinked from the primary decision of whether to publish.

  15. Jim Woodgett
    Posted October 28, 2011 at 1:00 pm | Permalink

    There are simply too many papers published. I am not sure that PLoS One et al. are helping in this respect. Clearly, with 10,000 mostly ignored journals, many scientists are content with just adding to their CV. There needs to be more than technical soundness. Perhaps limiting the number of papers that one can submit per year would help? Whatever the solution(s), there needs to be better evaluation of significance in order to measure research effectiveness.

  16. Posted October 28, 2011 at 1:43 pm | Permalink

    It would be interesting to know if those complaining about how long it takes to get their papers reviewed are significantly faster at turning around reviews themselves. Just curious…

  17. Posted October 28, 2011 at 2:29 pm | Permalink

    “Clearly, with 10,000 mostly ignored journals…Perhaps limiting the number of papers that one can submit per year…there needs to be better evaluation of significance in order to measure research effectiveness.”

    You really are not serious are you? This is a troll job right?

    • Michael Eisen
      Posted October 28, 2011 at 2:41 pm | Permalink

      Sounds like a European grants administrator.

  18. Posted October 28, 2011 at 3:14 pm | Permalink

    Or a Republican presidential candidate

  19. Posted October 28, 2011 at 7:05 pm | Permalink

    @Ian Holmes “It needs to be more like web 2.0 (slashdot, digg, reddit, etc) but with firm incentives for providing reviews.”

    Having had the experience of my research being discussed on slashdot, I can confidently say I have no desire to see this come about. And frankly, very little should aspire to being more like reddit.

  20. Posted October 28, 2011 at 7:11 pm | Permalink

    Some empirical evidence that the publication process slows down science: papers that are posted on arXiv and are later published in a journal get 20% as many citations before being published as they do in the first 2 years after publication.
    http://arxiv.org/vc/arxiv/papers/0906/0906.5418v1.pdf

    Also, I think it already doesn’t matter so much where papers are published but biologists haven’t changed their behavior yet. I speculate that tough times for funding make it seem risky to change.

    • Posted July 27, 2013 at 9:40 pm | Permalink

      Another pernicious aspect of some universities is the pressure to submit to journals which are on the SCI ‘list’ or perceived as ‘prestigious’, even though the subject matter of the paper is far more closely aligned with a different, but ‘lower ranked’, journal. In my field, only three of about 22 relevant journals are on Thomson Reuters lists. As a consequence, these three journals get hammered by [pick one] (a) a tsunami of poorly-written inconsequential papers, or (b) a raft of papers which don’t actually fit the published scope of those journals. The lead time to publication has now stretched out to three years. The academic bureaucracy of my institution is deaf to my counterarguments, insisting that we must publish in these ‘A*’ journals to the exclusion of others [Australia still has such a list, dating from 2010], or be damned into eternity. The tail is wagging the dog. I’m not a dog, and I ignore it, which is why I had twelve articles published in good journals last year, and already six this year, whereas my poorly-advised younger colleagues are still revisiting work they did four years ago with no certainty of publication even then. *I* know their results are both worthwhile and publishable, but instead of doing new stuff they are recycling the old and worshipping at the altar of a false and capricious god.
      Radhika Nagpal at Harvard recently wrote of the stupid and capricious advice she received about tenure-track [which she ignored to her benefit], and it is equally relevant to the ‘where shall I publish’ calumny that I hear on a daily basis.

  21. Guy
    Posted October 28, 2011 at 8:29 pm | Permalink

    A major problem, as you and Tom Roud mentioned, is that there are too many “crappy papers”; a brief look at PLoS One would tell you that it did not solve this problem.
    The main problems I see are bad statistics and misleading graphs.
    Here is an idea: an independent professional company, not involved with any specific journal, would evaluate and rank the papers, and journals would bid for the papers.

  22. Ian Holmes
    Posted October 29, 2011 at 5:34 pm | Permalink

    @Confounding: I mean the type of software engine (Slash, Scoop etc) needs to be similar, rather than mimicking the specific tone of various pop cultural blogs or sites, and the StackExchange family (StackOverflow etc) is probably a better example anyway.

  23. Posted October 29, 2011 at 6:07 pm | Permalink

    I absolutely hate the “not significant for journal X” response. I work in biological sciences primarily concerned with recording natural history information (e.g. taxonomy, biogeography, etc.). Many times, a short note on distribution, morphology, or behavior makes significant progress in the overall understanding of a species or group. Publishing these scientific notes is an important task, despite their length and narrow subject matter, because ideographic sciences such as taxonomy slowly accumulate information over time which is used as the basis for hypotheses in the experimental sciences.

    Recently, I submitted one of these short notes to a journal that publishes such manuscripts. I did not expect prestige or recognition for the piece; in fact, I expected it would be a huge hassle, but it was worth doing anyway. Much to my surprise, and despite the explicit purview of the journal, I was told by one of the reviewers that it was “barely worthy of publication”. This was very discouraging! I had put time and effort into this, and submitted it to a journal which publishes such things, and it was seen as unworthy.

    This concerns me. If this sort of attitude is prevalent (which I suspect it is), there is a great deal of basic, important information on species that is never published and therefore never available to researchers. If natural history works on slow accumulation, peer review is cutting off that process due to overzealous rejection. So I agree with your assessment.

  24. Posted October 30, 2011 at 1:12 pm | Permalink

    Thank you for addressing the comment from Woodgett above… thereby proving that a scientific community CAN self-regulate and critique openly – if not professionally ;-) But bluntly, “there are too many papers…” is an opinion that can hardly be taken seriously.

    Can anyone answer me this: Why is there no web 2.0 functionality on PubMed? Why can’t we carry on a discussion within the PubMed ‘community’ that allows for the post-publication review to be dynamic?

    I have no problem selling Phillies tickets on StubHub or purchasing golf supplies on eBay b/c the community governs itself.

    Some may argue that science is too important to be gamified – I would argue that it is too important not to be…

    Thanks,

    Brian

  25. Hilmar Lapp
    Posted October 31, 2011 at 11:44 am | Permalink

    Michael – an experience I’ve had numerous times is that, aside from vetting the science, peer review and editors oftentimes make papers substantially better, simply in their manner of presentation. Perhaps this is a result of the art of writing being increasingly lost. But regardless of cause, I bet that you share the experience that many manuscripts at the time of first submission are far from the succinct, well-reasoned exhibit of a scientific undertaking that they should be in order to receive the uptake and reuse they possibly deserve. I’m fully with you on decoupling peer assessment from publication, but I think that to be truly effective this requires a publishing system and culture in which publications are no longer static records once “accepted for publication”, but can continue to change afterwards as a result of and in response to vetting and reader comments. What if a journal were a wiki, with reviewer assessments on the “Talk” pages associated with every article, and all historical versions archived and identifiable?

  26. Posted October 31, 2011 at 2:28 pm | Permalink

    With over a million scientific publications from thousands of scientific journals world-wide, it is hard to envision a handful of journals being considered the “gatekeepers of success in science”. Many of the least informative and shortest reviews that I have received over the last 25 years have been from “premier” journals.

    I remember receiving a rejection from the journal Cell back in 1991 for a submitted manuscript, produced in collaboration with Jonathan Cooper’s laboratory, in which we demonstrated that the 42 kDa proteins that undergo enhanced tyrosine phosphorylation in growth factor-treated mammalian cells and in frog oocytes undergoing meiotic maturation were enzymatically and immunologically similar and corresponded to a MAP kinase. We also provided the first full-length amino acid sequence in any species of what later came to be called ERK2. The primary basis of the rejection from Cell was that MAP kinases had not been shown to be important for anything. Interestingly, about 4 months later, Cell published a manuscript from Melanie Cobb’s laboratory that featured amino acid sequences for ERK2 and ERK3 as well as an incomplete sequence for ERK1, with no functional data.

    I am sure that most established and successful researchers have endured similar experiences many times in their careers. The present peer-review system is highly flawed and places an immense burden on the scientific community. If the scientific content of a manuscript were the true motivating criterion for publication, then not only should the reviewers be anonymous to the authors, but the authors of the manuscript should be blinded to the reviewers as well. In any event, manuscripts are often reviewed by those who have neither the time nor the specialized knowledge to properly provide a critique. The concept that a scientific paper should be pre-reviewed by a full-time journal editor is even more disturbing. Individuals in these positions often have much less actual research experience and are probably less informed about advancements in specialized fields. As it stands now, peer review cannot easily identify fraudulent data. It can flag sloppy methodology and mis- or over-interpretation of results.

    In the end, some form of peer review prior to publication is highly advisable, in particular to avoid ultimate embarrassment to the authors and the journal in which the work appears. However, the most effective peer review happens post-publication, when other experts in the field are able to critique the work in a constructive fashion. Few journals, PLoS being an exception, provide such an opportunity for direct feedback from the scientific community.

  27. Yotam Drier
    Posted November 4, 2011 at 7:04 am | Permalink

    I generally agree, but it’s unfair to consider rewriting and adding data and analyses a waste of time. It usually does make the paper better. The big time waste is waiting for the reviewers, going from one journal to another, and getting irrelevant, wrong reviews. Therefore I think it would be best if the suggested web 2.0 publication site posted every paper that passes an editorial-level quick filter, and then any registered reviewer (other than the authors) could assess significance and post reviews (anonymously?). The authors might accept the reviews and submit a corrected version, or ignore them, risking a low assessment. All the versions of the paper would be archived and accessible. That way there would be no delays in publication, and minimal waiting for reviewers (as anyone can review whenever they want, and papers are not assigned to currently unavailable reviewers on whose desks they sit – and with the credit for reviewers suggested above, many will have incentives to review). Also, to promote high-quality reviews, I suggest displaying some statistics on the quality of reviews, say the total number of reviews and how many of them were ignored by the authors (on a homepage everyone will have on the site, with links to all publications, and other cool features). On the other hand, this would still retain the current contribution of reviews to paper quality that Hilmar was referring to, and the benefits of significance/relevant-audience annotation for the readers.
    It would be helpful in the long run if whoever runs this endeavor were not a private company with private interests, but an organization owned and led by the scientific community it serves (say, via elected management and a board of governors), so that instead of suggesting ideas for how to improve things in blogs, such ideas could be brought forward in some sort of annual (virtual?) assembly and be voted on, allowing it to evolve according to relevant interests.

  28. Posted November 14, 2011 at 10:07 pm | Permalink

    After having this open in a browser tab for a few weeks I finally got a chance to go through all 2,500 words. One advantage of the delay is getting to see some prescient comments.

    I think this sort of post-publication review system is easily done (technologically) but I remain unconvinced most researchers really want change. I could make it – not only could I make it, I could eliminate charges for publication and even pay people to publish their studies – but people do not really want a different system (yet). As you once wrote about the difficulty of starting PLoS, people love something after it is successful, but getting people to buck an existing system, in this case to create a Peer Review 2.0, is tough because their livelihoods are tied up in the old system.

  29. Adam Glickman
    Posted November 17, 2011 at 1:12 pm | Permalink

    I think this problem is largely the result of an antiquated system trying to cope with too much volume. When you had a few thousand scientists and could literally know everyone in your discipline, then you could be fairly confident of getting a careful and thoughtful review of your work. Partly because of reciprocity, you wouldn’t want to do a bad job because the odds were fairly high that this person would be reviewing your work at some point.

    Now with so much information it is simply impossible to keep up, and a new system is required. While PLoS is the new kid on the block at the moment, its better functionality means it will eventually overtake those more established journals if they don’t learn to adapt.

    On a somewhat related note I thought you might find this project interesting since it is trying to bring crowd-sourced peer review to the internet at large:
    http://hypothes.is/index.html

  30. Posted January 14, 2012 at 7:24 am | Permalink

    Very interesting article; I have been thinking on the same lines for a while now and have wondered about the idea of decoupling reviewing from publication.

    One other major problem I see with the current peer-review process is that good peer reviews take time. I estimate a good peer review takes me at least a morning or afternoon’s worth of work to justify sending my thoughts back to the authors. Maybe I am slow, but I don’t think so. Thoughtful peer reviews generally lead you to be asked by the journal again – I’ve had two or three instances of submitting a review to a journal only for a fresh manuscript to arrive in my inbox within a matter of minutes. This chain-reviewing policy takes advantage of what I suspect is a subset of academics who see thoughtful peer reviewing as part and parcel of their academic duties; however, it is draining on one’s own research time (this week I have been asked by 3 separate journals to do 3 peer reviews, on top of two outstanding – that’s half of my academic week gone). It’s also demoralising, especially if other peer reviewers of a given manuscript have written drab, one-paragraph (or shorter) reviews which sometimes clearly indicate a lack of attention (of course, in rare instances one-paragraph responses may be justified if the paper is clearly acceptable or majorly flawed, but these instances are rare). These reviewers are not asked to review again, placing yet more burden on an ever-diminishing pool of reviewers who become yet more demoralised.

    The system clearly needs changing. One way to speed up the process, along with the changes you outline, is to place greater emphasis on educating trainee academics (perhaps at undergraduate level, but certainly at Masters level and beyond) about the importance of conducting and contributing to the peer review process in an open, honest and timely way. Academics shouldn’t need extra incentives to conduct peer reviews; the incentives of reciprocity and ultimately better science should suffice, but this message does need instilling in academics from early on.

  31. ALY
    Posted February 15, 2012 at 10:31 am | Permalink

    Why don’t the other PLoS journals (PLoS Biology, PLoS Genetics, etc.) also review papers only on the basis of technical merit, like PLoS One does?

  32. David Friedman
    Posted September 4, 2012 at 9:06 am | Permalink

    I think it is also important that whatever system is adopted should exert a constructive and generally supportive influence on those who are new to the profession and to the experience of publishing. This would, among other things, in the end improve the quality of submitted papers and encourage clarity and comprehensibility to the reader.

    BTW, the problem with peer review is not unique to the biosciences; I have had similar experiences in the past with manuscripts submitted to the IEEE Transactions.

  33. Joseph Ting
    Posted September 10, 2012 at 6:10 pm | Permalink

    I recommend that you read “Peer review a way of silencing troublemakers” (Brendan O’Neill, Inquirer, The Weekend Australian, Sept 8-9 2012) for a jaundiced diatribe about peer review’s conclusion-altering biases re climate change.

    The jaundiced view of peer review is amenable to post-publication amendment, however: if the journal agrees to publish a contentious manuscript, it generates lively debate.

    However, you still need to get it past the gatekeeper.

    Brendan O’Neill overplays the susceptibility of peer review to political agendas, personal opinion, exaggerated findings, methodological weaknesses and the mutual return of favours. I am heartened by quality assurance research demonstrating the robustness of biomedical journal peer review, with only one error detected every 2000 hours of expert appraisal and attempts to replicate study findings. The rare error that is published, having escaped the scrutiny of peer reviewers, could be dealt with in the journal’s correspondence pages; O’Neill could mention Letters to the Editor pages as a forum for lively multilateral debate. What could be done better is for reader scepticism, contrarian opinion and author counter-responses to be automatically published and appended to the original study, to maintain balance and avoid accusations of bias. Furthermore, the decision to air provocative dialogue between opposing camps should not be solely at a journal editor’s discretion.

  34. Stefano Solarino
    Posted May 23, 2013 at 5:25 am | Permalink

    Hi there. I recently wrote a short paper about the limitations that publishing in specific journals, in the search for citations and high impact, may introduce.
    The paper is tailored to my field of science (Geophysics) but I am quite sure that the topics discussed within the article apply to many other fields.
    The paper can be downloaded at the address http://www.annalsofgeophysics.eu/index.php/annals/article/view/5518/6048

  35. Posted July 13, 2013 at 12:47 am | Permalink

    I couldn’t agree more that peer review, in its present form, is broken. The only thing I don’t like about the proposals is the emphasis on speed. It doesn’t matter a damn if it takes a year for a paper to be published. It would give one more time to think about the contents.

    The real problem arises from the intense pressure to publish, and that comes from senior scientists (or ex-scientists) themselves. One of the more gratifying experiences of my life was when I saw a scientist rejected as a candidate for the Royal Society (the UK National Academy of Science) because they had published over 400 papers. It was thought, quite rightly in my opinion, that to accumulate so many papers the author must have been putting their name on papers to which they had made little contribution. I have come across cases where authors had not even read papers on which their name appears.

    Another bad effect of publishing mania is that there are so many papers being submitted that it’s impossible to find enough people with the ability to review them properly.

    It’s hard to find a solution to this publishing mania, but I’d advocate an upper limit of one or two full experimental papers a year. I suspect that this would lead to an increase in quality and it would lead to a decrease in the pressure on reviewers.

  36. Researcher
    Posted October 6, 2013 at 9:04 am | Permalink

    Who needs journal publications after all? Scientists? No.

    Today, they can just put their paper on their homepage, and send a note to their scientific forum so that their colleagues will see it (and possibly link to it), and the search engines will find it (especially as more people link to it).
    Journals are not helping the real scientists at all; they are standing in their way.

    The real problem lies somewhere else.

    The real problem is the “put my name on it” attitude of people in academia. Compare a solution provided by someone from industry with one from academia. In 90% of the former cases you will not see the name of the person who did the job. In 100% of the latter, the name will be there.

    People in academia are NOT working for science and research.

    THEY ARE WORKING FOR THEIR NAME.

    That is the core of the whole problem.

    If institutions in academia started doing research, instead of providing a platform for fame-seeking individuals, this problem would never have existed.

    I have not seen a single institution of the sort. Have you?

