Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Leslie Vosshall and I have written the following white paper as a prelude to the upcoming ASAP Bio meeting in February aimed at promoting pre-print use in biomedicine. We would greatly value any comments, questions or concerns you have about the piece or what we are proposing.


Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Michael Eisen1,2 and Leslie B. Vosshall 3,4

1 Department of Molecular and Cell Biology and 2 Howard Hughes Medical Institute, University of California, Berkeley, CA. 3 Laboratory of Neurogenetics and Behavior and 4 Howard Hughes Medical Institute, The Rockefeller University, New York, NY.

mbeisen@berkeley.edu; leslie@rockefeller.edu

Scientific papers are the primary tangible and lasting output of a scientist. They are how we communicate our discoveries, and how we are evaluated for hiring, promotion, and prizes. The current system by which scientific papers are published predates the internet by several hundred years, and has changed little over the centuries.

We believe that this system no longer serves the needs of scientists.

  1. It is slow. Manuscripts spend an average of nine months in peer review prior to publication, and reviewers increasingly demand more data and more experiments to endorse a paper for publication. These delays massively slow the dissemination of scientific knowledge.
  2. It is expensive. We spend $10 billion a year on science and medical journal publishing, over $6,000 per article, and increasingly these costs are coming directly from research grants.
  3. It is arbitrary. The current system of peer review is flawed. Excellent papers are rejected, and flawed papers are accepted. Despite this, journal name continues to be used as a proxy for the quality of the paper.
  4. It is inaccessible. Even with the significant efforts of the open-access publishing movement, the vast majority of scientific literature is not accessible without a subscription.

In view of these problems, we strongly support the goal of ASAP Bio to accelerate the online availability of biomedical research manuscripts. If all biomedical researchers posted copies of their papers when they were ready to share them, these four major pathologies in science publishing would be cured.

The goal of ASAP Bio to get funders and other stakeholders to endorse the adoption of pre-prints is laudable. But without fundamental reform in the way that peer review is carried out, the push for pre-prints will not succeed. An important additional goal for the meeting must therefore be for funders to endorse alternative mechanisms for carrying out peer review. Such mechanisms would operate outside of the traditional journal-based system and focus on assessing the quality, audience, and impact of work published exclusively as “pre-prints”. If structured properly, we anticipate that a new system of pre-print publishing coupled with post-publication peer review will replace traditional scientific publishing much as online user-driven reviews (Amazon, Yelp, Trip Advisor, etc.) have replaced publisher-driven metrics to assess quality (Consumer Reports, Zagat, Fodor’s, etc.).

In this white paper we explain why the adoption of pre-prints and peer review reform are inseparable, outline possible alternative peer review systems, and suggest concrete steps that research funders can take to leverage changes in peer review to successfully promote the adoption of pre-prints.

Pre-prints and journal-based peer review cannot coexist

The essay by Ron Vale that led to the ASAP Bio meeting is premised on the idea that we should use pre-prints to augment the existing, journal-based system for peer review. In Vale’s model, biomedical researchers would post papers on pre-print servers and then submit them to traditional journals, which would review them as they do today, and ultimately publish those works they deem suitable for their journal.

There are many reasons why such a system would be undesirable – it would leave intact a journal system that is inefficient, ineffective, inaccessible, and expensive. But more proximally, there is simply no way for such a symbiosis between pre-prints and the existing journal system to work.

Pre-print servers for biomedicine, such as bioRxiv, run by the well-respected Cold Spring Harbor Press, now offer biomedical researchers the option to publish their papers immediately, at minimal cost. Yet biologists have been reluctant to make use of this opportunity because they have no incentive to do so, and in many cases have incentives not to. If we as a biomedical community want to promote the universal adoption of pre-prints, we have to do more than pay lip service to the potential of pre-prints; we have to change the incentives that drive publishing decisions. And this means changing peer review.

Why are pre-prints and peer review linked? Scientists publish for two reasons: to communicate their work to their colleagues, and to get credit for it in hiring, promotion and funding. If publishing behavior were primarily driven by a desire to communicate, biomedical scientists would leap at the opportunity to post pre-prints, which make their work available to the widest possible audience at the earliest possible time at virtually no cost. That they do not underscores the reality that, for most biomedical researchers, decisions about how they publish are driven almost entirely by the impact of these decisions on their careers.

Pre-prints will not be embraced by biomedical scientists until we stop treating them as “pre” anything, which suggests that a better “real” version is yet to come. Instead, pre-prints need to be accepted as formally published works. This can only happen if we first create and embrace systems to evaluate the quality and impact of, and appropriate audience for, these already published works.

But even if we are wrong, and pre-prints become the norm, we would still need to create an alternative to journal-based peer review. If all, or even most, papers are available for free online, it is all but certain that libraries would begin to cut subscriptions, and traditional journal publishing, which still relies almost exclusively on revenue from subscriptions, would no longer be economically viable.

Thus a belief in the importance of pre-print use in biomedicine requires the creation of an alternative system for assessing papers. We therefore suggest that the most important act for funders, universities, and other stakeholders is not just to endorse the use of pre-prints in biomedicine, but to endorse the development and use of a viable alternative to journal titles in the assessment of the quality, impact, and audience of works published exclusively as “pre-prints”.

Peer review for the Internet Age

The current journal-based peer review system attempts to assure the quality of published works; help readers find articles of import and interest to them; and assign value to individual works and the researchers who created them. Post-publication peer review of works initially published as pre-prints can not only replicate these services, but provide them faster, cheaper, and more effectively.

The primary justification for carrying out peer review prior to publication is that this prevents flawed works from seeing the light of day. Inviting a panel of two or three experts to assess the methods, reasoning, and presentation of the science in a paper undoubtedly leads to many flaws being identified and corrected.

But any practicing scientist can easily point to deeply flawed papers that have made it through peer review in their field, even in supposedly high-profile journals. Yet even when flaws are identified, it rarely matters. In a world where journal title is the accepted currency of quality, a deeply flawed Science or Nature paper is still a Science or Nature paper.

Prepublication review was developed and optimized for printed journals, where space had to be rationed to balance the expensive acts of printing and shipping a journal. But today it is absurd to rely solely on the opinions of two or three reviewers, who may or may not be the best qualified to assess a paper, who often did not want to read the paper in the first place, who are acting under intense time pressure, and who are casting judgment at a fixed point in time, to be the sole arbiters of the validity and value of a work. Post-publication peer review of pre-prints is scientific peer review optimized for the Internet Age.

Beginning to experiment with systems for post-publication review now will hasten its development and acceptance, and is the quickest path to the universal posting of pre-prints. In the spirit of experimentation, we propose a possible system below.

A system for post-publication peer review

First, authors would publish un-reviewed papers on pre-print servers that screen them to remove spam and papers that fail to meet technical and ethical specifications, before making them freely available online. At this point peer review begins, proceeding along two parallel tracks.

Track 1: Organized review in which groups, such as scientific societies or self-assembling sets of researchers, representing fields or areas of interest arrange for the review of papers they believe to be relevant to researchers in their field. They could either directly solicit reviewers or invite members of their group to submit reviews, and would publish the results of these reviews in a standardized format. These groups would be evaluated by a coalition of funding agencies, libraries, universities, and other parties according to a set of commonly agreed upon standards, akin to the screening that is done for traditional journals at PubMed.

Track 2: Individually submitted reviews from anyone who has read the paper. These reviews would use the same format as organized reviews, and would, like organized reviews, become part of the permanent record of the paper. Ideally, we want everyone who reads a paper carefully to offer their view of its validity, audience, and impact. To ensure that the system is not corrupted, individually submitted reviews would be screened for appropriateness, conflicts of interest, and other problems, and there would be mechanisms to adjudicate complaints about submitted reviews.

Authors would have the ability at any time to respond to reviews and to submit revised versions of their manuscript.
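
To make the discussion concrete, here is a minimal sketch, in Python, of what a standardized review record for the two tracks might contain. Every field name below is an assumption for the purpose of illustration, not a proposed standard.

```python
# A hypothetical review record for the two-track system sketched above.
# All field names are illustrative assumptions, not a proposed standard.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class Track(Enum):
    ORGANIZED = 1   # Track 1: review arranged by a sanctioned group
    INDIVIDUAL = 2  # Track 2: review submitted by any reader

@dataclass
class Review:
    paper_id: str                  # e.g. a pre-print DOI
    paper_version: int             # reviews attach to a specific revision
    track: Track
    summary: str                   # free-text assessment
    validity: int                  # 1-5 rating of technical soundness
    audience: list[str]            # communities the work is relevant to
    impact: int                    # 1-5 anticipated importance
    reviewer_orcid: Optional[str]  # None if the review is anonymous
    vouching_group: Optional[str]  # Track 1 group vouching for an anonymous reviewer
    submitted: datetime = field(default_factory=datetime.utcnow)

@dataclass
class AuthorResponse:
    paper_id: str
    review_ref: str                    # which review is being answered
    text: str
    new_version: Optional[int] = None  # set when a revision accompanies the response
```

Because both tracks share one format, reviews from scientific societies and from individual readers can sit side by side in the paper’s permanent record, alongside the authors’ responses and revisions.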

Such a system has many immediate advantages over our current system of pre-publication peer review. The amount of scrutiny a paper receives will scale with the level of interest in the paper. If a paper is read by thousands of people, many more than the three reviewers chosen by a journal are in a position to weigh in on its validity, audience, and importance. Instead of only evaluating papers at a single fixed point in time, the process of peer review would continue for the useful lifespan of the paper.

What about concerns about anonymity for reviewers? We believe that peer review works best when it is completely open and reviewers are identified. This both provides a disincentive to various forms of abuse, and allows readers to put the review in perspective. We also recognize that there are many scientists who would not feel comfortable expressing their honest opinions without the protection of anonymity. We therefore propose that reviews be allowed to remain anonymous provided that one of the groups defined in Track 1 above vouches for their lack of conflict and appropriate expertise. This strikes the right balance between providing anonymity to reviewers and protecting authors from anonymous attacks.
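
As a sketch of how the vouching rule might be enforced (the group names are invented for illustration, and `Review` refers to the record sketched above):

```python
# Hypothetical enforcement of the anonymity rule proposed above: an
# anonymous review is accepted only if a sanctioned Track 1 group
# vouches for the reviewer's expertise and lack of conflicts.
SANCTIONED_GROUPS = {"genetics-society-reviews", "neuro-circuits-collective"}

def accept_review(review: "Review") -> bool:
    if review.reviewer_orcid is not None:
        return True  # signed reviews enter the record once screened
    # anonymous reviews must carry a voucher from a sanctioned group
    return review.vouching_group in SANCTIONED_GROUPS
```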

What about the concern that flawed papers will be published, or be subject to misuse and misinterpretation while they are being reviewed? We do not consider this to be a serious problem. The people in the best position to make use of immediate access to published papers – practicing scientists in the field of the paper – are in the best position to judge the validity of the work themselves and to share their impressions with others. Readers who want external assessment of the quality of a work can wait until it comes in, and are thus no worse off than they are in the current system. If implemented properly, such a system would get the best of both worlds – rapid access for those who want and need it, and quality control over time for a wider audience.

Assessing quality and audience without journal names

The primary reason the traditional journal-based peer review system persists despite its anachronistic nature is that the title of the journal in which a scientific paper appears reflects the reviewers’ assessment of the appropriate audience for the paper and their valuation of its contributions to science. There is obviously value in having people who read papers judge their potential audience and impact, and there are many circumstances where having an external assessment of a scientist’s work can be of use. But there is no reason we have to use journal titles to convey this information.

It would be relatively simple to give reviewers of published pre-prints a set of tools to specify the most appropriate audience for the paper, to anticipate their expected level of interest in the work, and to gauge the impact of the work. We can also take advantage of various automated methods to suggest papers to readers, and for such readers to rate the quality of papers by a set of useful metrics. Systems that use the Internet to harness collective expertise have fundamentally changed nearly every other area of human society – it’s time for them to do the same for science.
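
For example, a server could roll the accumulated ratings up into a simple summary for readers. The sketch below shows one way to aggregate the `Review` records defined earlier; the choice of statistics is an illustrative assumption, and real metrics would need community agreement.

```python
# One possible aggregation of post-publication reviews into the kind of
# audience/impact summary described above (illustrative only).
from collections import Counter
from statistics import mean

def summarize(reviews):
    """Summarize all Review records accumulated for one paper."""
    if not reviews:
        return None
    audiences = Counter(a for r in reviews for a in r.audience)
    return {
        "n_reviews": len(reviews),
        "mean_validity": round(mean(r.validity for r in reviews), 2),
        "mean_impact": round(mean(r.impact for r in reviews), 2),
        # the most frequently named communities approximate the audience
        "top_audiences": [a for a, _ in audiences.most_common(3)],
    }
```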

Actions

A commitment to promoting pre-prints in biomedicine requires a commitment to promoting a new system for reviewing works published initially as un-reviewed pre-prints. Such systems are practical and a dramatic improvement over the current system. We call on funders and other stakeholders to endorse the universal posting of pre-prints and post-publication peer review as inseparable steps that would dramatically improve the way scientists communicate their ideas and discoveries. We recognize that such a system requires standards, and propose that a major outcome of the ASAP Bio meeting be the creation of an “International Peer Review Standards Organization” to work with funders and other stakeholders to establish these criteria and to work through many of the important issues, and then serve as a sanctioning body for groups of reviewers who wish to participate in this system. We are prepared to take the lead in assembling an international group of leading scientists to launch such an organization.

Comments

  1. Ben Peter
    Posted January 21, 2016 at 9:39 am | Permalink

    I agree with the main points that i) the current system is self-propagating because people perceive that high-impact-factor journals help in getting hired, ii) hardly anyone reads journals from front to back, and tools like Google Scholar recommendations are often much more useful for finding relevant literature, and iii) any qualified scientist is able to judge the merits of a paper in her field, regardless of the journal.

    What I am unclear about is whether there is a role for junior researchers (like me) in the change, besides posting preprints. Given the current incentives, where support for the traditional model comes from funding agencies and hiring institutions, the proposed ‘top-down’ approach initiated by leading scientists seems sensible, but is there something students/postdocs can contribute?

    • Posted January 21, 2016 at 11:32 am | Permalink

      Hi Ben – another incentive for using preprints would be assurance that they represent a legitimate form of scientific communication. I think a lot of people hesitate to use them for fear of smoking out competitors who might rush to get their work into a traditional journal first (and therefore receive credit/citations). To counteract this, the whole community (including junior scientists like you & me) could commit to evaluating and citing relevant preprints where appropriate in our own manuscripts.

      • Posted January 22, 2016 at 7:14 am | Permalink

        It is better than that! If a mechanism of open peer review is added on top of pre-prints as proposed in the blog post, it will allow junior researchers to freely and individually take “by force” their legitimate share of influence and visibility, through reviewing. They would no longer be only their PI’s coauthors, and their success would depend more on valuable individual actions.
        For instance, for PhD students or post-docs looking for their next position, a skilled and insightful PPPR of an article by the lab they are interested in would be the best way to get in touch.
        More generally, opening the process of science is an amazing opportunity for the majority of scientists whose only fate today is to produce articles and watch the show. They get an individuality and a voice.

  2. Posted January 21, 2016 at 10:11 am | Permalink

    It’s a great piece and I support this proposal. If there is one omission, it is the lack of a reference to the physics-focused predecessor of bioRxiv, the arXiv. While very successful in attracting pre-prints, it doesn’t seem to have impacted drastically traditional publishing in physics and how hiring and career progress decisions are made (I am not a physicist, so I may be unaware of minor shifts). So it is very relevant to your proposal: it supports the idea that without peer-reviewing, an archive by itself is not enough. It also shows that some groups of scientists, albeit not biologists, at least currently, are willing to disseminate research without immediate tangible career and funding feedback.

    • Michael Manhart
      Posted January 22, 2016 at 1:21 am | Permalink

      I agree that arXiv is very relevant to this discussion, since it is the best evidence we have of how preprints can be widely embraced by a whole field of science, and what the consequences of that are.

      However, as a physicist I would say it is untrue that arXiv has not “impacted drastically traditional publishing in physics and how hiring and career progress decisions are made.” It depends somewhat on the subfield of physics — journals still matter a lot in some — but especially in particle physics, posting a paper on arXiv really *is* the “primary act of publication” that Michael Eisen has previously stated should be our goal (http://www.michaeleisen.org/blog/?p=1733). For particle physicists, once it’s on arXiv, it’s official and citable. (It’s also the submission that defines being first for the purposes of scooping.) Most papers are still published in journals later, but that’s mostly a formality, and occasionally some authors never get around to it anyway because there’s little point. Indeed, arXiv’s copy is largely considered the version of record — even if a paper is formally published in a journal later, citations to it usually still include the arXiv ID, and most people will download the PDF from there rather than the journal website.

  3. Bill Hooker
    Posted January 21, 2016 at 10:45 am | Permalink

    This is me being kind of an asshole but I don’t have time to be nice. So take anything useful and disregard the rest. Tl;dr you are overthinking this.

    “Such mechanisms would operate outside of the traditional journal-based system and focus on assessing the quality, audience, and impact of work published exclusively as “pre-prints”. If structured properly, we anticipate that a new system of pre-print publishing coupled with post-publication peer review will replace traditional scientific publishing much as online user-driven reviews (Amazon, Yelp, Trip Advisor, etc.) have replaced publisher-driven metrics to assess quality (Consumer Reports, Zagat, Fodor’s, etc.).”

    1. Why assess impact in this system, when it is perhaps the single worst, most pernicious, least scientific aspect of the existing pre-pub system?? It’s like throwing out the baby and keeping the bathwater.

    2. Bad analogy alert: Consumer Reports et al. are alive and well, and almost all online user reviews are useless. Of the three you mentioned, only Amazon reviews are (sometimes) useful, and then only when there is either one very good reviewer, or a great many reviews (hundreds or thousands) so that the average rating has some meaning.

    —–

    “In Vale’s model, biomedical researchers would post papers on pre-print servers and then submit them to traditional journals, which would review them as they do today, and ultimately publish those works they deem suitable for their journal.

    There are many reasons why such a system would be undesirable – it would leave intact a journal system that is inefficient, ineffective, inaccessible, and expensive. But more proximally, there is simply no way for such a symbiosis between pre-prints and the existing journal system to work.”

    I know you know that Vale’s model is largely the way physics and mathematics work today. To make your case, you need to explain why something that works for those fields won’t work for another (viz. for biomed).

    —–

    “authors would publish un-reviewed papers on pre-print servers that screen them to remove spam and papers that fail to meet technical and ethical specifications”

    How would that screening work, and who would pay for it? Do we have a way to estimate the cost?

    —–

    “Track 1: Organized review in which groups, such as scientific societies or self-assembling sets of researchers”

    Never gonna happen (see cats, herding of), and if it did it would be the Glam bullshit all over again.

    —–

    “To ensure that the system is not corrupted, individually submitted reviews would be screened for appropriateness, conflicts of interest, and other problems, and there would be mechanisms to adjudicate complaints about submitted reviews.”

    As above, how will this screening work, what will it cost, and who will pay?

    —–

    “We therefore propose that reviews be allowed to remain anonymous provided that one of the groups defined in Track 1 above vouch for their lack of conflict and appropriate expertise.”

    A joke, yes? Where do you think Dr BSD, the exact same vindictive asshole that AnonPostDoc is trying not to get fucked over by, is going to be putting in his efforts? That’s right, Track 1, the “prestige” track.

    —–

    “It would be relatively simple to give reviewers of published pre-prints a set of tools to […] gauge the impact of the work.”

    Utter bollocks, see above.

    —–

    “We recognize that such a system requires standards, and propose that a major outcome of the ASAP Bio meeting be the creation of an “International Peer Review Standards Organization” to work with funders and other stakeholders to establish these criteria and to work through many of the important issues, and then serve as a sanctioning body for groups of reviewers who wish to participate in this system.”

    No, no, no please no. No more fucking committees and no more fucking standards outside of software. This is a complete waste of time. If peer review needs standards, it needs them now; since it has limped along without them so far, let’s just not.

    • Posted January 22, 2016 at 8:27 am | Permalink

      On the question of “who pays?” If one could wave a magic wand and have the transformation happen all at once, the research $$ in overhead that currently goes to pay for subscription fees could be re-routed. All of these things (digital paper hosting, organization of review etc) are already being paid for, it’s just that the costs are hidden and there are probably massive invisible inefficiencies (from a systems perspective). Also, if reviews are in the open and given prestige, there’s an incentive for scientists to engage in it rather than it being a duty that authors do because they want to stay in the good graces of editors.

      Of course, my University library can’t just stop all subscription fees and funnel that towards a pre-print server – there’s huge institutional inertia. This is why funding agencies need to take the lead.

  4. Ian McLachlan
    Posted January 21, 2016 at 10:46 am | Permalink

    “It would be relatively simple to give reviewers of published pre-prints a set of tools to specify the most appropriate audience for the paper, to anticipate their expected level of interest in the work, and to gauge the impact of the work. We can also take advantage of various automated methods to suggest papers to readers, and for such readers to rate the quality of papers by a set of useful metrics.”

    I support pre-prints and I am intrigued by the solutions PPPR provides, but I am also very skeptical about crowd effects on PPPR. You don’t suggest any checks against social forces influencing these “metrics”. Since popularity (or anti-popularity) would be a major driver for review, it seems like the ultimate result would be to reproduce the prestige journal system, but instead of editors as gatekeepers, you turn the whole thing into Reddit… which would be, in my opinion, worse than the status quo. Can you expand on this point?

    • Posted January 22, 2016 at 8:30 am | Permalink

      There are areas of reddit (like /r/askscience) that are wonderful, and the best content really does get to the top. This takes active and effective moderation, and it takes community engagement, but it can definitely be done. It strikes me that the scientific community should be able to manage this.

  5. Posted January 21, 2016 at 10:59 am | Permalink

    I am strongly supportive of preprints, but I’d be reluctant to see funders endorse a specific process of pre-print posting and post-publication review at this time.

    Peer review provides a distributed trust network. This provides value: we expect that items that have passed through peer review have received some level of scrutiny. The current peer review system was developed over time in an era where dissemination was expensive and content could be challenging to discover. I agree that it is not well suited to the current era, when these characteristics no longer describe the current state of publishing.

    Preprints provide a new means of dissemination. I agree with the benefits that you’ve raised, and I’m strongly supportive of preprints.

    That said, I think that we can do better than creating a mandatory preprint policy based on the current status quo. Right now, preprints still feel bolted on to the traditional peer review process. I am worried that mandates at this point will stifle the innovations that could enable an improved method for tracking the type of distributed trust that peer review provides.

    New tools like hypothes.is allow for detailed annotation of documents. Combining annotation with investigator IDs provided by ORCID or other groups may provide an opportunity to innovate in the area of peer review. If your proposal comes to pass and funders mandate either a Track 1 or Track 2 approach to peer review, we may lock ourselves into choices that preclude the development of a Track 3 approach that the burgeoning array of publication practices is beginning to enable.

  6. Posted January 21, 2016 at 1:19 pm | Permalink

    Some thoughts on this stuff (some in reaction to the paper and some pre-existing ideas):

    Track 1 sounds a lot like the “overlay journal” concept. I think those are a great idea. You might want to specifically reference the recent emergence of such journals (well, there’s only one that I know of at the moment).

    Whenever I think about this stuff, ORCID always seems like it should play a key role in any system like this. It’s a pre-existing, transparent way to keep track of authors, reviewers, editors, and so on. It may even be possible to automatically flag some COI issues by looking at co-authorship and institutional relationships. It’s probably a little early to commit to a specific technology at this point, but some standardization is necessary somewhere down the line.
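
    As a sketch of how such flagging might work (the co-authorship lookup is hypothetical; ORCID itself does not expose this exact query):

    ```python
    # Hypothetical conflict-of-interest check: flag a reviewer whose ORCID
    # shares recent co-authorship with any of the paper's authors.
    def has_coauthor_conflict(reviewer_orcid, author_orcids, coauthors_of):
        """coauthors_of maps an ORCID to the set of ORCIDs of recent
        co-authors (e.g. built from a citation database)."""
        return bool(coauthors_of(reviewer_orcid) & set(author_orcids))
    ```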

    For cases when anonymity is desirable, it could be possible to keep a reviewer’s ORCID secret from the authors but known to some sort of administrative group (the “editors” of an overlay journal, for instance). If an anonymous review is determined to be abusive, there are people who can look into whether there is a pattern of behavior from a person.

    When it comes to “mass review” of papers, I kind of hate to suggest it but content-aggregation sites like Reddit might provide a model that we could use. Lots of aspects of them are toxic, but a lot of that has to do with anonymity on the Internet. A site with crowd-sourced voting tied to the public IDs of researchers (e.g. ORCID) would probably be a little more civil.

  7. Posted January 21, 2016 at 3:37 pm | Permalink

    Thanks Mike. I think something along these lines is inevitable (eventually).

    Your post reminded me of something Tal Yarkoni wrote a few years back that might be worth checking out: http://www.talyarkoni.org/blog/2011/08/23/building-better-platforms-for-evaluating-science-a-request-for-feedback/

    Abstract:

    Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (a) open and transparent access to accumulated evaluation data, (b) personalized and highly customizable performance metrics, and (c) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting towards such models as soon as possible.

  8. jsrsa
    Posted January 21, 2016 at 5:13 pm | Permalink

    I’m curious to see how Drs. Eisen and Vosshall address the issues of community management and creating the infrastructure for the system proposed here. That being said, the authors haven’t yet had time to address concerns about implementation and some of the finer points of the paper.

    I try not to be in the business of proposing problems, though, and some comments are hard to understand outside of a real-world context. Below, then, is a hypothetical story that outlines my idea of how the two systems of publication would likely synthesize. I’d be especially curious about anyone’s feedback on which system seems to dominate:

    Barbara is a post-doc who is ready to publish her initial paper for peer review. After logging on to the appropriate website she is able to upload her paper. Immediately several other researchers in her field receive email alerts from the website that Barbara has just published. Wanting to increase their personal reputation on the website, they log on over the next week or two and begin writing constructive reviews and asking questions.

    As they write and send their respective peer reviews or questions to Barbara, she thanks them by adding responses and closing the cases surrounding each question. In turn, the reviewers’ reputation scores on the website are increased. As Barbara adds in her references, other members of her scientific discipline take note as their reputation scores on the site increase, and they are drawn to see who’s citing them, some even choosing to join in the review process.

    When she’s ready, Barbara decides to submit a final draft to one of the smaller, community run channels on the website. After a short time she begins to get alerts as others start citing her paper.

    Amongst all this activity, one of the curated channels on the website run by Nature asks to feature her paper. Granting permission, she gains substantial exposure and her reputation score increases while the views help generate ad revenue for the channel.

    This situation isn’t truly hypothetical. The bulk of the story’s points are actually executed on by the ResearchGate website (https://www.researchgate.net/home). Additions were pulled from areas like Youtube channel structure, Reddit content management, and pre-printing on a free platform such as bioRxiv (http://biorxiv.org/).

    Using these existing ideas and technologies could then serve as a potential stepping stone towards a more inclusive research publication atmosphere. I’d be interested to hear what others have to say on this idea and possibly Dr. Eisen’s thoughts on such a system since the parts to get started are readily available and still seem to be relatively in line with the goals of making a system that is more accessible.

  9. Posted January 21, 2016 at 5:38 pm | Permalink

    Thanks everyone for your comments, questions and concerns. I’m swamped with teaching and other responsibilities for the next few days, but I promise we will address these when I get my head back above water next week. Would also welcome broader thoughts people have on how to make preprints take off in biomedicine and be more useful.

  10. Posted January 21, 2016 at 11:54 pm | Permalink

    Very solid proposal. Three quick thoughts.

    1. You are proposing a publishing platform, with many similarities to F1000Research. The biggest difference is the cost; F1000Research charges $1,000-$2,000 per article. Naturally, that’s lethal for adoption. The low-cost nature of what you propose is a must and is a big difference.

    2. As I read it, you seem to be striking the perfect balance with respect to anonymity of the peer reviewers. You are encouraging open non-anonymous review, while acknowledging that there must be a moderated mechanism for those who request the anonymous option. That’s another important distinction from F1000Research.

    3. I disagree that bioRxiv adoption is slow. Considering its resources, I’d say it has been growing very well. Monthly submissions were 38 in December 2013, 91 in December 2014, 227 in December 2015. Keep in mind that unlike PLOS, there has been no huge grant to CSHL’s bioRxiv. There isn’t a huge team. There is no concerted outreach, as far as I can tell. It’s entirely “organic” growth, and given that, the growth is solid. To put this in perspective, despite star power, heavy outreach, and massive venture capital, PeerJ growth for the same months was 13, 27, 68 articles. With a simple ~$1m grant for outreach/promotion, preprints could “take off”.

    • Posted January 24, 2016 at 9:11 am | Permalink

      Just to correct the fees quoted above, F1000Research actually charges $150-1000; we charge a $1000 surcharge in the rare case of extremely long articles – over 8000 words – because they are much more expensive to process and it is typically very hard to get researchers to agree to peer review something so long. A lot of our submissions are actually $150 or $500, and in fact, our charges did not turn out to be “lethal for adoption”.
      The F1000Research process of course is much more than just posting a preprint, as it also includes the post-publication formal peer review process, similar in many ways to that described above. The charges cover the initial checks prior to publication; dealing with the deposition, visibility and usability of the relevant data (a requirement when publishing in F1000Research); typesetting the article to provide the XML/PDF required for PubMed etc; and actually one of the most expensive parts of the process is managing peer review. Our charges are considerably cheaper than those of most other major (gold) open access publishers, and money to cover these costs is included in most researchers’ grants (in biomedicine at least).
      Competing Interests: I am Managing Director of F1000, which publishes F1000Research.

  11. Dylan
    Posted January 22, 2016 at 1:48 am | Permalink

    My attempts to upload articles to preprint servers, even articles that have been rejected etc., have been opposed by the other co-authors who don’t see it as “proper”.

  12. Posted January 22, 2016 at 6:35 am | Permalink

    Part 1/2

    Dear Michael,
    Please note that there is a novel platform that fully embraces your approach to fixing scientific publishing: http://www.sjscience.org. You might want to dig into it further.

    Here are a few comments.

    « But today it is absurd to rely solely on the opinions of two or three reviewers, who may or may not be the best qualified to assess a paper, who often did not want to read the paper in the first place, who are acting under intense time pressure, and who are casting judgment at a fixed point in time, to be the sole arbiters of the validity and value of a work. »
    You are right, and I suggest helping to spread the term « peer trial » to describe this process. Peer trial certainly holds intellectual value and can be sometimes fair, but it cannot hold scientific value – when it is not as accountable as the article it is meant to validate. The term « peer review » should be reserved to a process having scientific value, i.e. open and verifiable, and which certainly cannot emerge in the private context of a journal as you rightfully say. This semantic notion may become important when defending these ideas, communication-wise.

    I wholeheartedly agree with your description of the current situation and I will only discuss your proposed actions.

    1) A system for post-publication review

    a) I dislike track 1. I think a pre-determined set of people deciding for all what is worth reviewing and what is not, and imposing its standards top-down, always leads to conservative science. I do not think any scientist could represent another. There is a complex underlying question: who is an expert? Who is your peer? Science is always changing, and the ones who have achieved something in the past – whom we regard as experts – ironically, are likely to be the ones that have a vested interest in opposing tomorrow’s ideas, so that the paradigm underlying their reputation stands still. I think experts dynamically define themselves: first we should let scientists talk and debate, then we can see who understands better, and who is more relevant.

  13. Michael Hendricks
    Posted January 22, 2016 at 7:51 am | Permalink

    I think curation is a much bigger deal than technical or “impact” reviewing, which obviously is largely noise + bias in the current system anyway. Most people are wary of letting go of journals, I think, because of the sense that they provide some manageable subset of the literature that has been hand picked. Journals are the payola-tainted DJs of the past. With the internet, no one gives a shit about radio. What matters is getting curated: your label might matter a little, but having Pitchfork put you on the “Best New Music” list matters a lot.

    So what we need are many, many energetic groups of curators…and we have them! A potentially good model for Track 1-type curation of pre-prints is law reviews. Participation in law review by students is perceived as a great CV item, and different review groups organized by interest, program, etc., would produce a huge and diverse population of energetic reviewers.

    Student review groups would curate pre-prints in their interest area, release them ad hoc or as monthly “issues” from their group alongside commentary, questions, etc. Commentaries could be non-anonymous or attributed to the group as a whole, and could include differences of opinion.

    Financial support for these groups’ activities (and the necessary IT infrastructure, which is minimal) can be provided by the newly wealthy university libraries that have cancelled their subscriptions.

    Yes, this will inevitably reinvent some prestige bullshit, perhaps–I’m sure the Harvard Law Review is a more prestigious place to have an essay than others. There is no social activity in which things like this won’t happen. But another lesson from music and the internet: Pitchfork was an online fanzine 20 years ago, now it is far more influential among people who care about music than Rolling Stone. Democratizing scientific curation would mean the old establishment can’t lazily maintain the status quo through financial or prestige-based influence–someone better will knock you off the mountain.

    And opening this up so that essentially any group can make themselves a “journal” from a curatorial standpoint democratizes the process hugely. I would believe far more in trainees as gatekeepers than I do in professional editors, particularly among the hopelessly tangled pedigrees of elite institutions, labs, and journals. Finally, I don’t think the benefits of reducing the for-profit journal publishers to rubble can be overstated.

  14. Evan Heller
    Posted January 23, 2016 at 12:24 pm | Permalink

    Excellent! But, I would like fuller consideration of the merits of the current system.

    1. Has the system by which scientific papers are published really not changed in several hundred years? As science has advanced, so has the amount one can accomplish in a short period of time, and so has our concept of what constitutes a “study.” The demands of publication and the system of peer review have evolved along with this (or were even invented because of it– isn’t peer review a fairly modern invention?). One could argue that you’re advocating a return to the “Wild West” of publication 100 years ago, better suited to an era of much lower scientific output: less scrutiny before publication, probably less complete/more speculative work, etc.

    2. Along these lines, is it really your experience that peer review is “arbitrary?” An editor’s decision to send a paper out for review, sure, that can be arbitrary. But the reviews themselves? Maybe they ask for more work and more time doing experiments than you’re willing to do, but, if you look at your paper a few months after you’ve finally published it, they usually do make your work, if not better, at least more complete. Whether your study is “complete” is something your scientific peers should be weighing in on before you release it to the world! I would say 2/3 of reviewers really care about the work, invest a significant amount of time reading and thinking about it, write many pages of feedback for you to consider, and, if you ran into them at a conference or the like, would probably openly tell you that they were your reviewer! So, I’d avoid the potential conceit that your work as you see it is “done,” “ready for publication,” or “good enough to be out there.”

    3. The real question (as you say!) is whether such reviewer input is better before or after publication. Is it better for more complete works to be published initially, or for them to be revised over time? Without pre-publication review, will there be incentive for groups to revise their work? Won’t the process of responding to the community take as much time as the current system of publication? The demands of the scientific community will not have changed, after all! And if you choose not to go back and revise your “pre-print” multiple times in response to community comments, won’t the quality of the scientific endeavor have suffered?

    4. Arguably, practicing scientists don’t use the journal title as a surrogate of quality, but as a surrogate of general interest, excitement, and relevance. (I would say reviewers do the same thing– if we’re going to spend the time reviewing a paper and writing up comments, we say whatever we’re going to say whether it’s for Nature or a “more specialized” journal.) Is this a bad thing? You’re a busy guy, you don’t necessarily have time to go through all the literature in all the journals, or all the pre-prints in the database. So what might you do? You think about credibility, who’s out there doing good work, what people at what universities (whose process of selecting faculty you feel is valuable)– has Leslie Vosshall published anything recently? If it’s OK to use personal reputation as a surrogate, or in an online system, some metric of interest, why is it not OK to use a journal’s reputation for publishing exciting things?

    5. Editors may be annoying, but they add some value. When I’ve written a paper, it’s always started off 2-3 times longer than a journal will allow. Being forced into some constraints has invariably made the work better (or at least more concise). Editorial constraints arguably also need to come from professionals, not only scientists.

  15. Posted January 25, 2016 at 9:50 am | Permalink

    Thank you very much for taking the lead on this important issue. I agree that simply implementing pre-prints alongside the current publication system will only fix a few issues (publications speed, open access) and leave some of the most corrosive problems untouched. Widespread post-publication peer review would likely create powerful incentives to publish work that turns out to be correct, rather than putting emphasis on who came first and what is fashionable.

    Your proposed two-tier system to regulate PPPR is an interesting idea, but I have some concerns regarding feasibility and implementation. You currently give little detail about how such a system would look, in particular with respect to ‘track 1’. Painting a more detailed picture would surely help the discussion. How large would these groups be? Who would be represented and what precisely would it be that they are doing? What are the incentives to participate in track 1 and do a good job?

    At present you are proposing to set up an elaborate system that involves several layers and oversight bodies. This would probably take a long time, if it happened at all. Furthermore, you propose that such bodies be launched by ‘groups of leading scientists’. Leading scientists are what they are because they have benefited from the current system and would potentially lose influence if things were done differently. As a result the appetite for change among this group is small. If those in power generally wanted change, it would have arrived by now. While I think it is very important that progressive eminent scientists take the lead in bringing about changes to the culture of science, thereby adding visibility and credibility to the process, early career scientists are likely to be the main driver of change. Therefore, any initiative should be aiming to engage all scientists, whether young or old, famous or not.

    To avoid change being postponed until the current generation of leading scientists dies out, I would like to see more immediate actions to implement PPPR. These could be done in parallel with setting up the more formalised structures that you propose. Here are two suggestions for actions in the short and medium term:

    Get people posting PPPRs:
    People are not engaging in PPPR because nobody does. Most people I know have not made a conscious choice not to post PPPRs – they just don’t consider the possibility. That would change if a significant number of reviews started to appear. Reviews by “leading scientists” would be particularly valuable, as they would attract much attention. So when assembling a group of like-minded researchers wanting to promote PPPR, maybe the first step should be a commitment to provide reviews on published work on bioRxiv, PubMed Commons and elsewhere on a regular basis. Since the PPPR space is currently basically empty, any initiative that starts to fill that space could also set the tone for what such reviews should look like.
    If people see reviews appearing in their field, I am sure they would start to consider offering their own view. I also don’t think that at present there are absolutely no incentives to engage in PPPR. In particular, for early career scientists, writing thoughtful reviews could be a great way to achieve more visibility. So I think the most urgent need is to bring people to post reviews, and to do that in a way that is visible to a large audience in order to gain some momentum.

    Create incentives for PPPR:
    You write about the need for the funders of science to commit to PPPR. Any commitment should be followed by concrete steps to create incentives for people to provide their view on published articles. For example, funding agencies could start requesting two publication lists in grant and fellowship applications. The primary publication list would summarise the papers of the applicant and the secondary list would outline her contributions to PPPR. Evaluation would be based on the primary publication list, but the secondary list could be used as an additional indicator in cases that are difficult to separate based on primary research contributions alone. Needless to say, nobody would want to submit an empty secondary publication list, or one that lists reviews of low quality.

  16. binay panda
    Posted January 26, 2016 at 6:52 pm | Permalink

    michael and leslie, thank you for taking the lead. i endorse it wholeheartedly, in both letter and spirit. in addition to what the draft suggests, and it does talk about the ‘set of useful metrics’ for reviewers, i would like to spell those out clearly. for example, each group/individual reviewer (in both track 1 and track 2) would post their evaluation on a scale of 1-10 (10 being the best). i propose evaluation based on the following points (this can be expanded based on others’ input).

    1. detailed methodology required for reproduction
    2. clarity of presentation
    3. findings that contribute substantially to the field
    4. new technique/tool/algorithm development
    5. availability of underlying data (including raw data), scripts and codes
    6. conclusions derived based on results presented

    based on the above metrics from the reviewers, the preprint server should provide a percentile score for each manuscript submitted (rank the preprint against other preprints submitted earlier; ideally this would be done against all published papers, but that is hard as the current system of peer review does not provide independent scores for these metrics). the underlying software in the preprint server will calculate the scores (on individual metrics and overall) dynamically and update them as additional reviews get added.
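
    as a rough sketch (the metric names and the data layout here are illustrative assumptions only), the calculation could look like this:

    ```python
    # each review rates a preprint 1-10 on the six metrics above; the server
    # averages them and reports where the preprint falls relative to the
    # scores of previously reviewed preprints.
    from bisect import bisect_left
    from statistics import mean

    METRICS = ["methodology", "clarity", "contribution", "novelty",
               "data_availability", "conclusions"]

    def overall_score(reviews):
        """reviews: list of dicts mapping each metric to a 1-10 rating."""
        return mean(mean(r[m] for m in METRICS) for r in reviews)

    def percentile(score, earlier_scores):
        """rank a preprint's overall score against earlier preprints."""
        if not earlier_scores:
            return None
        ranked = sorted(earlier_scores)
        return 100.0 * bisect_left(ranked, score) / len(ranked)
    ```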

    it would also be good to suggest sub-sections (i understand that this might be a bit early for this) for such a process. there will be overlap among sub-specialties, but the author(s) will have a choice to assign more than one sub-specialty to the preprint.

    finally, unless i missed it, i am not clear whether you guys are proposing a separate pre-print server or talking to cshl press to incorporate the proposed system into biorxiv?

  17. Sean Patrick Santos
    Posted February 2, 2016 at 9:50 pm | Permalink

    I’m not sure that this is a clear enough thought to be actionable for this paper, but one point that I think often doesn’t get mentioned is the impact on “interdisciplinary” research. (I use scare quotes because discipline boundaries are fuzzy anyway; I’m a student in applied mathematics and former scientific software engineer, and in neither situation has there been a rigid boundary between my activities and physical scientists’.)

    There’s a clear boost to interdisciplinary work from open access (most obviously, I don’t have to worry about my institution specializing in X and having journal subscriptions for X, but not having good subscriptions for Y). More germane to this article, if I’m looking for information in a discipline that I don’t specialize in, journal titles mean essentially nothing to me. Of course, in that sort of situation, it’s always best to have a collaborator in that field that can help you navigate. However, it’s not always possible (or desirable) to have someone hold your hand every time you want to search for how people from another field solve some shared problem.

    If reviews were organized by self-selected groups of researchers, it would be simpler to survey certain types of literature across fields; e.g. one could more easily discover papers in the category “practical solutions to equations of type X”, without caring whether the solutions were done in the context of mathematics, ocean dynamics, ecology, material science, neurology, or whatever. Of course, there are groups and mailing lists that do this sort of thing already (“overlay journal” sorts of effort), but they are hampered by the fact that journals are the primary form of categorization, and these reading groups have to remix sets of papers after the fact.

    It is of note that most journal review is of limited value from this perspective. Neither editors nor reviewers necessarily know that someone in the building next door, on the same university campus, solved an almost identical problem a decade ago. Nor do they know whether or not someone next door would kill for even incremental progress on some similar problem. Nor do they know if the proposal under consideration has already been found to be a dead end in another field.

    To provide a point/counterpoint relating to earlier comments, it’s also often the case that one wants to cite an inspiring work of someone else… except that that work is also in review or in press (or not even submitted). The quicker that a paper becomes available, and the more opportunity there is to tweak it after that point, the easier it is to cite others’ work correctly.

    Lastly, I want to mention that while many people are disturbed by the idea of a “popularity contest” regarding papers (which is a mischaracterization, I think), there are cases where a discipline can be so sharply divided that reviewers who are particularly interested in a subject are more likely to be biased than people who are qualified to judge a paper but do not have a strong opinion on the specific topic at hand. Insofar as current peer review differs from the proposed system, it’s not certain that the current way of doing things is more objective. It comes down to which is worse: an editor’s tendency to pick reviewers who have spoken most about a topic (which often means those who have picked a side in any given debate), or self-selection by those who have a strong opinion.

    While I am generally in favor, my main concerns with this proposal would be:

    (a) We need some incentive for reviewers to actually… review. Particularly to review papers by new or unpopular researchers. Being asked by an editor must not be a *very* good incentive, because a lot of scientists take a very long time to respond to those requests. But posting something online without assigned reviewers could be even worse.

    Reviewers need an incentive not just to review papers, but to review papers that may be very, very bad in some cases. A brilliant review of a decent paper, even if that review is ultimately negative, is likely to get attention. But reviewing a bad paper, and saying that it is bad, is not going to get a lot of attention. This is similar to the problem of publishing negative results.

    (b) Language. Many people have trouble communicating well in scientific papers, either because they don’t speak the lingua franca of their discipline (probably English nowadays), or because they struggle with communicating in print in general. I think we would all agree that it’s best when scientists are good communicators, but I would not defend shutting capable people out of a discipline because of their native language, or difficulties with technical writing. If a journal is not assisting with this kind of editing, then we need to make it clear that a researcher needs to have assistance from their home institution with this sort of thing.

    We also need to be honest about the fact that this cost is more of a burden to less-wealthy institutions. I believe that the majority of low-income institutions would be better off if they had to employ copyeditors(/translators) rather than paying for journal subscriptions… but this is a tradeoff, not an unambiguous slam dunk for one side, and even assuming that we will transition to an open economy of science, this is a very real cost in the transition period.

  18. Sean Patrick Santos
    Posted February 2, 2016 at 11:09 pm | Permalink

    A few thoughts more relevant to the later comments:

    Michael Hendricks – “I would believe far more in trainees as gatekeepers than I do in professional editors, particularly among the hopelessly tangled pedigrees of elite institutions, labs, and journals.”

    I find this seductive. I would say that really dedicated “trainees” (which I assume means the whole pre-tenure class) are actually the best at identifying widely applicable and well-written papers. My impression is that all papers that are loved by trainees are highly valuable. They may not be novel research, though; trainees may like some papers that really should be textbook sections. Also, some poorly-written papers may still be highly valuable in the long run, despite not appealing to early-career types. So I’m not going to give the young’uns a pass there.

    However, it’s easy to argue that “trainee-approved” papers are almost all valuable, at least as introductory papers, and possibly more. The “gatekeeper-approved” papers are probably more likely to have fluff that’s less innovative and less well-written, but got lucky in review, and in that sense, “gatekeepers” are actually worse at, well, gatekeeping.

    Evan Heller – “I would say 2/3 of reviewers really care about the work, invest a significant amount of time reading and thinking about it, write many pages of feedback for you to consider, and, if you ran into them at a conference or the like, would probably openly tell you that they were your reviewer!”

    Jaded as I am about peer review, I agree. Reviewers have often made great suggestions, in some cases even suggesting experiments or changes to data analysis that we could do on the spot to tighten up our results. (In some cases, those changes *ahem* overlapped with suggestions that I had for my coauthors, who luckily were always accommodating in such cases anyway.) I agree that we want to be clear that reviewers are often very conscientious (depending on field and luck). However, the “1/3” of cases (or however much it is) where a reviewer does very poorly can really weigh on a paper, not only because of the possibility that it will be rejected, but also because of the delay caused by interacting with a reticent reviewer.

    Evan Heller again – “If it’s OK to use personal reputation as a surrogate, or in an online system, some metric of interest, why is it not OK to use a journal’s reputation for publishing exciting things?”

    I would argue that this is confusing the idealistic question of the optimal outcome with the pragmatic question of the outcome that we can achieve. Personal reputation is *mostly* related to individual choice regarding whose work someone wants to look at, which is a personal thoughts-and-dreams question, so we can’t affect that directly. However, journal reputation relates to how scientists interact with an editorial body that they (ostensibly) control by direct interaction, and so we can control that.

    I would say that personal reputation bias does not excuse journal reputation bias, from a cultural/policy perspective. Just because we can’t fix one error doesn’t mean that we’re excused from addressing a similar error. (You could also argue that journal reputation is “worse” in the sense that it is a worse filter than researchers’ personal views on their field, but that’s a separate argument.)

    Fillip Port – “Leading scientists are what they are because they have benefited from the current system and would potentially lose influence if things were done differently. As a result the appetite for change among this group is small.”

    I like other parts of Filip’s comment, but I find this to be dubious. My experience is that many of the people who would be most receptive to change are the up-and-coming types (who by definition are expected to be tomorrow’s gatekeepers) and some older scientists who simply don’t think that they have much say in these matters. However, that experience *is* in atmospheric science, which has become one of the more idealistic disciplines of physics (because why else would you put up with the whole climate change debacle?).

    Nonetheless, I have yet to encounter a scientist who thinks as Fillip suggests (without hiding it well). I would rather endorse this quote:

    “Therefore, any initiative should be aiming to engage all scientists, whether young or old, famous or not.”

    A lot of the younger scientists know what’s up, and the older and mid-career scientists have room to stand up for the principles that they’ve had all along. *If* you can convince them that this change is for the better, which I assume is the game plan, I think that it will go quite well. It is not obvious that the “talk-to-your-advisor” effect will be small.

    Fillip Port again – “Reviews by “leading scientists” would be particularly valuable, as they would attract much attention.”

    I agree, though this drifts from what can be dealt with in this white paper.
