The Mission Bay Manifesto on Science Publishing

Earlier this week I gave a seminar at UCSF. In addition to my usual scientific spiel, I decided to end my talk with a proposal to UCSF faculty for action that could make scholarly communication better. This is something I used to do a lot, but have mostly stopped doing since my entreaties rarely produce tangible actions. But I thought this time might be different. I was optimistic that the recent attention given by prominent UCSF professor Ron Vale to the pervasive negative effects of our current publishing system might have made my UCSF faculty colleagues open to actually doing something to fix these problems.

So I decided to issue a kind of challenge to them not just to take steps on their own, but to agree collectively to take them together. My motivation for this particular tactic is that when I ask individual scientists to do things differently, they almost always respond that they would love to, but can’t, because the current system requires that {they | their trainees | their collaborators} publish in {insert high profile journal here} in order to get {jobs | grants | tenure}. However, in theory at least, this reluctance to “unilaterally disarm” would go away if a large number of faculty, especially at a high-profile place like UCSF, agreed to take a series of steps together. I focused on faculty – tenured faculty in particular – because I agree that all too often publishing reform efforts focus on young scientists, who, while they tend to be more open to new things, are also in the riskiest positions with respect to jobs and careers.

My goal was to address in one fell swoop three different, but related issues:

  1. Access. Too many people who need or want access to the scientific and medical literature don’t have it, and this is ridiculous. Scientists have the power to change this immediately by posting everything they write online for free, and by working to ensure that nothing they produce ever ends up behind paywalls.
  2. Impact Factors. The use of journal titles and impact factors as surrogates for the quality of science and scientists. Virtually everyone admits that journal title is a poor indicator of scientific rigor, quality or importance, yet it is widely used to judge people in science.
  3. Peer-review. Our system of pre-publication peer-review is slow, intrusive, ineffective and extremely expensive.

And here is what I proposed (it’s named after the Mission Bay campus where I gave my talk):

The Mission Bay Manifesto

As scientists privileged to work at UCSF, we solemnly pledge to fix for future generations the current system of science communication and assessment, which does not serve the interests of science or the public, by committing to the following actions:

(1) We will make everything we write immediately freely available as soon as it is finished using “preprint” servers like arxiv.org, bioRxiv.org, or the equivalent.

(2) No paper we write, or data or tools we produce, will ever, for even one second, be placed behind a paywall where they are inaccessible to even one scientist, teacher, student, health care provider, patient or interested member of the public.

(3) We will never refer to journal titles when discussing our work in talks, on our CVs, in job or grant applications, or in any other context. We will provide only a title, a list of authors and a publicly available link for all of our papers on CVs, job applications and grant applications.

(4) We will evaluate the work of other scientists based exclusively on the quality of their work, not on where they have published it. We will never refer to journal titles or use journal titles as a proxy for quality when evaluating the work of other scientists in any context.

(5) We will abandon the slow, cumbersome and distorting practice of pre-publication peer review and exclusively engage in open post-publication peer review as authors and reviewers (e.g. as practiced by journals like F1000 Research, The Winnower and others, or review sites like PubPeer).

(6) We will join with our colleagues and collectively make our stance on these issues public, and will follow this pledge without fail so that our students, postdocs and other trainees who are still building their careers do not suffer while we work to fix a broken system we have created and allowed to fester.

I am positive that IF the faculty at UCSF agreed to all these steps, science publishing would change overnight – for the better. But, alas, while I’d love to say the response was enthusiastic, it was anything but. There was some polite nodding, but more the kind you give to a crazy person talking to you on the bus than the kind that signals genuine agreement. People raised specific objections (#5 was the one they were least in favor of), but nobody seemed willing to take even a marginal risk, or to inconvenience themselves, to fix the system. And if we can’t get leadership from tenured faculty at UCSF, is it any wonder that other people in less secure positions are unwilling to do anything? I went back to Berkeley disappointed and disheartened. And then yesterday I heard a great seminar from a scientist at a major East Coast university whose work I really love, who talked over and over about Nature papers.

But my malaise was short lived. Maybe I’m crazy, but, even if we haven’t figured it out yet, I know there’s a way to break through the apathy. So, I’ll do the only thing I can do – commit myself to following my own manifesto. And I ask as many of you as can see your way to joining me to do so publicly. If UCSF faculty don’t want to lead, we can instead.


Posted in EisenLab, open access | Comments closed

Wikipeevedia

A couple of weeks ago I unintentionally set off a bit of a firestorm regarding Wikipedia, Elsevier and open access. I was scanning my Twitter feed, as one does, and came upon a link to an Elsevier press release:

Elsevier access donations help Wikipedia editors improve science articles: With free access to ScienceDirect, top editors can ensure that science read by the public is accurate

I read the rest of it, and found that Elsevier and Wikipedia (through the Wikipedia Library Access Program) had struck a deal whereby 45 top (i.e. highly active) Wikipedia editors would get free access to Elsevier’s database of science papers – ScienceDirect – for a year, thereby “improving the encyclopedia and bringing the best quality information to the public.”

I have some substantive issues with this arrangement, as I will detail below. But what really stuck in my craw was the way that several members of the Wikipedia Library were used not just to highlight the benefits of the deal to Wikipedia and its users, but to serve as mouthpieces for misleading Elsevier PR, such as this:

Elsevier publishes some of the best science scholarship in the world, and our globally located volunteers often seek out that access but don’t have access to research libraries. Elsevier is helping us bridge that gap!

It was painful to hear people from Wikipedia suggesting that Elsevier is coming to the rescue of people who don’t have access to the scientific literature! In reality, Elsevier is one of the primary reasons they don’t have access, having fought open access tooth and nail for two decades and spent millions of dollars to lobby against almost any act anywhere that would improve public access to science. And yet here was Wikipedia – a group that IS one of the great heroes of the access revolution – publicly praising Elsevier for providing access to 0.0000006% of the world’s population.
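For scale, a quick back-of-the-envelope check (assuming a 2015 world population of roughly 7.3 billion): 45 / 7,300,000,000 ≈ 6 × 10^-9 of the world’s population, which is indeed about 0.0000006%.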

Furthermore, I found the whole idea that this is a “donation” ridiculous. Elsevier is giving away something that costs them nothing to provide – they just have to create 45 accounts. It’s extremely unlikely that the Wikipedia editors in question were potential subscribers to Elsevier journals or that they would pay to access individual articles. So no revenue was lost. And in exchange for giving away nothing, Elsevier almost certainly increases the number of links from Wikipedia to their papers – something of significant value to them.

I was fairly astonished to see this, and, being somewhat short-tempered, I fired off a series of tweets.

These tweets struck a bit of a nerve, and the reaction, at least temporarily, seemed to pit #openaccess advocates against Wikipedians – as highlighted in a story by Glyn Moody. I in no way meant to do this. It would be hard to find two groups whose goals are more aligned.

So I want to reiterate something I said over and over as these tweets turned into a kind of mini-controversy. In saying I thought that making this deal with Elsevier was a bad idea, I was not in any way trying to criticize Wikipedia or the people who make it work. I love Wikipedia. As a kid who spent hours and hours reading an old encyclopedia my grandparents gave me, I think that Wikipedia is one of the greatest creations of the Internet Age. Its editors and contributors, as well as Jimmy Wales and the many others who made it a reality, are absolute, unvarnished heroes.

In no way do I question the commitment of Wikipedia to open access. I just think they made a mistake here, and I worry a bit about the impact this kind of deal will have on Wikipedia. But it is a concern born of true love for the institution.

So with that in mind, let me delve into this a bit more deeply.

First of all, I understand completely why Wikipedia makes this kind of deal. The mission of Wikimedia is to “empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally” [1]. But there is a major challenge to building an accurate and fully-referenced open encyclopedia: much of the source material needed to do this is either not online or is behind paywalls. It’s clear that Wikipedia sees opening up source material as the long-term solution to this problem. But in the meantime they feel compelled to ensure that the people who build Wikipedia have a way around paywalls while they are doing so. It’s not all that conceptually different from a university library that works to provide its scholars with access to paywalled sources.

So the question to me isn’t whether Wikipedia should make any deals with publishers. The question is whether they should have made this deal with this publisher. And just as I have strongly disagreed with the deals universities (including my own) routinely make to provide campus access to Elsevier journals, I do not think this deal is good for Wikipedia or the public.

Here are my concerns:

This deal will prolong the life of the paywalled business model

If the only effect of this deal was to provide editors with access, I would hold my nose and support Wikipedia’s efforts to work around the current insane scholarly publishing system. But I don’t think this is the only effect of the deal. In several ways this deal strengthens Elsevier’s subscription publishing business, and strengthening this business is clearly bad for Wikipedia and its mission.

How does it strengthen Elsevier’s business? First, it provides them with good PR – allowing them to pretend that they support openness, something that serves to at least partially blunt the increasingly bad PR their subscription journal publishing business has incurred in recent years. Second, it provides them with revenue. This deal will increase the number of links in Wikipedia to Elsevier papers, and links on Wikipedia are clearly of great value to Elsevier – they can monetize them in multiple ways: a) by advertising on the landing pages, b) by collecting one-time fees from people without accounts who want to view an article, and, most significantly, c) by increasing traffic to their journals from users with access, which they cite to justify increased payments from universities and other institutions.

Finally, and most significantly, the deal mitigates some of the direct negative consequences of publishing paywalled journals and publishing in paywalled journals. One of the consequences of papers appearing in paywalled journals is that they are less likely to be cited and otherwise used on the Internet and beyond. And, as open resources like Wikipedia grow and grow in importance, this will become more true. This is a potentially powerful force for driving people to publish in a more open way, and, if anything, supporters of openness should be working to amplify this effect. But this deal does the opposite – it significantly dilutes the negative impacts of publishing in Elsevier’s paywalled journals, and thereby almost certainly will help prolong the life of the paywalled journal business model.

I realize that not making this deal would weaken Wikipedia in the short-run. But I am certain it would strengthen it in the long-run by quickening the arrival of a truly open scientific literature, and I think we are all in this for the long-run.

Wikipedia got too little from Elsevier

Even if you accept that this kind of deal has to be made, I think it’s a bad deal. Elsevier got great PR, significant tangible financial benefits, and several clear intangible benefits. In exchange, they’ve given away almost nothing. To me this was a missed opportunity related to the framing of the deal as a “donation”. If you’re asking for a donation, you don’t make demands. But it seems like Wikipedia was in a good position to ask for something that would benefit its readers in a much bigger way, such as Elsevier letting everyone through their paywall when following links from Wikipedia.

I obviously can’t guarantee Elsevier would have agreed to this, and maybe Wikipedia tried to negotiate for more, but it does strike me that Wikipedia undervalued itself with this arrangement.

Will this affect how articles are linked from Wikipedia?

One of the many things I love about Wikipedia is that there is a clear bias in favor of sources that are available for free online to everyone. This is obviously partly philosophical – the people who put the most time into building Wikipedia are true believers in openness and almost certainly are biased in favor of providing open sources whenever possible. But some of it is also practical. Almost by definition, if you can not access a source, you are unlikely to (and should not) cite it. You can see this effect clearly in academic scientists, who have only a weak bias towards citing open sources because they have access to most papers and don’t think about access when choosing what to cite. I don’t question the commitment of Wikipedians to openness. There are plenty of cases where people cite freely available versions of papers (e.g. preprints) instead of official paywalled versions. I just worry that easy access to paywalled papers will increase the number of times the paywalled version is cited in lieu of others (like free copies in PubMed Central). Obviously, there are ways to mitigate this – bots that check citations and add open ones (see the sketch below). But it warrants watching.
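To make the bot idea concrete, here is a minimal sketch of the core check such a bot might perform – purely illustrative, assuming the Europe PMC REST search API; the placeholder DOI and the surrounding bot logic (extracting citations from articles, editing pages) are hypothetical:

    import requests

    # Europe PMC search endpoint (returns JSON records that include a PMC ID
    # when a free full-text copy of a paper has been deposited).
    EUROPE_PMC = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

    def find_open_version(doi):
        """Return a PubMed Central URL for this DOI, or None if no free copy is found."""
        params = {"query": 'DOI:"%s"' % doi, "format": "json"}
        hits = requests.get(EUROPE_PMC, params=params, timeout=10).json()
        for record in hits.get("resultList", {}).get("result", []):
            if record.get("pmcid"):  # a PMC ID means a free full-text copy exists
                return "https://www.ncbi.nlm.nih.gov/pmc/articles/%s/" % record["pmcid"]
        return None

    # A real bot would walk an article's whole reference list; here we check
    # a single placeholder DOI and report the open link if one exists.
    open_url = find_open_version("10.1371/journal.pbio.0000000")  # placeholder DOI
    if open_url:
        print("Free full-text copy:", open_url)

A bot like this could add the PubMed Central link alongside (rather than in place of) the publisher’s link, preserving the citation while restoring the bias toward sources everyone can read.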

And I’m not in any way suggesting that people should systematically reject citing paywalled sources. Sometimes information is fungible – there are many sources that one could cite for a particular fact – but this is obviously not always the case. Clearly for Wikipedia to be successful in the current environment, it has to be based on, and cite, a lot of paywalled sources.

Science journal articles are not like books

Several people have made the comparison between citations of books and of journal articles. But there are crucial differences. First, there is a real, viable alternative to paywalled journals right now, and I would argue that it is in Wikipedia’s interest to support that alternative by not making things too easy for paywalled journals. Unfortunately, the same is not true for books, even academic ones. But even with the generally poor accessibility of books, I wonder if Wikipedians would support a deal with Amazon in which prolific editors got Kindles with free access to all Amazon e-books in exchange for providing links to Amazon when those books were cited (this was suggested by someone on Twitter, but I can’t find the link)? I doubt it, yet to me this is almost exactly analogous to the Elsevier deal. In any case, the main point is that the situation with books is really bad, but that isn’t a good reason not to make the situation for journal articles better.

Wikipedia rocks

All that said, I hope this issue is behind us. It was painful to see myself being portrayed as a critic of Wikipedia. I am not. I could not love Wikipedia more than I do. I use it every day. It is one of the best advertisements for openness out there, and I can even see an argument that says that if deals with the devil make Wikipedia better, then this benefits openness far more than it hurts it. So let’s just leave it at that. I’ve enjoyed all the conversation about this issue, and I look forward to doing anything I can to make Wikipedia better and better in the future.

Posted in open access, Wikipedia | Comments closed

Thoughts on Ron Vale’s ‘Accelerating Scientific Publication in Biology’

Ron Vale has posted a really interesting piece on bioRxiv arguing for changes in scientific publishing. The piece is part data analysis, examining differences in publishing in several journals and among UCSF graduate students from 1980 to today, and part perspective, calling for the adoption of a culture of “pre-prints” in biology and the expanded use of short-format research articles.

He starts with three observations:

  • Growth in the number of scientists has increased competition for spots in high-profile journals over time, and has led these journals to demand more and more “mature” stories from authors.
  • The increased importance of these journals in shaping careers leads authors to try to meet these demands.
  • The desire of authors to produce more mature stories has increased the time spent in graduate and postdoctoral training, and has diminished the efficacy of this training, while slowing the spread of new ideas and data.

He offers up some data to support these observations:

  • Biology papers published in Cell, Nature and JCB in 2014 had considerably more data (measured by counting the number of figure panels they have) than in 1984.
  • Over the same period, the average time to first publication for UCSF graduate students has increased from 4.7 years to 6.0 years, the number of first-author papers they have has decreased, and the total time they spend in graduate school has increased.

And he concludes by offering some solutions:

  • Encourage scientists to publish all of their papers in pre-print servers.
  • Create a “key findings” form of publication that would allow for the publication of single pieces of data.

Vale has put his finger on an important problem. The process of publication has far too great an influence on the way we do science, not just the way we communicate it. And it would be great if we all used preprint servers and strived to publish work faster and in a less mature form than we currently do. I am very, very supportive of Vale’s quest (indeed it has been mine for the past twenty years) – if it is successful, the benefits to science and society would be immense.

However, in the spirit of the free and open discussion of ideas that Vale hopes to rekindle, I should say that I didn’t completely buy the specific arguments and conclusions of this paper.

My first issue is that the essay misdiagnoses the problem. Yes, it is bad that we require too much data in papers, and that this slows down the communication of science and the progress of people’s careers. But this is a symptom of something more fundamental – the wildly disproportionate value we place on the title of the journal in which papers are published rather than on the quality of the data or its ultimate impact.

If you fixed this deeper problem by eliminating journals entirely and moving to a system of post-publication review, it would remove the perverse incentives that produce the effects Vale describes. However, Vale proposes a far more modest solution – the use of pre-print servers. The odd thing with this proposal, as Vale admits, is that pre-print servers don’t actually solve the problem of needing a lot of data to get something published. It would be great for all sorts of reasons if every paper were made freely available online as early as possible – and I strongly support the push for the use of pre-print servers. But Vale’s proposal seems to assume that the existing journal hierarchy would remain in place, and that most papers would ultimately be published in a journal. And this wouldn’t fundamentally alter the set of incentives for journals and authors that has led to the problems Vale writes about. To do that you have to strip journals of the power to judge who is doing well in science – not just have them render that decision after articles are posted to a pre-print server. Unless the rules of the game are changed, with hiring, funding and promotion committees looking at the quality of the work instead of where it was published, universal adoption of pre-print servers will be both harder to achieve and of limited effect on the culture of publishing.

Indeed, I would argue that we don’t need “pre-print” servers. What we need is to treat the act of posting your paper online in some kind of centralized server as the primary act of publication. Then it can be reviewed for technical merit, interest and importance starting at the moment it is “published” and continuing for as long as people find the paper worth reading.

Giving people credit for the impact their work has over the long-term would encourage them to publish important data quickly, and to fill in the story over time, rather than wait for a single “mature” paper. Similarly, rather than somewhat artificially creating a new type of paper to publish “key findings”, I think people will naturally write the kind of paper Vale wants if we change the incentives around publication by destroying the whole notion of “high-impact publications” and the toxic glamour culture that surrounds it.

Another concern I have about Vale’s essay is that he bases his argument for pre-print servers on a set of data analyses that, while interesting, I didn’t find compelling. I think I get what Vale is doing. He wants to promote the use of pre-print servers, and realizes that there is a lot of resistance. So he is trying to provide data that will convince people that there are real problems in science publishing so that they will endorse his proposals. But by basing calls for change on data, there is a real risk that other people will also find the data less than compelling and will dismiss Vale’s proposed solutions as unnecessary as a result, when in fact the things Vale proposes would be just as valuable even if all the data trends he cites weren’t true.

So let’s delve into the data a bit. First, in an effort to test the widely held sentiment that the amount of data required for a paper has increased over time, he attempted to compare the amount of data contained in papers published in Cell, Nature and JCB during the first six months of 1984 and of 2014 (it’s not clear why he chose these three journals).

The first interesting observation is that the number of biology papers published in Nature has dropped slightly over thirty years, and the number of papers published in JCB has dropped by half (presumably as the result of increased competition from other journals). To quantify the amount of data a paper contained, Vale analyzed the figures in each of the papers. The total number of figures per paper was largely unchanged (a product, he argues, of journal policies), but the number of subpanels in each figure went up dramatically – two to four-fold.

I am inclined to agree with him, but it is worth noting that there are several alternative explanations for these observations.

As Vale acknowledges, practices in data presentation could have changed: things that used to be listed as “data not shown” may now be presented in figures. I would add that maybe the increase in figure complexity reflects the fact that it is far easier to make complex figures now than it was in 1984. For example, when I did my graduate work in the early 1990s it was very difficult to make figures showing aspects of protein structure. Now it is simple. Authors may simply be more inclined to make relatively minor points in a figure panel now because it’s easier.

A glance at any of these journals will also tell you that the complexity of figures varies a lot from field to field. Developmental biologists, for example, seem to love figures with ten or twenty subpanels. Maybe Cell, Nature and JCB are simply publishing more papers from fields where authors are inclined to use more complex figures.

Finally, the real issue Vale is addressing is not exactly the amount of data included in a paper, but rather the amount of data that had to be collected to get to the point of publishing a paper. It’s possible that authors don’t actually spend more time collecting data, but that they used to leave more data “in the drawer”.

The real point is that it’s really hard to answer the question of whether papers now contain more data than they used to. And it’s even harder to determine whether the amount of data required to get a paper published is more or less of an obstacle now than it was thirty years ago.

I understand why Vale did this analysis. His push to reform science publishing is based on a hypothesis – that the amount of data required to publish a paper has increased over time – and, as a good scientist, he didn’t want to leave this hypothesis untested. However, I would argue that differences between 1984 and today are irrelevant. Making it easier to publish work, and giving people incentives to publish their ideas and data earlier, is simply a good idea – and would be equally good even if papers published in 1984 required more data than they do today.

Vale goes on to speculate about why papers today require more data, and chalks it up primarily to the increased size of the biomedical research community, which has increased competition for coveted slots in high-ranking journals while also increasing the desire for such publications; this has allowed journals to be even more selective and to put more demands on authors. (It’s really quite interesting that the number of papers in Cell, Nature and (I assume) Science has not increased in 30 years even as the community has grown.)

This certainly seems plausible, but I wonder if it’s really true. I wonder if, instead, the increase in expectations of “mature” work has to do with the maturation of the fields in question. Nature has pretty broad coverage in biology (although its coverage is by no means uniform), but Cell and JCB both represent fields (molecular biology and cell biology) that were kind of in their infancies, or at least early adolescences, 30 years ago. And as fields mature, it seems quite natural for papers to include more data, and for journals to have higher expectations for what constitutes an important advance. You can see this happening over much shorter timeframes. Papers on the microbiome, for example, used to contain very little experimental data – often a few observations about the microbial diversity of some niche – but within just a few years, expectations for papers in the field have changed, with the papers getting far more data-dense. It would be interesting to repeat the kind of analysis Vale did, but to try and identify “new” fields (whatever that means), and see whether fields that were “new” in 2014 have papers of similar complexity to fields that were “new” in 1984.

The second bit of data Vale produced concerns the relationship between publications and the amount of time spent in graduate school. Using data from UCSF’s graduate program, he found that current graduate students “published fewer first/second author papers and published much less frequently in the three most prestigious journals.” The average time to a first-author paper for UCSF students in the 1980s was 4.7 years; now it is 6.0. And the number of students with Science, Nature or Cell papers has fallen by half.

Again, one could pick this analysis apart a bit. Even if you accept the bogus notion that SNC publications are some kind of measure of quality, there are more graduate students both in the US and elsewhere, but the number of slots in those journals has remained steady. Even if criteria for publication were unchanged over time, one would have expected the number of SNC papers for UCSF graduate students to go down simply because of increased competition. If SNC papers are what these students aspire to (which is probably, sadly, largely true), then it makes sense that they would spend more time trying to make better papers that will get into these journals. It’s not clear to me that this requires that papers have more data, but rather that they have better data. But either way, one could look at this and argue that the problem isn’t that we need new ways of publishing, but rather that we need to stop encouraging students to put their papers into SNC. I suspect that all of the trends Vale measures here would be reversed if UCSF faculty encouraged all of their graduate students to publish all of their papers in PLOS ONE.

One could also argue that the trends reflect not a shift in publishing, but rather a degradation in the way we train graduate students. In my experience most graduate student papers reflect data that was collected in the year preceding publication. Maybe UCSF faculty, distracted perhaps by grant writing, aren’t getting students to the point where they do the important, incisive experiments that lead to publication until their fifth year, instead of their fourth.

And again, while the time to first publication has increased dramatically in the last 30 years, it’s hard to point to 1984 as some kind of Golden Age. That typical students back then weren’t publishing at all until the end of their fifth year in graduate school is still bad.

So, in conclusion, I think there is a lot to like in this essay. Without explicitly making this point, the observations, data and discussion Vale presents make a compelling case that publishing is having a negative impact on the way we do science and the way we train the next generation. I have some issues with the way he has framed the argument and with the conservativeness of his solutions. But I think Vale has made an important contribution to the now decades-old fight to reform science publishing, and we would all be better off if we heeded his advice.


Posted in open access, publishing, science | Comments closed

Sympathy for the Devil?

My Facebook feed is awash with people standing up for Tim Hunt: “The witch hunt against Tim Hunt is unbearable and disgraceful”, “This is how stupidity turns into big damage. Bad bad bad”, “Regarding the Tim Hunt hysteria”, and so on. Each of these posts has prompted a debate between people who think a social media mob has unfairly brought a good man down, and people like me who think that the response has been both measured and appropriate.

I happened to meet Tim Hunt earlier this year at a meeting of young Indian investigators held in Kashmir. We were both invited as external “advisors” brought in to provide wisdom to scientists beginning their independent careers. While his “How to win a Nobel Prize” keynote had a bit more than the usual amount of narcissism, he was in every other way the warm, generous and affable person that his defenders of the last week have said he is. I will confess I kind of liked the guy.

But it is not my personal brush with Hunt that has had me thinking about this meeting the past few days. Rather it is a session towards the end of the meeting held to allow women to discuss the challenges they have faced building their scientific careers in India. During this session (in which I was seated next to Hunt) several brave young women stood up in front of a room of senior Indian and international scientists and recounted the specific ways in which their careers have been held back because of their gender.

The stories they told were horrible, and it was clear from the reaction of women in the room that these were not isolated incidents. If any of the scientists in positions of power in the room (including Hunt) were not already aware of the harassment many women in science face, and the myriad obstacles that can prevent them from achieving a high level of success, there is no way they could have emerged not understanding.

When I think about what happened here, I am not thinking about how Twitter hordes brought down a good man because he had a bad day. I am instead thinking about what it says to the women in that room in Kashmir that this leading man of science – whom it was clear everybody at the meeting revered – had listened to their stories and absorbed nothing. It is unconscionable that, barely a month after listening to a woman moved to tears as she recounted a sexual assault by a senior colleague and how hard it was for her to regain her career, Hunt would choose to mock women in science as teary love interests.

Hunt’s words, and even more so his response to being called out for them, suggest that he does not understand the damage his words caused. I will take him at his word that he did not mean to cause harm. But the fact that he did not realize that those words would cause harm is worse even than the words themselves. That a person as smart as Hunt could go his entire career without realizing that a Nobel Prizewinner deriding women – even in a joking way – is bad just serves to show how far we have to go.

So, you’ll have to forgive me for recoiling when people ask me to measure my words based on the effect they will have on Hunt. I understand all too well the effects that criticism can have on people. But silence also has its consequences. And we see around us the consequences of decades of silence and inaction on sexism in science. If the price of standing up to that history is that Tim Hunt has to weather a few bad weeks, well so be it.

Posted in science, women in science | Comments closed

Elsevier admits they’re a major obstacle for women scientists in the developing world

I just received the following announcement from Elsevier:

Nominations opened today for the Elsevier Foundation Awards for Early-Career Women Scientists in the Developing World, a high-profile honor for scientific and career achievements by women from developing countries in five regions: Latin America and the Caribbean; the Arab region; Sub-Saharan Africa; Central and South Asia; and East and South-East Asia and the Pacific. In 2016 the awards will be in the biological sciences, covering agriculture, biology, and medicine. Nominations will be accepted through September 1, 2015.

Sounds great. But listen to what they get.

The five winners will each receive a cash prize of US$5,000 and all-expenses paid attendance at the AAAS meeting. The winners will also receive one-year access to Elsevier’s ScienceDirect and Scopus.

Could there be a more obvious admission that Elsevier’s own policies – indeed their very existence – are a major obstacle to the progress of women scientists in the developing world? How can anyone write this and not have their head explode?

Posted in Uncategorized | Comments closed

Pachter’s P-value Prize’s Post-Publication Peer-review Paradigm

Several weeks ago my Berkeley colleague Lior Pachter posted a challenge on his blog offering a prize for computing a p-value for a claim made in a 2004 Nature paper. While cheeky in its formulation, the challenge had a serious point: Pachter believed that a claim from this paper was based on faulty reasoning, and the p-value prize was a way of highlighting its deficiencies.
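For readers who want a feel for what “computing a p-value” for such a claim involves, here is a toy version of the kind of tail-probability calculation at issue – the numbers below are invented for illustration and have nothing to do with the actual paper:

    from scipy.stats import binomtest

    # Toy example: is an observed count surprisingly large under a simple
    # null model? All numbers here are made up for illustration.
    observed = 60     # items showing the property of interest
    total = 200       # items examined
    null_rate = 0.25  # fraction expected by chance under the null

    result = binomtest(observed, total, null_rate, alternative="greater")
    print("p-value = %.3g" % result.pvalue)

Whether a test like this is even the right model for the claim in question is, of course, exactly the kind of thing the comment thread on Pachter’s post hashes out.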

Although you might not expect the statistics behind a largely-forgotten claim from an 11-year-old paper to attract significant attention, Pachter’s post has set off a remarkable discussion, with some 130 comments as of this writing, making it an incredibly interesting experiment in post-publication peer review. If you have time, you should read the post and the comments. They are many things, but above all they are educational – I learned more about how to analyze this kind of data, and about how people think about this kind of data, here than I have anywhere else.

And, as someone who believes that all peer review should be done post-publication, there’s also a lot we can learn from what’s happening on Pachter’s blog.

Pre vs Post Publication Peer Review

I would love to see the original reviews of this paper from Nature (maybe Manolis or Eric can post them), but it’s pretty clear that the 2 or 3 people who reviewed the paper either didn’t scrutinize the claim that is the subject of Pachter’s post, or they failed to recognize its flaws. In either case, the fact that such a claim got published in such a supposedly high-quality journal highlights one of the biggest lies in contemporary science: that pre-publication peer review serves to defend us from the publication of bad data, poor reasoning and incorrect statements.

After all, it’s not like this is an isolated example. One of the reasons that this post generated so much activity was that it touched a raw nerve among people in the evolutionary biology community who see this kind of thing – poor reasoning leading to exaggerated or incorrect claims – routinely in the scientific literature, including (or especially) in the journals that supposedly represent the best of the best in contemporary science (Science, for example, has had a string of high-profile papers in recent years that turned out to be completely bogus – cf. arsenic DNA).

When discussing these failures, it’s common to blame the reviewers and editors. But they are far less the fault of the people involved than they are an intrinsic problem with pre-publication review. Pre-publication review is carried out under severe time pressure by whomever the editors managed to get to agree to review the paper – and this is rarely the people who are most interested in the paper or the most qualified to review it. Furthermore, journals like Nature, while surely interested in the accuracy of the science they publish, also ask reviewers to assess its significance, something that at best distracts from assessing the rigor of a work, and often is in conflict with it. Most reviewers take their job very seriously, but it is simply impossible for 2 or 3 somewhat randomly chosen people who read a paper at a fixed point in time and think about it for a few hours to identify and correct all of its flaws.

However – and this is the crux of the matter for me – despite the fact that pre-publication peer review simply can not live up to the task it is assigned, we pretend that it does. We not only promulgate the lie to the press and public that “peer reviewed” means “accurate and reliable”, we act like it is true ourselves. Despite the fact that an important claim in this paper is – as the discussion on the blog has pointed out – clearly wrong, there is no effective way to make this known to readers of the paper, who are unlikely to stumble across Pachter’s blog while reading Nature (although I posted a link to the discussion on PubMed Commons, which people will see if they find the paper when searching in PubMed). Worse, even though the analyses presented on the blog call into question one of the headline claims that got the paper into Nature in the first place, the paper will remain a Nature paper forever – its significance on the authors’ CVs unaffected by this reanalysis.

Imagine if there had been a more robust system for, and tradition of, post-publication peer review at the time this paper was published. Many people (including one of my graduate students) saw the flaws in this analysis immediately, and sent comments to Nature – the only visible form of post-publication review at the time. But they weren’t published, and concerns about this analysis would not resurface for over a decade.

The comments on the blog are not trivial to digest. There are many threads, and the comments include some that are thorough and insightful alongside others that are jejune and puerile. But if you read even part of the thread you come away with a far deeper understanding of the paper – what it found and which aspects of it are right and wrong – than you get from the paper itself. THIS is what peer review should look like – people who have chosen to read a paper spending time not only to record their impressions once, but to discuss it with a collection of equally interested colleagues to try and arrive at a better understanding of the truth.

The system is far from perfect, but from now on, anytime I’m asked what I mean by post-publication peer review, I’ll point people to Lior’s blog.

One important question is why this doesn’t happen more often. A lot of people had clearly formed strong opinions about the Lander and Kellis paper long before Lior’s post went up. But they hadn’t shared them. Does someone have to write a pointed blog post every time they want a paper’s results to be reexamined by the community?

The problem is, obviously, that we simply don’t have a culture of doing this kind of thing. We all read papers all the time, but rarely share our thoughts with anyone outside of our immediate scientific world. Part of this is technological – there really isn’t a simple system, tied to the literature, on which we can all post comments on papers we have read with the hope that someone else will see them. PubMed Commons is trying to do this, but not everyone has access, and beyond that, the systems are just not that good yet. But this will change. The bigger challenge is getting people to use good technology for post-publication peer review once it exists.

Developing a culture of post-publication peer review

The biggest challenge is that this kind of reanalysis of published work just isn’t done – there simply is not a culture of post-publication peer review. We lack any incentives to push people to review papers when they read them and have opinions that they feel are worth sharing. Indeed, we have a variety of counterincentives. A lot of people ask me if Lior is nuts for criticizing other people’s work so publicly. To many scientists this “just isn’t done”. But the question we should be asking is not “Why does Lior do this?” but rather “Why don’t we all?”.

When we read a paper and recognize something bad or good about it, we should look at it as a duty to share it with our colleagues. This is what science is all about. Oddly, we feel responsible enough for the integrity of the scientific literature that we are willing to review papers that often do not interest us and which we would not have otherwise read, yet we don’t feel that way about the more important process of thinking about papers after they are published. Somehow we have to transfer this sense of responsibility from pre- to post- publication review.

An important aspect of this is credit. A good review is a creative intellectual work and should be treated as such. If people got some kind of credit for post-publication reviews, more people would be inclined to do them. There are lots of ideas out there for how to create currencies for comment, but I don’t really think this is something that can be easily engineered – it’s going to have to evolve organically as (I hope) more people engage in this kind of commentary. But it is worth noting that Lior has, arguably, achieved more notice for his blog, which is primarily a series of post-publication reviews, than he has for his science. Obviously this is not immediately convertible into classical academic credit, but establishing a widespread reputation for the specific kind of intellectualism manifested on his blog can not but help Lior’s academic standing. I hope that his blog inspires people to do the same.

Of course not everybody is a fan of Lior’s blog. Several people who I deeply respect have complained that his posts are too personal, and that they inspire a kind of mob mentality in comments in which the scientists whose work he writes about become targets. I don’t agree with the first concern, but do think there’s something to the second.

So long as we personalize our scientific achievements, attacks on them are going to feel personal. I know that every time I receive a negative review of a paper or grant, I feel like it is a personal attack. Of course I know that this generally isn’t true, and I subscribe to the belief that the greatest respect you can show another scientist is to tell them when you think they’ve made a mistake or done something stupid. But, nonetheless, negative feedback still feels personal. And it inspires in most of us an instinctive desire to defend our work – and therefore our selves – from these “attacks”. I think the reason people feel like Lior’s posts are attacks is that they put themselves into the shoes of the authors he is criticizing and feel attacked. But I think this is something we have to get over as scientists. If the critique is wrong, then by all means we should defend ourselves, but conversely we should be able to admit when we were wrong, to have a good discussion about what to do next, and to move on, all the wiser for it.

However, as much as I would like us all to be thick-skinned scholars able to take it and dish it out, the reality is that this is not the case. Even when the comments are civil, I can see how having a few dozen people shredding your work publicly could make even the most thick-skinned scientist feel like shit. And if the authors of the paper had not been famous, tenured scientists at MIT, the fear of negative ramifications from such a discussion could be terrifying. I don’t think this concern should lead to people feeling reluctant to jump into scientific discussions – even when they are critical of a particular work – but I do think we should exercise extreme care in how we say things. And rule #1 has to be to restrict comments to the science and not the authors. In this regard, I was probably one of the worst offenders in this case – jumping from a criticism of the analysis to a criticism of the authors’ response to the critique. I know them both personally and felt they would know my comments were in the spirit of advancing the conversation, but that’s not a good excuse. I will be very careful not to do that in the future under any circumstances.

Posted in Uncategorized | Comments closed

The inevitable failure of parasitic green open access

At the now famous 2001 meeting that led to the Budapest Open Access Initiative – the first time the many different groups pushing to make scholarly literature freely available assembled – a serious rift emerged that almost shattered the open access movement in its infancy.

On one side were people like me (representing the nascent Public Library of Science) and Jan Velterop (BioMed Central) advocating for “gold” open access, in which publishers are paid up-front to make articles freely available. On the other side was Stevan Harnad, a staunch advocate for “green” open access, in which authors publish their work in subscription journals, but make them freely available through institutional or field specific repositories.

On the surface of it, it’s not clear why these two paths to OA should be in opposition. Indeed, as a great believer in anything that would make works freely available, I had always liked the idea of authors who had published in subscription journals making their works available, in the process annoying subscription publishers (always a good thing) and hastening the demise of their outdated business model. I agreed with Stevan’s entreaty that creating a new business model was hard, but posting articles online was easy.

But at the Budapest meeting I learned several interesting things. First, Harnad and other supporters of green OA did not appear to view it as a disruptive force – rather they envisioned a kind of stable alliance between subscription publishers and institutional repositories whereby authors sent papers to whatever journal they wanted to and turned around and made them freely available. And second, big publishers like Elsevier were supportive of green OA.

At first this seemed inexplicable to me – why would publishers not only allow but encourage authors to post paywalled content in their institutional repositories? But it didn’t take long to see the logic. Subscription publishers correctly saw the push for better access to published papers as a challenge to their dominance of the industry, and sought ways to defuse this pressure. With few functioning institutional repositories in existence, and only a small handful of authors interested in posting to them, green OA was not any kind of threat. But it seemed equally clear that, should green OA ever actually become a threat to subscription publishers, their support would be sure to evaporate.

Unfortunately, Harnad didn’t see it this way. He felt that publishers like Elsevier were “on the side of the angels”, and he reserved his criticism for PLOS and BMC as purveyors of “fool’s gold” who were delaying open access by seeking to build a new business model and get authors to change their publishing practices instead of encouraging them to take the easy path of publishing wherever they want and making works freely available in institutional repositories.

At several points the discussions got very testy, but we managed to make a kind of peace, agreeing to advocate and pursue both paths. PLOS, BMC and now many others have created successful businesses based on APCs that are growing and making an increasing fraction of the newly published literature immediately freely available. Meanwhile, the green OA path has thrived as well, with policies from governments and universities across the world focusing on making works published in subscription journals freely available.

But the fundamental logical flaw with green OA never went away. It should always have been clear that the second Elsevier saw green OA as an actual threat, they would no longer side with the angels. And that day has come.

With little fanfare, Elsevier recently updated their green OA policies. Where they once encouraged authors to make their works immediately freely available in institutional repositories, they now require an embargo before these works are made available in an institutional repository.

This should surprise nobody. It’s a testament to Stevan and everyone else who have made institutional repositories a growing source of open access articles. But given their success, it would be completely irrational of Elsevier to continue allowing their works to appear in these IRs at the time of publication. With ever-growing threats to library budgets, it was only a matter of time before universities used the availability of Elsevier works in IRs as a reason to cut subscriptions, or at least to negotiate better deals for access. And that is something Elsevier could not allow.

Of course this just proves that, despite pretending for a decade that they supported the rights of authors to share their works, Elsevier never actually meant it. There is simply no way to run a subscription publishing business where everything you publish is freely available.

I hope IRs will continue to grow and thrive. Stevan and other green OA advocates have always been right that the fastest – and in many ways best – way for authors to provide open access is simply to put their papers online. But we can no longer pretend that such a model can coexist with subscription publishing. The only long-term way to support green OA and institutional repositories is not to benignly parasitize subscription journals – it is to kill them.

Posted in open access | Comments closed

Ending gender-based harassment in peer review

A few days ago Fiona Ingleby, a postdoctoral fellow at the University of Sussex (she’s an evolutionary biologist who works on sex-specific behavior and other phenotypes in Drosophila) sent out a series of Tweets reporting on a horrifically sexist review she had received after submitting a paper to PLOS ONE. 

There is so much horrible and wrong here, it’s hard to know where to begin. It is completely reprehensible that anyone would think this, let alone write it; that someone would think it was OK to submit a formal review of a paper that said “get a male co-author”; that they would chastise someone for supposed biases without seeing their own glaring ones; that the editor was asleep on the job and didn’t look at the review before sending it out or, worse, read the review and thought it wasn’t problematic; that the editor was willing to reject a paper based on an obviously biased review; that the editor didn’t realize that one of their most important roles is to make sure that reviews like this never get sent out or factored into publishing decisions; that PLOS not only allowed this to happen but didn’t respond to the authors’ complaint until they took to Twitter several weeks later.

(Let me just disclose for anyone reading this who doesn’t know – I am a founder of PLOS and am on its Board of Directors.  I’ll probably get chastised for commenting publicly on this, but I think it’s important to not just subject PLOS to the same scrutiny and criticism I would bring to the table if it were some other publisher, but to hold PLOS to an even higher standard. This should not have happened, and PLOS needs to not only learn from this, but fix things so that it never happens again. Also, I should add that I have no inside information about this case – I know nothing about it except what has been written about publicly.)

I wish I could say that this review was shocking. But sadly it’s not. As anyone who is paying even the slightest bit of attention should know, science has a serious sexism problem. These kinds of attitudes remain commonplace, and impact women at all stages of their careers in myriad ways. And so it defies credulity to think this is an isolated incident in publishing – if one review like this got through, one has to assume many more like it have been and will be written (indeed they have been). So we not only have to respond to this event; we have to do whatever it takes to stop it from ever happening again.

Furthermore, I’ve seen all manner of profanity applied to the review and reviewer – all deserved – for their awful sexist attitudes and acts. But it’s critical that we not dismiss this as just an asshole being an asshole. This happened in a professional setting and clearly targeted the gender of the authors in a way that was not only inappropriate, but which would have had a negative effect on their careers by denying them publication and appropriate credit for their work. So let’s call this what it is – an unambiguous case of harassment.

So what do we do about this? Obviously gender-based harassment happens all over the place. But this particular case happened in the context of science publishing, and PLOS in particular, and I am writing this to ask for help in thinking about what PLOS should do to prevent this from happening (and just to be clear – I don’t run PLOS – but I will do everything I can to make sure all good ideas get implemented).

How do we respond to this reviewer and any future reviewer who engages in harassment in their review?

Once the case became public, PLOS quickly removed the reviewer from its reviewer database, and presumably they will never be asked to review for PLOS again. (I’m still not 100% sure exactly what this means – it seems like we need to do more than remove them from the database – they need to be blacklisted in some manner so that they are never asked to review for PLOS again).

This is obviously a necessary response. But it is also insufficient. First of all, it’s pretty light punishment – it’s not like people are clamoring to review for PLOS (or any other publisher for that matter). But more importantly, PLOS is but one of many publishers, and accounts for only a few percent of all published papers. This reviewer is still in a position to review for the thousands of other publishers on the planet, so not very much has been accomplished with this action. One can hope the reviewer has learned something from the public discussion of their review, but we certainly can not count on that. So something else needs to be done.

Which brings us to a sticky issue. To do anything more than PLOS has already done would require revealing the reviewer’s identity, either publicly or at least to the publishers of other journals for which they are likely to review – and reviewers agree to review with the clear expectation that their identity will be kept secret unless they choose to reveal it. While publishers clearly have a duty to protect the anonymity of their reviewers, they also have a responsibility to protect people from harassment. And in this case the two are in conflict. My first instinct is to say, “You do something like this, you lose the right to hide behind the veil of anonymity”, but it’s not as clearcut as I’d like it to be.

It’s no secret to people who read this blog that I have long been against anonymous peer review. But I do recognize that it has a real value, especially to people who are at vulnerable stages in their careers and would not feel comfortable giving their honest opinions if they had to attach their identity to it. In the long run I think we can change the culture of science so they wouldn’t feel that way, but that’s a separate issue. The fact is that right now reviewer anonymity is the norm, and I think it would make a lot of people nervous if publishers granted themselves the right to reveal reviewer identities.

But surely publishers would reveal reviewer identities in some situations – say if a reviewer physically threatened an author or engaged in some other frankly illegal activity in their review. So clearly anonymity is not inviolable, and the question is whether sexist and harassing reviews rise to the level where the publisher’s interest in protecting others from abuse trumps its interest in preserving reviewer anonymity. I think they do, and furthermore I feel it’s a cop-out on the part of publishers to hide behind reviewer anonymity here. Engaging in harassing behavior in peer review should void your guarantee of anonymity, full stop.

Obviously, one superficial way to resolve this conflict is to intercept all harassing reviews and make sure they are never seen by the authors – a sort of “no harm, no foul” response. But while this protects the authors from the proximal harms of a biased and sexist review, it doesn’t deal with the harasser. The journal’s responsibility to prevent others from being harassed shouldn’t change just because the harasser’s behavior was caught early.

There are serious challenges in implementing something like this – for example, who would make the decision that something is harassment? – but I am confident we can figure them out. One thing that all publishers can do is to spell out very clearly the kinds of behavior that are unacceptable and what the consequences are for engaging in them. It seems like you shouldn’t have to say “don’t harass people”, but clearly you do. And having very clear policies would likely both help prevent harassment and make it easier to deal with harassers. When this case first came to my attention, I looked around to see if PLOS has some kind of “code of conduct” policy for reviewers, but I couldn’t find one. Maybe I missed it, but if so, then it’s likely not being seen by reviewers. I thought I might find one at the Committee on Publication Ethics, but their code of conduct policy doesn’t seem to deal with this either. Does anyone know of such a policy? I was at a meeting last month sponsored by India Bioscience – the program guide has a great “Code of Conduct” for meeting attendees – this would be a good place to start. [UPDATE: A comment from Irene Hames pointed me to this “Ethical Guidelines for Peer Reviewers” from COPE].

I’m very curious what other people think about this, especially because I’m a bit concerned that my overall feeling that anonymous peer review is bad is coloring my judgment here. But seriously, what could be more important for a publisher to do than protect their authors from harassment? If they’re not willing to do whatever that takes, they should just close up shop.

The role of editors in preventing harassment

It’s hard to fathom how a review as blatantly sexist and harassing as this one was not only sent back to the authors, but used as the sole basis for a negative publication decision on the submission. There are really only two possibilities – neither of them good: the academic editor handling the manuscript failed to fully read the review, or they read it and didn’t find its contents objectionable. So either the editor doesn’t take their job seriously or they are complicit in harassment. Whatever the answer, they shouldn’t be handling manuscripts, and PLOS has asked them to resign their position (and, presumably, will not send them any more manuscripts even if they don’t formally resign).

This editor (again, I don’t know their identity, or anything about their past performance for PLOS) was one of approximately 7,000 academic editors who handle manuscripts for PLOS ONE. The vast majority of the people who edit and review for PLOS take their work seriously and are constructive in their reviews. However, with that many editors it’s inevitable that some are going to do their job poorly. But we can’t just write this off as a bad editor. PLOS has intentionally (and for good reasons) devolved a lot of autonomy to its editors. But in doing so it has magnified the effect that a bad or negligent editor can have, and this increases the need for PLOS to train its editors well, to oversee their work carefully, and to respond rapidly when problems arise – and PLOS fell short on all three counts here.

One issue has to do with the way that editors conceive of their job. It’s always seemed to me that many academic editors think that their primary responsibility is to identify reviewers and then to render decisions on papers after reviews are in. They recognize that they sometimes have to adjudicate between reviewers with different opinions – making them a kind of super reviewer. But I seldom hear academic editors talk about another – arguably more important – aspect of their job, which is to protect authors from lazy, capricious or hostile reviewers. In my experience most editors almost always pass reviews on to authors even if they disagree with them or think they were inadequate – it’s somehow felt to be bad form to have asked for a review and then turn around and not use it. This needs to change. I would argue that protecting authors from reviewer malfeasance or malignancy is the most important role for editors in our current publishing system. Maybe PLOS and other journals already do this, but every academic editor should be trained to recognize and deal with the various types of harassment and other bad reviewer behaviors that we know exist.

But training can only go so far, and we have to assume that there is going to be considerable variance in the manner in which editors work and that some fraction of papers will be handled poorly, especially for a journal like PLOS ONE where a large number of the editors are young and relatively inexperienced. PLOS knows this, of course, and has long wrestled both with how to get more consistent behavior out of its editors and with how to deal with problems when they arise. There are two general possibilities: there could be a second layer of more experienced editors or staffers who review every decision letter for its adherence to PLOS’s editorial standards and code of conduct before it goes out, or PLOS could assume that most decisions are good and rely on feedback from authors (aka complaints) to identify problems.

You can understand why PLOS ONE and most other journals that already rely heavily on academic editors generally choose the latter solution – it’s hard enough to find people to handle manuscripts, and adding a second layer of review would slow things down even further and make them more expensive. But if you’re going to use this strategy, then it seems imperative that you respond to issues – especially serious ones – quickly. And PLOS failed to do this – the authors say they had been waiting for almost a month for PLOS to respond to their complaint about how their manuscript was handled.

PLOS really has to fix this. But I also think they should consider what it would take to have every decision letter screened before it goes out to authors. This would go a long way not only towards preventing harassment in the review process, but also towards ensuring that the whole process is fair (I’ve fielded a fair number of complaints about the failure of editors to properly implement PLOS ONE’s editorial policies – one decision letter I saw described a paper as “technically sound, but not of sufficient interest to merit publication in PLOS ONE” – a clear contradiction of PLOS ONE‘s standards for inclusion).

How much would this cost? Seems like you could hire someone who reads 2-3 decision letters an hour, so let’s say 20 a day, or 5,000 a year. Even if you pay this person a very good salary, you’re only talking $20-$25/article to make sure people aren’t being harassed and are otherwise being treated fairly. Considering that we spend around $6,500/published article on average across the industry, this seems like a pittance.
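To make the back-of-the-envelope arithmetic explicit (a minimal sketch – the 8-hour day, the 250 working days per year, and the $100,000-$125,000 fully loaded salary are my assumed inputs, chosen to match the figures above, not numbers from PLOS):

\[ 2.5\ \text{letters/hour} \times 8\ \text{hours/day} = 20\ \text{letters/day}, \qquad 20 \times 250\ \text{working days} = 5{,}000\ \text{letters/year} \]

\[ \frac{\$100{,}000\ \text{to}\ \$125{,}000\ \text{(assumed annual salary)}}{5{,}000\ \text{letters}} \approx \$20\ \text{to}\ \$25\ \text{per article} \]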

Protecting authors in an open review/post-publication review world

I’ve written a lot about why I think the whole system of pre-publication peer review that dominates science publishing needs to be replaced with a system where papers are published whenever authors feel they are ready, and peer review happens post-publication and is not limited to 2 or 3 handpicked reviewers. I’m not going to rehash why I think this system is better – you can read my arguments here and here. PLOS will begin the first stages of this transition soon. More open peer review will discourage some of the bad behavior that takes place when reviewers are anonymous. And taking away the power individual reviewers currently have to influence the fate of a paper, and thus the careers of its authors, should make review more fair. However, the protections the formal structure of peer review affords authors against bad reviewer behavior could easily be undermined if we rely too heavily on the wisdom of the crowd to police peer review.

The sexist attitudes that reared their ugly head in this case are not going to go away because we change the way peer review works. So it’s very important that, in trying to fix other aspects of science publishing, we don’t end up increasing authors’ exposure to abuse. In this world I think the things discussed above – very clear codes of conduct for reviewers, and proactive policing of reviews – become even more important. And while I’ve been convinced that it’s important to allow reviewers to be unnamed to authors and readers, it’s imperative that they not be truly anonymous – somebody (publisher, scientific society, etc…) has to know who reviewers are so that harassment and other abusive behaviors can be discouraged and dealt with appropriately when they occur.

Please let me know what you think about these issues. I’m sure others have better ideas than I do about how to prevent and deal with harassment in science publishing today and in the future.

UPDATE: Several people on Twitter have noted that the term “sexual harassment” is specific to cases involving unwanted sexual advances. The terms “sexist” and “gender bias” were suggested by some, but I don’t think they capture the egregiousness of the offense, so I changed the title and text to “gender-based harassment”, which I think is more appropriate.


PLOS is anti-elitist! PLOS is elitist! The weird world of open access journalism.

In 2005 I submitted an essay about science publishing to a political magazine. I got a polite reply back saying that the article was interesting and the issue important but that my approach wasn’t right for them. My piece was too straightforward. Too persuasive. They preferred articles that had a simple “hook” and, most importantly, were “counterintuitive”.

Zoom forward a decade and I finally get what they were looking for. In the last few months two articles about open access have appeared in political magazines, both having “counterintuitive” points.

The first, “The Duck Penis Paradox: Is too much Internet pop science drowning out the serious stuff?” by Alice Robb, appeared in September in The New Republic. I spoke to Robb extensively as she worked on the article (although I got labeled “voluble” for my efforts), and as I started to read it, I was reasonably pleased. Although she was a bit flippant, Robb did a credible job of describing the motivation behind PLOS ONE and our rise in the publishing world.

But then she got to her “counterintuitive” point:

So, in many ways, Eisen has won. More people have more access to more studies than ever before. Science has never been so democratic. It’s just not clear whether democracy is what science needs.

Robb goes on, describing how actually reading about the variety of science people are doing gave her a headache, and lamenting the potential loss of filters:

The traditional journals may be inefficient, but they serve a purpose. By establishing a hierarchy, they help direct scientists’ and journalists’ limited attention to the research that deserves it.

So, basically, Robb was complaining that PLOS is bad because it is anti-elitist – that we may not like elitist journals, but we NEED them, lest we leave poor science journalists dangling in the wind, forced to actually read papers and figure out what’s interesting on their own.

Never mind that said meritocracy is demonstrably flawed. Never mind that the current system of peer review sucks at identifying good quality and important science. Never mind that anyone who pays attention to science – and Science – should know that “high quality” journals routinely publish crap. After researching the issue, Robb concluded that even a dysfunctional elitist hierarchy is better than no elitist hierarchy.

In retrospect, this should not have surprised me. For as long as I can remember – and long before that too – The New Republic has been a great defender of our current “meritocracy” in all areas of life. So why should it be a surprise that they view efforts to democratize science as a bad thing?

Robb’s piece is reminiscent of an editorial that appeared in The Harvard Crimson shortly after PLOS ONE was launched:

Getting into Harvard is hard, very hard. Yearly the gatekeepers in Byerly Hall vet thousands of applicants on their merits, rejecting many times the number of students that they accept. But getting a scientific paper published in Science or Nature, today’s pre-eminent scientific journals, is oftentimes harder.

Science, like much of academia, has its own admissions committee. Though over a million manuscripts are published in journals yearly, many more are submitted and rejected. The gatekeepers of science—peer reviewers who are reputable scientists and well versed in a particular field—advise journal editors whether to reject a manuscript outright, send it back for revisions, or publish it.

Without a peer review process to separate the revolutionary papers from the merely good from the rubbish, scientists will have no way of knowing which discoveries and experiments merit their time and interest. Instead, they will spend inordinate amounts of time wading through the quicksand of junk science to get to truly interesting work. Peer reviewers are chosen as peer reviewers for a reason—unlike the hoi polloi that roam the Internet, they have the knowledge and experience to judge scientific research on its merits.

I responded at the time:

As a Harvard graduate and co-founder of the Public Library of Science (PLoS), I was appalled by your editorial, “Keep Science in Print” in which you condemn our new journal PLoS One. The article is too ill-informed and riddled with factual inaccuracies to be taken seriously as an attack on our efforts to rejuvenate peer review by opening up the process to all members of the scientific community. I would normally feel compelled to correct all these errors, but fortunately I don’t have to. Perhaps sensing the opportunity for delicious irony, the “hoi polloi that roam the Internet” have identified and corrected your mistakes in the open commentary you provided for this article.

They did not, however, respond to your repellent effort to rally the forces of elitism to derail a project whose primary aim is to rapidly bring scientific knowledge to everyone. Elite scientific journals are, you argue, like the Harvard admissions committee—carefully separating revolutionary papers from the merely good, just as Byerly Hall culls the unworthy from the ranks of each year’s freshman class. I couldn’t agree more. The two are very similar—and both are deeply flawed. It is impossible for even the smartest scientists to recognize the true merit of a paper before it is published, just as it is impossible to identify the smartest and most talented scholars on the basis of their high school grades and SAT scores.

Think, if you will, of PLoS One as a large public university—our doors are open to papers that might not earn admission to Science or Nature. But, over time, many of these papers will turn out to be outstanding. Once they see PLoS One, we are confident that consumers of scientific papers will discover what employers have long ago: If you’re looking for the imprimatur of greatness, try Nature or Harvard—but if you want the real thing, try PLoS One or Berkeley.

Although I am disappointed that the conversation about PLOS ONE hasn’t really changed in a decade, both The Crimson and TNR were right in calling PLOS ONE an attack on elitism in science. We just differ in whether we think that’s a good thing.

With this critique of PLOS in mind, it was surprising to read an article published earlier this week, “Free Access to Science Research Doesn’t Benefit Everyone” by Rose Eveleth, which comes at open access (and open science in general) with a different “counterintuitive” point. She too starts off with a generally favorable outlook on openness, but quickly comes to a different conclusion: that PLOS is TOO elitist:

Making something open isn’t a simple check box or button—it takes work, money, and time. Often those pushing for open access aren’t the ones who will have to implement it. And for those building their careers, and particularly for underrepresented groups who already face barriers in academia, being open isn’t necessarily the right choice.

Melissa Bates, a physiology researcher at the University of Iowa, says that when it comes to making papers open access, it’s not fair to ask graduate students and early career scientists to bear the brunt of the responsibility. “There’s this idea that open access is this ethical and moral thing, that it’s a morally and ethically grounded movement, and I can appreciate in a sense that it is,” she said. “But there’s also a business model to how science is done.”

That business model isn’t all that different in science publishing than it is in any other kind of print publishing. Putting out a journal costs money. And someone, whether it’s the university, the scientists, the government, the public, or some benevolent billionaire, has to pay for it. Much scientific research is funded by taxpayers. But the editorial process—the printing, the hosting, and the rest of it—is not. “In principle, Open Access is what I call doing the right thing,” said Alan Leshner, the executive publisher of Science, a journal that keeps its papers closed for the first year after they’re published, and then opens them up to the public. “It would be great if we could afford open access to everything we publish immediately. The problem is it costs $50 million a year to publish Science.” Somebody has to foot that bill, he says.

When a paper is accepted to a journal that isn’t automatically open access, in some cases scientists can pay a certain amount of money to release it to the world. Those publishing fees can be thousands of dollars for each paper. Open-access advocates argue that it’s worth the money to put the work out there, but Bates points out that often grants will have a limit to how much someone can spend on publishing fees. Gezelter says that that economic tension is a big one in labs. “Would you rather publish these 10 papers open access or would you rather hire a grad student for a year?” he asks. “It leaves individual scientists in an ethical quandary,” Bates said. “The answer for me is always going to be: I’m going to pay a person.”

There’s a lot to unpack here, but Eveleth’s basic argument is that open access is a high-minded ideal being pushed on young scientists by an elite who don’t understand, or don’t appreciate, the challenges of doing science in the modern world. I agree completely that the system as a whole pushes people away from open access, both in terms of career development (pressure comes from many directions pushing people to publish in the highest impact journals, irrespective of how they are run) and financially (universities heavily subsidize the costs of getting access to subscription journals, but do little to offset the costs of open access journals). There has been a tendency in the OA community (myself included) to put our hopes in young scientists, since the publishing behavior of most established scientists has proven to be beyond amendment. But that’s not fair or reasonable (something previous interactions with Bates helped me to appreciate). So there is value in this piece for shining light on an aspect of open access that hasn’t received a lot of press play.

I really like Eveleth’s writing. But I feel that this piece did not do justice to the past or present of open access in several important ways.

One of the central premises of the story is that the costs associated with open access publishing (or open source software) make it a luxury that many cannot afford. There is some truth to this – the move to open access publishing has shifted the way in which money is transferred from scientists to journals. Although it costs the system far less when people publish in open access journals – the average revenue for subscription journals is around $6,000 a paper, more than even the most expensive open access journals charge, and several times more than the cost of publishing in PLOS ONE – subscription costs are almost completely subsidized by universities, while open access charges rarely are. Thus the money it takes to publish in an open access journal comes out of research funds, while subscription costs do not.

However, Eveleth raises this issue as if it’s something new – an unexpected, and unappreciated, side effect of open access publishing. But this is not a new problem. Supporters of open access have long been aware that until the ~$10b currently spent every year on subscriptions is diverted and used to support publishing in other ways, mechanisms must be developed to help authors whose grant funds are insufficient to cover up-front charges to publish in open access journals. And Eveleth fails to mention the many initiatives designed to address this issue. PLOS (and many other OA publishers) offer fee waivers to authors who are unable to pay the publication fee, and to my knowledge PLOS has never turned away a paper on financial grounds. Furthermore, many funding agencies will cover the costs of OA for their grantees. And an increasing number of universities have open access funds that will cover or help defray these costs for scientists at their institutions. Bates’ own University of Iowa has such a fund, although it is limited to researchers without grants.

And the situation with publishing costs is far more complicated than the story lets on. Many subscription journals also charge authors who publish there – in some cases more than it costs to publish in open access journals. Bates, for example, published an article in the Journal of Applied Physiology last year. This journal charges authors a $50 submission fee, and $75 per page in the final PDF. At 9 pages, this article would have cost them $725. That’s a bit less than publishing in PLOS ONE, but more than it would have cost to publish in PeerJ. And this is low compared to the cost of publishing in other subscription journals. PNAS charges $1,700 per article, for example. While it’s “free” to publish in Science, Nature or Cell, they charge ~$1,000 if you have a color figure (which most articles do). Thus, at an institution that has funds to support open access publishing, it might actually be cheaper to publish in open access journals than in many subscription journals.
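For concreteness, here is the arithmetic behind that $725 figure, using only the fee schedule and page count just quoted:

\[ \$50\ \text{(submission fee)} + 9\ \text{pages} \times \$75\ \text{per page} = \$50 + \$675 = \$725 \]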

I am not denying that people are under severe financial pressure these days, and there are certainly many authors who do not have access to institutional funds to cover these costs. It’s a systemic failure when funding agencies (e.g. the NIH) and institutions that claim to support open access publishing leave authors in a position where they have to choose between publishing in open access journals and having some extra research funds. But it was incorrect of Eveleth to suggest that these financial challenges are unique to open access.

This has been a disturbing trend in journalism about open access lately. It’s become fairly common for people to take a problem with publishing, note that this problem applies to open access journals, and make this a problem for open access. The most egregious example was the “open access sting” carried out by John Bohannon in which he submitted a bogus paper exclusively to open access journals, found that many accepted it, and concluded that open access journals had a problem with peer review. If we are worried about ensuring all scientists have unfettered ability to publish their work – as we should be – we should worry about obstacles to publishing in all journals, not just open access ones.

Leaving the author charges issue, Eveleth chose to wade briefly into the broader economics of scholarly publishing, quoting Alan Leshner, the outgoing publisher of Science, citing the fact that it costs $50,000,000 to publish Science, and complaining, “Somebody has to foot that bill.” But this point is left hanging – with no discussion or response. By doing this Eveleth says to her readers – many of whom, because the story was published outside of the science press, are learning about open access for the first time – that this is a valid and open criticism of open access, for which there is no response. In reality, Leshner has been saying the same thing for over a decade, and I and other open access advocates have a detailed response. I don’t necessarily expect Eveleth to rehash the whole open access debate, but leaving the impression that this is some kind of new, unanswered critique of open access does not do justice to the history of this subject.

There are several problems with Leshner’s statement. Yes, it costs $50,000,000 to publish Science. And there is no way these costs could be covered by the thousand or so authors of research articles it publishes each year ($50,000 a paper would tax even the most well-heeled labs). But the fact that Science cannot come up with a business model that would allow it to make the papers it publishes freely available is not a problem with open access, it’s a problem with Science.

One of the main reasons that Science is so expensive (its cost of ~$50,000 per published paper is roughly 10x the industry average, which is already absurdly high) is that it employs highly paid editors to screen papers, and rejects the vast majority of them. I don’t know the exact numbers, but probably only one in fifty submissions is ultimately published. Thus, even with a fairly gilded staff, their cost per submitted paper is a much more reasonable $1,000. The problem with Science (and Nature, Cell and other high profile journals) is that this “review but reject most papers” model is a relic of the print age, when space in a printed journal was limited by the cost of paper and shipping. But those costs are gone. And instead Science maintains a false scarcity to drive up the value of its brand. The alternative is a system in which we decouple the act of publishing from review – a system in which all papers are rigorously assessed, but where the assessment – whether good or bad – is simply published alongside the paper, rather than used as the basis for an absurd partitioning of papers into the 20,000 silos we call journals. (I’ve written about this more extensively here and here). People might not agree this is a better solution – but given that Eveleth raised this issue, it is a disservice to the topic and her readers that she didn’t contextualize Leshner’s quote properly.
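Spelled out, the per-paper versus per-submission arithmetic looks like this (remember that the one-in-fifty acceptance rate is, as noted above, my guess rather than a published figure):

\[ \frac{\$50{,}000{,}000\ \text{annual cost}}{\sim 1{,}000\ \text{published papers}} \approx \$50{,}000\ \text{per published paper} \]

\[ \sim 1{,}000\ \text{papers} \times 50\ \text{submissions per acceptance} \approx 50{,}000\ \text{submissions}, \qquad \frac{\$50{,}000{,}000}{50{,}000} \approx \$1{,}000\ \text{per submission} \]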

After raising the cost issue, Eveleth moves on to argue that open access is also a luxury of a non-financial sort – available only to people who are well established – and that publishing in open access journals is intrinsically bad for one’s career. I know that everybody believes that a paper in Science, Nature, Cell, NEJM or JAMA is a ticket to career success, and that to some extent this becomes a self-fulfilling prophecy. But I think that – despite this near universal perception – the effect isn’t nearly as strong as people think. There is certainly a correlation between career success and publishing in these journals. But as we all know, correlation does not necessarily imply causation, and I think it’s very possible that people get jobs/grants/tenure as well as big 5 publications because the same criteria are applied in hiring, promotion and funding as are applied in selecting papers for publication.

I understand why Bates and most other scientists say that they will always choose to publish in these journals if offered the chance, because it is something under their control that they believe might lead to greater career success. But it’s disappointing that an excellent journalist like Eveleth just takes this assumption at face value instead of questioning it or at least pushing back on people who assert it as if it is a fact.

Finally, Eveleth makes the point that open access is elitist because it is particularly dangerous for scientists early in their careers to pursue. It is, of course, obviously true that scientists at different stages of their careers face different challenges. I am, personally, more able to take risks than, say, a postdoc looking for a job, or an untenured, unfunded new PI. But nearly every paper I have ever published, and nearly every paper anyone ever publishes, has primary authors who are not well established. It’s the way science works. A graduate student, postdoc or other young scientist is the first author on the vast majority of papers published. And so nearly every paper involves someone in a vulnerable position in their career who would stand to benefit from whatever boost one gets from publishing a high impact paper. Thus the oft-repeated idea that there is some special subset of papers whose authors can safely publish in open access journals, while the authors of other papers cannot, is, to a large extent, not true.

In saying that, I am not trying to argue that Bates or any other scientist should be asked to gratuitously endanger their careers for the greater good, or that everyone faces anything remotely like equal challenges in building a successful career in science. Rather, I think it is important to note that the concerns Bates expresses apply far more broadly than the article implies. Indeed, as successful as open access publishing has been, it is one of the movement’s great failings that we have not upended the system to the point where people like Bates, who appears to genuinely support the ideals of open access, feel that publishing in open access journals is the best way to build their careers. Until we change this, the movement for greater openness in science will not succeed.

So, despite its failings in accurately representing open access, Eveleth’s piece serves a useful purpose. I believe the open access movement is driven primarily by anti-elitist sentiments – a desire to free information, to remove its control from the forces of commerce, and to break down the elitist hegemony of high-profile journals. But the elitist risks in open access are real. I don’t think they’re the fault of the open access movement – we have tried from the beginning to get the powers that control the funds spent on subscriptions to use them instead to fully subsidize open access fees, and we have tried to undermine and ultimately destroy the impact factor driven culture of high-profile journals and their perceived role in hiring, funding and promotion. But the forces of inertia have, so far, been too strong. Our fault or not, it is crucial that we listen to the concerns of young scientists like Bates and try to make sure that open access really is accessible to everyone.

[NOTE: In the original version of this piece I suggested the Iowa open access fund would have covered Bates’ open access fees. It wouldn’t have as it was restricted to researchers without grants. I apologize for suggesting otherwise and for being an asshole about it.]


Is Nature’s “free to view” a magnanimous gesture or a cynical ploy?

Macmillan, the publisher of Nature and 48 other Nature Publishing Group (NPG) journals, announced today that all research papers published in these journals would be “made free to read in a proprietary screen-view format that can be annotated but not copied, printed or downloaded”.

If you believe, as I do, that paywalls that restrict the free flow of scientific knowledge are a bad thing, then anything that removes some of these restrictions is a good thing.

This move is fairly typical of Nature as of late. Despite its place as one of the oldest and most august big kahunas of the subscription publishing world, Nature – and especially its Digital Science division – has been far more attuned to the ways the Internet has changed publishing than its competitors. And, because of the rise of open access publishing and funder efforts to provide access to the papers they fund, people increasingly expect to be able to access scientific publications, and Nature is responding to that expectation.

There are really two parts of this announcement.

  1. A smallish number (~100) of media outlets and bloggers will be able to provide a link to Nature papers they are writing about that will allow their readers to access those papers for free.
  2. Subscribers to Nature and other NPG journals will be able to generate and share such links by email, on Twitter, etc…

It’s actually kind of brilliant on Nature‘s part. They are giving up absolutely nothing. Readers of news stories about Nature articles were never going to pay to access the actual articles (like other publishers, Nature has tried a pay-per-view system, and it has completely failed). And individuals and institutions that subscribe to Nature aren’t going to give up the convenience of being able to read articles on demand for the challenge of finding a link on Twitter (unless someone were to set up a database of these links…. hmmm….).

And let’s remember that subscribers to Nature were already sharing copies of downloaded PDFs quite abundantly. This was not, as Nature argues, happening in an inconvenient way in the dark corners of the Internet. It was happening in email and on Twitter. The problem was that Nature had no control over this sharing. So, really, they’re not changing people’s ability to access Nature very much – what they’re doing is changing where people access it, likely with the hope that they will figure out ways to monetize this attention.

Thus Nature gets lots of goodwill and more people reading their papers, and they lose nothing in the process. At least not immediately. Because the irony of a system like this is that it can’t ever actually do what it purports to do. If it ever truly made it possible to find and get free access to any Nature paper, people would stop subscribing and Nature would have to end this kind of access.

At the end of the day, this is a pretty cynical move. I’m sure the people at Nature want as many people as possible to read their articles. But this move is really about defusing pressure from various sources to provide free access. Yet Nature knows that they can’t really provide free access without giving up their lucrative subscription business model, which they are unwilling to do. So they do something that makes it seem like they are promoting free access, while doing nothing to address the real obstacle to free access – subscription publishing.

It is also worth noting how Nature is defining access down. First we had “open access” in which people can download, read, reuse and redistribute content. Then we had “public access” in which people can download and read content. Now we have “free access” in which people can read for free in a proprietary browser, and can’t download or print. This is going in the wrong direction, and it would be a disaster for science if – as Nature clearly hopes – this is the definition of access that sticks.
