On pastrami and the business of PLOS

Last week my friend Andy Kern (a population geneticist at Rutgers) went on a bit of a bender on Twitter prompted by his discovery of PLOS’s IRS Form 990 – the annual required financial filing of non-profit corporations in the United States. You can read his string of tweets and my responses, but the gist of his critique is this: PLOS pays its executives too much, and has an obscene amount of money in the bank.

Let me start by saying that I understand where his disdain comes from. Back when we were starting PLOS we began digging into the finances of the scientific societies that were fighting open access, and I was shocked to see how much money they were sitting on and how much their CEOs get paid. If I weren’t involved with PLOS, and I’d stumbled upon PLOS’s Form 990 now, I’d have probably raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.

Let me start with something on which I agree with Andy completely: science publishing is way too expensive. Andy says he originally started poking into PLOS’s finances because he wanted to know where the $2,250 he was asked to pay to publish in PLOS Genetics went, as this seemed like a lot of money to take a paper, have a volunteer academic serve as editor, find several additional volunteers to serve as peer reviewers, and then, if they accept the paper, turn it into a PDF and HTML version and publish it online. And he’s right. It is too much money.

That $2,250 is only about a third of the $6,000 a typical subscription journal takes in for every paper they publish, and that $6,000 buys access for only a tiny fraction of the world’s population, while the $2,250 buys it for everyone. But $2,250 is still too much, as is the $1,495 at PLOS ONE. I’ve always said that our goal should be to make it cost as little as possible to publish, and that our starting point should be $0 a paper.

The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different. There’s a lot of manual labor involved in making sure each submission is complete, that it passes ethical and technical checks, and in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text, figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.

The difference in price between our journals reflects different costs. PLOS Biology and PLOS Medicine have professional editors handling each manuscript, so they’re intrinsically more expensive to operate. They also have relatively low acceptance rates, meaning a lot of staff time is spent on rejected papers, which generate no revenue. This is also the reason for the difference in price between our community journals like PLOS Genetics and PLOS ONE: the community journals reject more papers and thus we have to charge more per accepted paper. It might seem absurd to have people pay to reject other people’s papers, but if you think about it, that’s exactly what makes selective journals attractive – they publish your paper while rejecting lots of others. I’ve argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they’re going to have these weird economics. And note this is not just true of open access journals – higher impact subscription journals bring in a lot more money per published paper than low impact subscription journals, for essentially the same reason.

Could PLOS do all these things more efficiently, more effectively and for less money? Absolutely. We, like most other big publishers, are using legacy software and systems to handle submissions, manage peer review and convert manuscripts into published papers. These systems are, for the most part, outdated and difficult or expensive (usually both) to customize. We are in a challenging situation since, until very recently, we weren’t in a position to develop our own systems for doing all these things, and we couldn’t just switch to cheaper or free systems since they weren’t built to handle the volume of papers we deal with.

That said, it’s certainly possible to run journals much, much more cheaply. It costs the physics pre-print arXiv something like $10 a paper to maintain its software, screening and website. There are times when I wish PLOS had just hacked together a bunch of Perl scripts and hung out a shingle and built in new features as we needed them. But part of what made PLOS appealing at the start is that it didn’t work that way – for better or worse it looked like a real journal, and this was one of the things that made people comfortable with our (at the time) weird economic model. I’m not sure this is true anymore, and if I were starting PLOS today I would do things differently, and I think I could do things much less expensively. I would love it if people would set up inexpensive or even free open access biology journals – it’s certainly possible with open source software and fully volunteer labor – and for people to get comfortable with biomedical publishing basically being no different from just posting work on the Internet, with lightweight systems for peer review. That has always seemed to me to be the right way to do things. But PLOS can’t just pull the plug on all the things we do, so we’re trying to achieve the same goal by investing in developing software that will make it possible to do all of the things PLOS does faster, better and cheaper. We’re going to start rolling it out this year, and, while I don’t run PLOS and can’t speak for the whole board, I am confident that this will bring our costs down significantly and that we will ultimately be in a position to reduce prices.

Which brings us to issue number two. Andy and a lot of other people took umbrage at the fact that PLOS has margins of 20% and ~$25 million in assets. Again, I understand why people look at these numbers and find them shocking – anything involving millions of dollars always seems like a lot of money. But this is a misconception. Both of these numbers represent nothing more than what is required for PLOS to be a stable enterprise.

I’ll start by reminding people that PLOS is still a relatively young company, working in a rapidly changing industry. Like most startups, it took a long time for PLOS to break even. For the first nine years of our existence we lost money every year, and were able to build our business only because we got strong support from foundations that believed in what we were doing. Finally, in 2011, we reached the point where we were taking in slightly more money than we were spending, allowing us to wean ourselves off foundation support. But we still had essentially no money in the bank, and that’s not a good thing. Good operating practices for any business dictate that the company have money in the bank to cover a downturn in revenue. This is particularly the case with open access publishers, since we have no guaranteed revenue stream – in contrast to subscription publishers who make long-term subscription deals. What’s more, this industry is changing rapidly, with the number of papers going to open access journals growing and many new open access publishers entering the market. So it’s very hard for us to predict what our business is going to look like from year to year, while a lot of our expenses, like rent, software licenses and salaries, have to be paid before the revenue they enable comes in. The only way to survive in this market is to have a decent amount of money in the bank to buffer against the unpredictable. If anything, I am told by people who spend their lives thinking about these things, we’re cutting things a little close. So, while 20% margins may seem like a lot, given our overall financial situation and the fact that we’ve been profitable for only five years, I think it’s actually a reasonable compromise between keeping costs as low as we can and ensuring that PLOS remains financially stable while also allowing us to make modest investments in technology that will make publishing better and cheaper in the long run.

Just to put these numbers in perspective for people who (like me) aren’t trained to think about these things, I had a look at the finances of a large set of scientific societies. I looked primarily at the members of FASEB, a federation of most of the major societies in molecular biology. Many of them have larger operating margins, and far larger cash reserves than PLOS. And I haven’t found one yet that doesn’t have a larger ratio of assets to expenses than PLOS does. And these are all organizations that have far more stable revenue streams than PLOS does. So I just don’t think it’s fair to suggest that either PLOS’s margins or reserves are untoward.

Indeed these numbers represent something important – that PLOS has become a successful business. I’ll once again remind people that one of the major knocks against open access when PLOS started was that we were a bunch of naive idealists (that’s the nicest way people put it) who didn’t understand what it took to run a successful business. Commercial publishers and societies alike argued repeatedly to scientists, funders and legislators that the only way to make money in science publishing was to use a subscription model. So it was absolutely critical to the success of the open access movement that PLOS not only succeed as a publisher, but that we also succeed as a business – to show the commercial and society publishers that their principal argument for why they refused to shift to open access was wrong. Having been the recipient of withering criticism – both personally and as an organization – about being too financially naive, it’s ironic and a bit mind boggling to all of a sudden be criticized for having created too good of a business.

Now, despite that, I don’t want people to confuse my defense of PLOS’s business success with a defense of the business it’s engaged in. I believe the APC/service business model PLOS has helped to develop is far, far superior to the traditional subscription model, because it does not require paywalls. But I’ve never been comfortable with the APC business model in an absolute sense (and I recognize the irony of my saying that) because I wish science publishing weren’t a business at all. When we started PLOS the only way we had to make money was through APCs, but if I had my druthers we’d all just post papers online on a centralized server funded and run by a coalition of governments and funders, and scientists would use lightweight software to peer review published papers and organize the literature in useful ways. And no money would be exchanged in the process. I’m glad that PLOS is stable and has shown the world that the APC model can work, but I hope that we can soon move beyond it to a very different system.

Now I want to end on the issue that seemed to upset people the most – the salaries of PLOS’s executives. I am immensely proud of the executive team at PLOS – they are talented and dedicated. They make competitive salaries – and we’d have trouble hiring and retaining them if they didn’t. The board has been doing what we felt we had to do to build a successful company in the marketplace we live in – after all, we were founded to fix science publishing, not capitalism. But as an individual I can’t help but feel that’s a copout. The truth is the general criticism is right. A system where executives make so much more money than the staff they supervise isn’t just unfair, it’s ultimately corrosive. It’s something we all have to work to change, and I wish I’d done more to make PLOS a model of a better way.

Finally, I want to acknowledge a tension evident in a lot of the discussion around this issue. Some of the criticisms of PLOS – especially about margins and cash flow – have been just plain unfair. But others – about salaries and transparency – reflect something important. I think people understand that in these ways PLOS is just being a typical company. But we weren’t founded to be just a typical company – we were founded to be different and, yes, better, and people have higher expectations of us than they do of a typical company. I want it to be that way. But PLOS was also not founded to fail – that would have been terrible for the push for openness in science publishing. I am immensely proud of PLOS’s success as a publisher, agent for change, and a business – and of all the people inside and outside of the organization who helped achieve it. Throughout PLOS’s history there were times we had to choose between abstract ideals and the reality of making PLOS a successful business, and I think, overall, we’ve done a good, but far from perfect, job of balancing this tension. And moving forward I personally pledge to do a better job of figuring out how to be successful while fully living up to those ideals.

 


Berkeley’s Handling of Sexual Harassment is a Disgrace

What more is there to say?

Another case where a senior member of the Berkeley faculty, this time Berkeley Law Dean Sujit Choudhry, was found to have violated the campus’s sexual harassment policy, and was given a slap on the wrist by the administration. Astronomer Geoff Marcy’s punishment for years of harassment of students was a talking to and a warning never to do it again, and now Choudhry was put on some kind of secret probation for a year, sent for additional training, and docked 10% of his meager $470,000 a year salary.

Despite a constant refrain from senior administrators that the university takes cases of sexual harassment seriously, the administration’s actions demonstrate that it does not. What is the point of having a sexual harassment policy if violations of it carry essentially no sanctions? Through its responses to Marcy and Choudhry, it is now clear that the university views sexual harassment by its senior male faculty not as what it is – an inexcusable abuse of power that undermines the university’s entire mission and has a severe negative effect on our students and staff – but rather as a mistake that some faculty make because they don’t know better.

If the university wants to show that it is serious about ending sexual harassment on campus, then it has to take cases of sexual harassment seriously. This means being unambiguous about what is and is not acceptable behavior, and imposing real consequences when people violate the rules. Faculty and administrators who engage in harassing behavior don’t do it by accident. They make a choice to engage in behavior they either know is wrong, or have no excuse for not knowing is wrong. And, at Berkeley at least, they do so knowing that if they get caught, the university will respond by saying “Bad boy. Don’t do that again. We’re watching you now.” Does anyone think this is an actual deterrent?

Through its handling of the Marcy, Choudhry and other cases, the Berkeley administration has shown utter contempt for the welfare of its students and staff. It has shown that it views its job not as creating an optimal environment for education by ensuring that faculty behavior is consistent with the university’s mission, but as protecting faculty, especially famous ones, from the consequences of their actions.

It is now clear that excuse making and wrist slapping in response to sexual harassment is so endemic in the Berkeley administration that it might as well be official policy. And just as there is no excuse for sexually harassing students and staff, there is no excuse for condoning this kind of behavior. It’s time for the administrators – all of them – who have repeatedly failed the campus community on this issue to go. It’s the only way forward.

[Image: Berkeley administration org chart]


I’m Excited! A Post Pre-Print-Posting-Powwow Post

I just got back from attending a meeting organized by a new group called ASAPbio whose mission is to promote the use of pre-prints in biology.

I should start by saying that I am a big believer in this mission. I have been working for two decades to convince biomedical researchers that the Internet can be more than a place to download PDFs from paywalled journal websites, and universal posting of pre-prints – or “immediate publication” as I think it should be known – is a crucial step towards the more effective use of the Internet in science communication. We should have done this 20 years ago, when the modern Internet was born, but better late than never.

There were reasons to be skeptical about this meeting. First, change needs to happen on the ground, not in conference halls – I have been to too many publishing meetings that involved a lot of great talks about the problems with publishing and how to fix them, but which didn’t amount to much because these calls weren’t translated into action. Second, the elite scientists, funders and publishers who formed the bulk of the invite-only ASAPbio attendees have generally been the least responsive to calls to reform biomedical publishing (I understand why this was the target group – while young, Internet-savvy scientists tend to be much more supportive in principle, they are reluctant to act because of fears about how it will affect their careers, and are looking towards the establishment to take the first steps). Finally, my new partner-in-crime Leslie Vosshall and I spent a lot of time and energy trying to rally support for pre-prints online leading up to the meeting, and it wasn’t like people were knocking down the doors to sign on to the cause.

However, I wouldn’t have kept at this for almost half my life if I wasn’t an eternal optimist, and I went into the meeting hoping, if not believing, that this time might be different. And I have to say I was pleasantly surprised. By the end of the meeting’s 24 hours it seemed like nearly everyone in attendance was sold on the idea that biomedical researchers should all post pre-prints of their work, and had already turned their attention to questions about how to do it. And there was surprisingly little resistance to the idea that post-publication review of papers initially posted as pre-prints could, at least in principle, fulfill the functions that pre-publication review currently carries out. That’s not to say there weren’t concerns and even some objections – there were, as I will discuss below. But these were all dealt with to varying degrees, and there seemed to be a general attitude that these concerns could be addressed, and did not constitute reasons not to proceed.

Honestly, I don’t think any new ideas emerged from the meeting. Everything that was discussed has been discussed and written about extensively before. But the purpose of the meeting was not to break new ground. Rather I think the organizers were trying to do three things (I’m projecting a bit here since I wasn’t one of the organizers):

  • To transfer knowledge from the small group of us who have been in the trenches of this movement to prominent members of the research community who are open to these ideas, but who hadn’t really ever given them much thought or attention
  • To make sure potential pitfalls and challenges of pre-prints were discussed. Although the meeting was dominated by members of the establishment, there were several young PIs and postdocs, representatives of different fields and a few international participants, who raised a number of important issues and generally kept the meeting from becoming a self-congratulatory elite-fest.
  • To inspire everyone to act in tangible ways to promote pre-print use.

And I think the meeting was highly effective in all three regards. For those of you who weren’t there and didn’t follow online or on video, here’s a rough summary of what happened (there are archived videos here).

The opening night was dominated by a keynote talk from Paul Ginsparg, who in 1991 started an online pre-print server for physics that is now the locus for the initial publishing of essentially all new work in physics, mathematics and some areas of computer science. Paul is a personal hero of mine – for what he did with arXiv and for just being a no bullshit advocate for sanity in science publishing – so I was bummed that he couldn’t make it in person because of weather-related travel issues. But his appearance as a giant head on a giant screen by video-conference was a fitting representation of his giant place in pre-print history. His talk was very effective in squashing the typical gloom-and-doom about the end of quality science that often comes up when pre-prints are discussed. A little bit of biology exceptionalism came up in the Q&A (“Yeah, it works for physics, but biology is different…”) but I thought Paul put most of those ideas to rest, especially the idea that all physics is done by giant groups working underground surrounded by large metal tubes.

The second day had two sessions, each structured around a series of a dozen or so five-minute talks, followed by breakout sessions and then discussion. The morning focused on why people don’t use pre-prints – concerns about establishing priority, being able to publish in journals, getting jobs and funding – and how to address these concerns, while the afternoon sessions were about how to use pre-prints in evaluating papers and scientists and in finding and organizing published scientific information.

I can’t summarize everything that was discussed, but here, in no particular order, are some of my thoughts on the meeting and where to go from here:

I was surprised at how uncontroversial pre-prints were

Having watched the battles over Harold Varmus’ proposal to have biologists embrace pre-prints in 1999, and having taken infinite flak over the last 20 years for promoting a model of science communication based on immediate publication and post-publication peer review, I expected the idea that biologists should make their work initially available as pre-prints to be controversial. But it wasn’t. Essentially everyone at the meeting embraced the basic concept of pre-prints from the beginning, and we spent most of the meeting discussing details about how a pre-print system in biology can and should work, and how to build momentum for pre-print use.

I honestly don’t know how this happened. Pre-prints are close to invisible in biology (we didn’t really have a viable pre-print server until a year or so ago) and other recent efforts to promote pre-print usage in biology have been poorly received. There is lots of evidence from social media that most members of the community fall somewhere in the skeptical to hostile range when discussing pre-prints. Some of it is selection bias – people hostile to pre-prints weren’t likely to agree to come to a meeting on pre-prints that they (mostly) had to pay their own way to attend.

But I think it’s bigger than that. I think the publishing zeitgeist may have finally shifted. I’ve felt this way before, so I’m not sure I’m a good witness. But I think people are really ready for it this time. The signs were certainly there: after all, Ron Vale, who organized ASAPbio, is no publishing radical – his publishing record is everything I’ve been trying to fight against for the last 20 years. But now he’s a convert, at least on pre-prints, and others are following suit. I don’t know whether it’s because all our work has finally paid off, or if it’s just time. The Internet has become so ingrained in our lives that maybe people finally realized how ridiculous it is that people all over the world can watch the ASAPbio meeting streaming live on their computers, but have to wait months and months and months to read about our latest science.

In the end I don’t really care why things seem to have changed. Even as I redouble my efforts to make sure this moment doesn’t elude us, I’m going to celebrate – this has been a long time coming.

Glamour journals remain a huge problem

One of the most shocking moments of this meeting came in a discussion right before the close about how to move forward to make pre-prints work. Marc Kirschner, a prominent cell biologist, made the suggestion that people at the meeting publish pre-prints of their papers at the time of submission so long as it is OK with the journal they plan to submit it to. I don’t think Kirschner was trying to set down some kind of abstract principle. Rather I think he was speaking to the reality that no matter how effectively we sell pre-prints, in the short run most scientists are still going to strive to put their work in the highest profile journals they can get them into; and we can make progress with pre-prints if we point out that a lot of journals people choose to publish in for other reasons allow them to post pre-prints and they should avail themselves of this opportunity.

This was the one time at the meeting where I lost my cool (a publishing meeting where I lose my cool only once is a first). It’s not that it surprises me that journals have this kind of hold on people. But I was still flabbergasted that after a meeting whose entire point was that it would be really good for science if people posted pre-prints, someone could suggest that we should give journals – not scientists – the power to decide whether pre-print posting is okay. And I couldn’t believe that people in the audience didn’t rise up in outrage at the most glaring and obvious example of how dysfunctional and toxic – one might even say dystopian – our relationship to journals is.

This is why I maintain my position – echoed by Vitek Tracz at the meeting, and endorsed by a handful of others – that science communication is never going to function optimally until we rid ourselves of the publish or reject paradigm employed by virtually all journals, and until we stop defining our success as scientists based on whether or not we could winkle our way into one of the uber-exclusive slots in glamorous journals. If anything is going to stop the move towards pre-prints, it’s going to be our proclivity for “glamor humping” (as blogger DrugMonkey has aptly dubbed this phenomenon). And if anything has the power to undermine the benefits of pre-prints, it’s if we allow this mentality to dominate in the post-journal world.

People have weird views of priority

One of the few new things I learned at this meeting is how obsessed a large number of people are with technical definitions of priority. We spent 30 minutes talking about whether pre-prints should count in establishing priority for discoveries. First of all, I can’t believe there’s any question about this – of course they should! But more importantly, who thinks that questions of priority actually get decided by carefully scrutinizing who published what, when and on what date? It’s a lovely scholarly ideal to imagine that there’s some kind of court of science justice where hearings are held on every new idea or discovery, where a panel of judges examines everything that’s been published or said about it, and then rules on who really was the first to publish, or present, the idea or discovery in a sufficiently complete form to get credit for it.

But I’ve got news for all the people counting submission dates on the head of a pin – outside of patent cases, where such courts really do exist, at least in theory, that ain’t the way it works. True priority is constantly losing out in the real world, where who you are, where you work, where you publish and how you sell yourself are often far more important than submission or publication dates in determining who gets credit (and its trappings) for scientific advances.

Cell Press has a horrible, but kind of sane, policy on pre-prints

One of the things that I think a lot of people coming to the meeting didn’t realize is that many journals are perfectly fine with people posting pre-prints of articles that are being considered by the journal. Some, like eLife, PLOS, PeerJ and Genetics, actively encourage it. Others, like EMBO, PNAS, Science and all Nature journals, unambiguously allow pre-print posting. On the flip side, journals from the American Chemical Society and some other publishers will not accept papers if they were posted as pre-prints. And then there’s Cell.

Cell‘s policy is, on the surface, hard to parse:

If you have questions about whether posting a manuscript or data that you plan to submit to this journal on an openly available preprint server or poster repository would affect consideration, we encourage you to contact an editor so that we may provide more specific guidance. In many cases, posting will be possible.

Fortunately, Emilie Marcus, CEO of Cell Press and Editor-in-Chief of Cell, was at the meeting to explain it to us. Her response – and I’m paraphrasing, but I think I’m capturing it correctly – was that they are happy to publish papers initially posted as pre-prints so long as the information in the paper has not already been noticed by people in the field. In other words, it’s ok to post pre-prints so long as nobody noticed the pre-print. That is, they are rather unambiguously not endorsing the point of pre-prints, which is to get your work out to the community more quickly and effectively.

This is a pretty cynical policy. Cell clearly wants to get credit for being down with pre-prints without actually sanctioning them. But I actually found Marcus’s explanation of the policy to make sense, in a way. She views Cell as a publisher, and, as such, its role is to make information public. If that information has already been successfully conveyed by other means, then the role of publisher is no longer required.

This is obviously a quaint view – Cell is technically a publisher, but its more important role is as a selector of research that it deems to be interesting and important. So I think it’s more appropriate to look at this as a business decision. In refusing to help make pre-prints a reality, Elsevier and Cell Press are acting as if they believe pre-prints are a threat to their bottom line. And they’re right. Because if pre-prints become universal, who in their right mind is going to subscribe to Cell?

Maybe the other journals that endorse pre-prints are banking on the symbiosis between pre-prints and journals that exists in physics being extended to biomedicine. In questions after his talk Ginsparg said that ~80% of papers posted on arXiv are ultimately published in a peer-reviewed journal. And these journals are almost exclusively subscription based. So why don’t libraries cancel these subscriptions? The optimistic answer (for those who like journals) is that libraries want to support the services journals provide and are willing to pay for them even if they’re not providing access to the literature. This may be true. But the money in physics publishing is a drop in the bucket compared to biomedicine, and I just can’t see libraries continuing to spend millions of dollars per year on subscriptions to journals that provide paywalled access to content that is freely available elsewhere. I could be wrong, of course, but it seems like Elsevier, who for all their flaws clearly know how to make money, in this case agrees with me.

I don’t know what effect the Cell policy will have in the short run. I’d like to think people who are supportive of pre-prints will think twice before sending a paper to Cell in the future because of this policy (of course I’d like it if they never considered Cell in the first place, but who am I kidding). But I suspect this is going to be a drag on the growth of pre-prints — how big a drag, I don’t know, but it’s something we’re probably going to have to work around.

There are a lot of challenges in building a fair and effective pre-print system

The position of young scientists on pre-prints is interesting. On the one hand, they have never scienced without the Internet, and are accustomed to being able to get access to information easily and quickly. On the other hand, they are afraid that the kinds of changes we are pushing will make their lives more difficult, and will make many of the pathologies in the current system, especially those biased against them, worse. Even those who have no reservations about pre-prints and/or post-publication review don’t feel like they’re in a position to lead the charge.

This is one of the biggest challenges we have moving forward. I have no doubt that science communication systems built around immediate publication and post-publication review can be better for both science and scientists. But that doesn’t mean they automatically will be better. Indeed, I share many others’ concerns about turning science into an even bigger popularity contest than it already is; about making it easier for powerful scientists to reinforce their positions and thwart their less powerful competitors; about increasing the potency of the myriad biases that poison training, hiring, promotion and funding; about making the process of receiving feedback on your work even less pleasant and collegial than it already is; and about increasing the incentives for scientists to prioritize glamour over doing rigorous, high-quality and durable work.

I will write more elsewhere about these issues and how I think we should try to address them. But it is of paramount importance that everybody who is trying to promote the move to pre-prints and beyond, and who is building systems to do this, be mindful of all these risks and do everything in their power to make sure the new systems work for everyone in science. We have to remember that for every bigshot who opposes pre-prints because they want to preserve their ability to publish in Cell, there are hundreds of scientists who just want to preserve their ability to do science. If this latter group doesn’t believe that pre-print posting is good for them, we will not only fail to convince them to join us on this path, but we run the serious risk of making science worse than it already is. And that would be a disaster.

Will attendees of the meeting practice what they preached?

Much of the focus of the meeting organizers was on getting people who attended the meeting to sign on to a series of documents expressing various types of commitment to promoting pre-prints in biomedicine (you can see these on the ASAPbio site). These documents are fairly strong, and I will sign them. But I’m sick of pledges. I’ve been down this path too many times before. People come to meetings, they sign a document saying they do all sorts of great stuff, and then they forget about it.

The only thing that matters to me is making sure that the people who attended the meeting and who seemed really energized about making pre-prints work start to put this enthusiasm into practice immediately. I look forward to quick, concrete action from funders. But the immediate goal of the scientists at the meeting, and of those who support its goals, must be to start posting pre-prints. This is especially true of prominent, senior scientists. There were four Nobelists at the meeting, many members of national academies, and other A-list scientists. It’s a small number of people in the grand scheme of things, but it will mean a lot if these scientists demonstrate that they are really committed to making pre-prints work by starting to post pre-prints in the next week (I suspect that most people at this level have a paper under review at all times). I am confident that their commitment is genuine – indeed some have already posted pre-prints from their labs since the meeting ended yesterday.

Obviously we don’t want pre-prints to be the domain of the scientific 1%. But we have to start somewhere, and if people who have nothing to lose won’t lead the way, then it will never happen. But it seems like they actually are leading the way. There’s tons more hard work to do, but let’s not miss this opportunity. The rainbow unicorn is watching.

[Image: rainbow unicorn]

 


The Villain of CRISPR

There is something mesmerizing about an evil genius at the height of their craft, and Eric Lander is an evil genius at the height of his craft.

Lander’s recent essay in Cell entitled “The Heroes of CRISPR” is his masterwork, at once so evil and yet so brilliant that I find it hard not to stand in awe even as I picture him cackling loudly in his Kendall Square lair, giant laser weapon behind him poised to destroy Berkeley if we don’t hand over our patents.

This paper is the latest entry in Lander’s decades-long assault on the truth. During his rise from math prodigy to economist to the de facto head of the public human genome project to member of Obama’s council of science advisors to director of the powerful Broad Institute, he has shown an unfortunate tendency to treat the truth as an obstacle that must be overcome on his way to global scientific domination. And when one of the world’s most influential scientists treats science’s most elemental and valuable commodity with such disdain, the damage is incalculable.

CRISPR, for those of you who do not know, is an anti-viral immune system found in archaea and bacteria that, until a few years ago, was all but unknown outside the small group of scientists, mostly microbiologists, who had been studying it since its discovery a quarter century ago. Interest in CRISPR spiked in 2012 when a paper from colleagues of mine at Berkeley and their collaborators in Europe described a simple way to repurpose components of the CRISPR system of the bacterium Streptococcus pyogenes to cut DNA in an easily programmable manner.

Such capability had been long sought by biologists, as targeted DNA cleavage is the first step in gene editing – the ability to replace one piece of DNA in an organism’s genome with DNA engineered in the lab. This 2012 paper from Martin Jinek and colleagues was quickly joined by a raft of others applying the method in vivo, modifying and improving it in myriad ways, and utilizing its components for other purposes. Among the earliest was a paper from Le Cong and Fei Ann Ran working at Lander’s Broad Institute which described CRISPR-based gene editing in human and mouse cells.

Now, less than four years after breaking onto the gene-editing scene, virtually all molecular biology labs are either using, or planning to use, CRISPR in their research. And amidst this explosion of interest, fights have erupted over who deserves the accolades that usually follow such scientific advances, and who owns the patents on the use of CRISPR in gene editing.

The most high-profile of these battles pits Berkeley against the Broad Institute, although researchers from many other institutions made important contributions. Jinek’s work was carried out in the lab of Berkeley’s Jennifer Doudna, and in close collaboration with Emmanuelle Charpentier, now at the Max Planck Institute for Infection Biology in Berlin; while Cong and Ran were working under the auspices of the Broad’s Feng Zhang. Interestingly, the prizes for CRISPR have largely gone to Doudna and Charpentier, while, for now at least, the important patents are held by Zhang and the Broad. But this could all soon change.

There has been extensive speculation that CRISPR gene editing will earn Doudna and Charpentier a Nobel Prize, but there has been considerable lobbying for Zhang to join them (Nobel Prizes are, unfortunately, doled out to a maximum of three people). On the flip side, the Broad’s claim to the patent is under dispute, and is the subject of a legal battle that could turn into one of the biggest and most important in biotechnology history.

I am, of course, not a disinterested party. I know Jennifer well and am thrilled that her work is getting such positive attention. I also stand to benefit professionally if the patents are awarded to Berkeley, as my department will get a portion of what are likely to be significant proceeds (I have no personal stake in any CRISPR-related patents or companies).

But if I had my way, there would be no winner in either of these fights. The way prizes like the Nobel give disproportionate credit to a handful of individuals is an injustice to the way science really works. When accolades are given exclusively to only a few of the people who participated in an important discovery, it by necessity denies credit to countless other people who also deserve it. We should celebrate the long series of discoveries and inventions that brought CRISPR to the forefront of science, and all the people who participated in them, rather than trying to decide which three were the most important.

And, as I have long argued, I believe that neither Berkeley nor MIT should have patents on CRISPR, since it is a disservice to science and the public for academic scientists to ever claim intellectual property in their work.

Nonetheless, these fights are underway. Which brings us back to Dr. Lander. Although he had nothing to do with Zhang’s CRISPR work, as Director of the Broad Institute, he has taken a prominent role in promoting Zhang’s case for both prizes and patent. But rather than simply go head-to-head with Doudna and Charpentier, Lander has crafted an ingenious strategy that is as clever as it is dishonest (see Nathaniel Comfort’s fantastic “A Whig History of CRISPR” for more on this). Let’s look at the way Lander’s argument is crafted.

To start, Lander cleaves history into two parts – Before Zhang and After Zhang – defining the crucial event in the history of CRISPR to be the demonstration that CRISPR could be used for gene editing in human cells. This dividing line is made explicit in Figure 2 of his “Heroes” piece, which maps the history of CRISPR with circles representing key discoveries. The map is centered on a single blue dot in Cambridge, marking Zhang as the sole member of the group that carried out the “final step of biological engineering to enable genome editing”, while everyone who preceded him gets labeled as a green natural historian or red biochemist.

[Figure 2 from “The Heroes of CRISPR”: Lander’s world map of key CRISPR discoveries]

(Note also how he distorted the map of the world so that the Broad lies almost perfectly in the center. What happened to Iceland and Greenland? How did Europe get so far south and so close to North America? And what happened to the rest of the world? Where’s Asia, for example? Shouldn’t there be a big blue circle in Seoul?)

While some lawyer might find this argument appealing, it is a scientifically absurd point of view. For the past decade, researchers, including Zhang, have been using proteins – zinc finger nucleases and TALENs – engineered to cut DNA in specific places to carry out genome editing in a variety of different systems. If there was a key step in bringing CRISPR to the gene editing party, it was the demonstration that its components could be used as a programmable nuclease, something that arose from a decade’s worth of investigation into how CRISPR systems work at the molecular level. Once you have that, the application to human cells, while not trivial, is obvious and straightforward.

The best analogy for me is the polymerase chain reaction (PCR), another vital technique in molecular biology that emerged from the convergence of several disparate lines of work over decades, and which gained prominence with the work of Kary Mullis, who demonstrated an efficient method for amplifying DNA sequences in vitro. Arguing that Zhang deserves singular credit for CRISPR gene editing is akin to arguing that whoever was the first to amplify human DNA using PCR should get full credit for its invention. (And I’ll note that the claim that Zhang was unambiguously the first to do this is questionable – see this and this for example).

I want to be clear that in arguing against giving exclusive credit to Zhang, I am not arguing for singular credit to go to any other single group, as I think this does not do justice to the way science works. But if you are going to engage in this kind of silliness, one should at least endeavor to do it honestly. The only reason one would ever argue that CRISPR credit should be awarded to the person who first deployed it in human cells is if you decided in advance that full credit should go to Zhang and you searched post facto for a reason to make this claim.

Even Lander seems to have sensed that he had to do more than just make a tenuous case for Zhang – he had to also tear down the case for Doudna and Charpentier. And this wasn’t going to be easy, since their paper preceded Zhang’s, and they were already receiving widespread credit in the biomedical community for being its inventors. Here is where his evil genius kicks in. Instead of taking Doudna and Charpentier on directly, he did something much more clever: he wrote a piece celebrating the people whose work had preceded and paralleled theirs.

This was an evil genius move for several reasons:

First, the people whose work Lander writes about really are deserving of credit for pioneering the study of CRISPR, and they really have been unfairly written out of the history in most stories in the popular and even scientific press. This established Lander as the good guy, standing up to defend the forgotten scientists, toiling in off-the-beaten-path places. And even though, in my experience, Doudna and Charpentier go out of their way to highlight this early work in their talks, Lander’s gambit makes them look complicit in the exclusion.

Second, by going into depth about the contributions of early CRISPR pioneers, Lander is able to almost literally write Doudna and Charpentier (and, for that matter, the groups of genome-editing pioneer George Church and Korean scientist Jin-Soo Kim, whose CRISPR work has also been largely ignored) out of this history. They are mentioned, of course, but everything about the way they are mentioned seems designed to minimize their contributions. They are given abbreviated biographies compared to the other scientists he discusses. And instead of highlighting the important advances in the Jinek paper, which were instrumental to Zhang’s work, Lander focuses instead on the work of Giedrius Gasiunas working in the lab of Virginijus Siksnys in Lithuania. Lander relates in detail how they had similar findings to Jinek and submitted their paper first, but struggled to get it published, suggesting later in the essay that it was Doudna and Charpentier’s savvy about the journal system, and not their science, that earned them credit for CRISPR.

The example of Gasiunas and Siksnys is a good one for showing how unfair the system we have for doling out credit, accolades and intellectual property in science can be. While Gasiunas did not combine the two RNA components of the CRISPR-Cas9 system into a single “guide RNA” as was done by Jinek – a trick used in most CRISPR applications – they demonstrated the ability to reprogram CRISPR-Cas9, and were clearly on the path to gene editing. And neither Jinek’s nor Gasiunas’s work would have been possible without the whole body of CRISPR work that preceded them.

But the point of Lander’s essay is not to elevate Siksnys; it is, as is made clear by the single blue circle on the map, to enshrine Zhang. His history of CRISPR, while entertaining and informative, is a cynical ploy, meant to establish Lander’s bonafides as a defender of the little person, so that his duplicity in throwing Siksnys under the bus when he didn’t need him anymore wouldn’t be so transparent.

What is particularly galling about this whole thing, is that Lander has a long history of attempting to rewrite scientific history so that credit goes not to the forgotten little people, but to him and those in his inner circle. The most prominent example of this is the pitched battle for credit for sequencing the human genome, in which Lander time and time again tried to rewrite history to paint the public genome project, and his role in it, in the most favorable light. 

Indeed, far from being regarded as a defender of lesser-known scientists, Lander is widely regarded as someone who plays loose with scientific history in the name of promoting himself and those around him. And “Heroes of CRISPR” is the apotheosis of this endeavor. The piece is an elaborate lie that organizes and twists history with no other purpose than to achieve Lander’s goals – to win Zhang a Nobel Prize and the Broad an insanely lucrative patent. It is, in its crucial moments, so disconnected from reality that it is hard to fathom how someone so brilliant could have written it.

It’s all too easy to brush this kind of thing aside. After all Lander is hardly the first scientist to twist the truth in the name of glory and riches. But what makes this such a tragedy for me is that, in so many ways, Lander represents the best of science. He is a mathematician turned biologist who has turned his attention to some of the most pressing problems in modern biomedicine. He has published smart and important things. As a mathematician turned biologist myself, it’s hard for me not to be more than a little proud that a math whiz has become the most powerful figure in modern biology. And while I don’t like his scientific style of throwing millions of dollars at every problem, he has built an impressive empire and empowered the careers of many smart and talented people whose work I greatly value and respect.

But science has a simple prime directive: to tell the truth. Nobody, no matter how powerful and brilliant they are is above it. And when the most powerful scientist on Earth treats the truth with such disdain, they become the greatest scientific villain of them all.


Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Leslie Vosshall and I have written the following white paper as a prelude to the upcoming ASAP Bio meeting in February aimed at promoting pre-print use in biomedicine. We would greatly value any comments, questions or concerns you have about the piece or what we are proposing.


[PDF Version]

Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Michael Eisen1,2 and Leslie B. Vosshall3,4

1 Department of Molecular and Cell Biology and 2 Howard Hughes Medical Institute, University of California, Berkeley, CA. 3 Laboratory of Neurogenetics and Behavior and 4 Howard Hughes Medical Institute, The Rockefeller University, New York, NY.

mbeisen@berkeley.edu; leslie@rockefeller.edu

Scientific papers are the primary tangible and lasting output of a scientist. They are how we communicate our discoveries, and how we are evaluated for hiring, promotion, and prizes. The current system by which scientific papers are published predates the internet by several hundred years, and has changed little over centuries.

We believe that this system no longer serves the needs of scientists.

  1. It is slow. Manuscripts spend an average of nine months in peer review prior to publication, and reviewers increasingly demand more data and more experiments to endorse a paper for publication. These delays massively slow the dissemination of scientific knowledge.
  2. It is expensive. We spend $10 billion a year on science and medical journal publishing, over $6,000 per article, and increasingly these costs are coming directly from research grants.
  3. It is arbitrary. The current system of peer review is flawed. Excellent papers are rejected, and flawed papers are accepted. Despite this, journal name continues to be used as a proxy for the quality of the paper.
  4. It is inaccessible. Even with the significant efforts of the open-access publishing movement, the vast majority of scientific literature is not accessible without a subscription.

In view of these problems, we strongly support the goal of ASAP Bio to accelerate the online availability of biomedical research manuscripts. If all biomedical researchers posted copies of their papers when they were ready to share them, these four major pathologies in science publishing would be cured.

The goal of ASAP Bio to get funders and other stakeholders to endorse the adoption of pre-prints is laudable. But without fundamental reform in the way that peer review is carried out, the push for pre-prints will not succeed. An important additional goal for the meeting must therefore be for funders to endorse alternative mechanisms for carrying out peer review. Such mechanisms would operate outside of the traditional journal-based system and focus on assessing the quality, audience, and impact of work published exclusively as “pre-prints”. We anticipate that, if structured properly, a new system of pre-print publishing coupled with post-publication peer review will replace traditional scientific publishing much as online user-driven reviews (Amazon, Yelp, Trip Advisor, etc.) have replaced publisher-driven metrics to assess quality (Consumer Reports, Zagat, Fodor’s, etc.).

In this white paper we explain why the adoption of pre-prints and peer review reform are inseparable, outline possible alternative peer review systems, and suggest concrete steps that research funders can take to leverage changes in peer review to successfully promote the adoption of pre-prints.

Pre-prints and journal-based peer review cannot coexist

The essay by Ron Vale that led to the ASAP Bio meeting is premised on the idea that we should use pre-prints to augment the existing, journal-based system for peer review. In Vale’s model, biomedical researchers would post papers on pre-print servers and then submit them to traditional journals, which would review them as they do today, and ultimately publish those works they deem suitable for their journal.

There are many reasons why such a system would be undesirable – it would leave intact a journal system that is inefficient, ineffective, inaccessible, and expensive. But more proximally, there is simply no way for such a symbiosis between pre-prints and the existing journal system to work.

Pre-print servers for biomedicine, such as bioRxiv, run by the well-respected Cold Spring Harbor Laboratory, now offer biomedical researchers the option to publish their papers immediately, at minimal cost. Yet biologists have been reluctant to make use of this opportunity because they have no incentive to do so, and in many cases have incentives not to. If we as a biomedical community want to promote the universal adoption of pre-prints, we have to do more than pay lip service to the potential of pre-prints; we have to change the incentives that drive publishing decisions. And this means changing peer review.

Why are pre-prints and peer review linked? Scientists publish for two reasons: to communicate their work to their colleagues, and to get credit for it in hiring, promotion and funding. If publishing behavior were primarily driven by a desire to communicate, biomedical scientists would leap at the opportunity to post pre-prints, which make their work available to the widest possible audience at the earliest possible time at virtually no cost. That they do not underscores the reality that, for most biomedical researchers, decisions about how they publish are driven almost entirely by the impact of these decisions on their careers.

Pre-prints will not be embraced by biomedical scientists until we stop treating them as “pre” anything, which suggests that a better “real” version is yet to come. Instead, pre-prints need to be accepted as formally published works. This can only happen if we first create and embrace systems to evaluate the quality and impact of, and appropriate audience for, these already published works.

But even if we are wrong, and pre-prints become the norm anyway, we would still need to create an alternative to journal-based peer review. If all, or even most, papers are available for free online, it is all but certain that libraries would begin to cut subscriptions, and traditional journal publishing – which still relies almost exclusively on subscription revenue – would no longer be economically viable.

Thus a belief in the importance of pre-print use in biomedicine requires the creation of an alternative system for assessing papers. We therefore suggest that the most important act for funders, universities, and other stakeholders is not just to endorse the use of pre-prints in biomedicine, but to endorse the development and use of a viable alternative to journal titles in the assessment of the quality, impact, and audience of works published exclusively as “pre-prints”.

Peer review for the Internet Age

The current journal-based peer review system attempts to assure the quality of published works; help readers find articles of import and interest to them; and assign value to individual works and the researchers who created them. Post-publication peer review of works initially published as pre-prints can not only replicate these services, but provide them faster, cheaper, and more effectively.

The primary justification for carrying out peer review prior to publication is that it prevents flawed works from seeing the light of day. Inviting a panel of two or three experts to assess the methods, reasoning, and presentation of the science in a paper undoubtedly leads to many flaws being identified and corrected.

But any practicing scientist can easily point to deeply flawed papers that have made it through peer review in their field, even in supposedly high-profile journals. Yet even when flaws are identified, it rarely matters. In a world where journal title is the accepted currency of quality, a deeply flawed Science or Nature paper is still a Science or Nature paper.

Prepublication review was developed and optimized for printed journals, where space had to be rationed because printing and shipping a journal was expensive. But today it is absurd to rely solely on the opinions of two or three reviewers – who may or may not be the best qualified to assess a paper, who often did not want to read the paper in the first place, who are acting under intense time pressure, and who are casting judgment at a fixed point in time – to be the sole arbiters of the validity and value of a work. Post-publication peer review of pre-prints is scientific peer review optimized for the Internet Age.

Beginning to experiment with systems for post-publication review now will hasten its development and acceptance, and is the quickest path to the universal posting of pre-prints. In the spirit of experimentation, we propose a possible system below.

A system for post-publication peer review

First, authors would publish un-reviewed papers on pre-print servers that screen them to remove spam and papers that fail to meet technical and ethical specifications, before making them freely available online. At this point peer review begins, proceeding along two parallel tracks.

Track 1: Organized review, in which groups representing fields or areas of interest – such as scientific societies or self-assembling sets of researchers – arrange for the review of papers they believe to be relevant to researchers in their field. They could either directly solicit reviewers or invite members of their group to submit reviews, and would publish the results of these reviews in a standardized format. These groups would be evaluated by a coalition of funding agencies, libraries, universities, and other parties according to a set of commonly agreed upon standards, akin to the screening that is done for traditional journals at PubMed.

Track 2: Individually submitted reviews from anyone who has read the paper. These reviews would use the same format as organized reviews and would, like organized reviews, become part of the permanent record of the paper. Ideally, we want everyone who reads a paper carefully to offer their view of its validity, audience, and impact. To ensure that the system is not corrupted, individually submitted reviews would be screened for appropriateness, conflicts of interest, and other problems, and there would be mechanisms to adjudicate complaints about submitted reviews.

Authors would have the ability at any time to respond to reviews and to submit revised versions of their manuscript.
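To make the shape of such a system concrete, here is a minimal sketch of what a standardized review record might look like. This is purely illustrative – the proposal above specifies no data format, and every field and name below is a hypothetical assumption, not part of any existing pre-print server’s system.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Review:
    """Hypothetical standardized review record for the two-track system
    sketched above. All field names are illustrative assumptions."""
    preprint_doi: str                 # the published pre-print under review
    track: int                        # 1 = organized group review, 2 = individual review
    reviewer: Optional[str]           # None if the reviewer is anonymous
    vouched_by: Optional[str] = None  # Track 1 group vouching for an anonymous reviewer
    submitted: date = field(default_factory=date.today)
    validity: str = ""                # assessment of the soundness of the work
    audience: str = ""                # who should read the paper
    impact: str = ""                  # anticipated importance of the work

    def is_admissible(self) -> bool:
        # Per the anonymity rule proposed below, an anonymous review is
        # only admissible if a sanctioned Track 1 group vouches for it.
        return self.reviewer is not None or self.vouched_by is not None
```

The point of a shared format is that reviews from both tracks differ only in provenance, so they can accumulate as a single permanent record attached to the paper and be revisited as authors post revised versions.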

Such a system has many immediate advantages over our current system of pre-publication peer review. The amount of scrutiny a paper receives would scale with the level of interest in it: if a paper is read by thousands of people, many more than the three reviewers chosen by a journal are in a position to weigh in on its validity, audience, and importance. And instead of papers being evaluated at a single fixed point in time, the process of peer review would continue for the useful lifespan of the paper.

What about anonymity for reviewers? We believe that peer review works best when it is completely open and reviewers are identified. This both provides a disincentive to various forms of abuse and allows readers to put a review in perspective. But we also recognize that there are many scientists who would not feel comfortable expressing their honest opinions without the protection of anonymity. We therefore propose that reviews be allowed to remain anonymous provided that one of the groups defined in Track 1 above vouches for the reviewer’s lack of conflicts and appropriate expertise. This strikes the right balance between providing anonymity to reviewers and protecting authors from anonymous attacks.

What about the concern that flawed papers will be published, or will be misused and misinterpreted while they are being reviewed? We do not consider this a serious problem. The people in the best position to make use of immediate access to published papers – practicing scientists in the field of the paper – are also in the best position to judge the validity of the work themselves and to share their impressions with others. Readers who want an external assessment of the quality of a work can wait until it comes in, and are thus no worse off than they are in the current system. If implemented properly, such a system would deliver the best of both worlds – rapid access for those who want and need it, and quality control over time for a wider audience.

Assessing quality and audience without journal names

The primary reason the traditional journal-based peer review system persists despite its anachronistic nature is that the title of the journal in which a scientific paper appears reflects the reviewers’ assessment of the appropriate audience for the paper and their valuation of its contributions to science. There is obviously value in having people who read papers judge their potential audience and impact, and there are many circumstances where having an external assessment of a scientist’s work can be of use. But there is no reason we have to use journal titles to convey this information.

It would be relatively simple to give reviewers of published pre-prints a set of tools to specify the most appropriate audience for a paper, to anticipate that audience’s expected level of interest in the work, and to gauge its impact. We can also take advantage of various automated methods to suggest papers to readers, and allow those readers to rate the quality of a paper by a set of useful metrics. Systems that use the Internet to harness collective expertise have fundamentally changed nearly every other area of human society – it’s time for them to do the same for science.

Actions

A commitment to promoting pre-prints in biomedicine requires a commitment to promoting a new system for reviewing works published initially as un-reviewed pre-prints. Such systems are practical and a dramatic improvement over the current system. We call on funders and other stakeholders to endorse the universal posting of pre-prints and post-publication peer review as inseparable steps that would dramatically improve the way scientists communicate their ideas and discoveries. We recognize that such a system requires standards, and propose that a major outcome of the ASAP Bio meeting be the creation of an “International Peer Review Standards Organization” to work with funders and other stakeholders to establish these criteria and work through the important open issues, and then serve as a sanctioning body for groups of reviewers who wish to participate in this system. We are prepared to take the lead in assembling an international group of leading scientists to launch such an organization.


The current system of scholarly publishing is the real infringement of academic freedom

Rick Anderson has a piece on “Open Access and Academic Freedom” at Inside Higher Ed arguing that the open access policies being put into place by many research funders and some universities – which require authors to make their work available under open licenses (most commonly Creative Commons’ CC-BY) – are a violation of academic freedom and should be viewed with skepticism.

Here is the basic crux of his argument:

The meaningful right that the law provides the copyright holder is the exclusive (though limited) right to say how, whether, and by whom these things may be done with his work by others.

So the question is not whether I can, for example, republish or sell copies of my work under CC BY — of course I can. The question is whether I have any say in whether someone else republishes or sells copies of my work — and under CC BY, I don’t.

This is where it becomes clear that requiring authors to adopt CC BY has a bearing on academic freedom, if we assume that academic freedom includes the right to have some say as to how, where, whether, and by whom one’s work is published. This right is precisely what is lost under CC BY. To respond to the question “should authors be compelled to choose CC BY?” with the answer “authors have nothing to fear from CC BY” or “authors benefit from CC BY” is to avoid answering it. The question is not about whether CC BY does good things; the question is whether authors ought to have the right to choose something other than CC BY.

Although for reasons I outline below I disagree with Anderson’s conclusion that concerns about academic freedom should trump the push for greater access, the point bears some consideration, especially because he is far from the only one raising it.

But what actually is this “academic freedom” we are talking about? I will admit that, even though I am a long-time academic and have a general sense of what academic freedom is, when I first started hearing this complaint about open access mandates I didn’t really understand what the term actually meant. And part of the problem is that there isn’t really a single, concrete thing called “academic freedom”.

The Wikipedia definition pretty much captures the concept:

Academic freedom is the belief that the freedom of inquiry by faculty members is essential to the mission of the academy as well as the principles of academia, and that scholars should have freedom to teach or communicate ideas or facts (including those that are inconvenient to external political groups or to authorities) without being targeted for repression, job loss, or imprisonment.

But this broad concept lacks a unified concrete reality. Anderson cites as his evidence that CC-BY mandates violate academic freedom the following passage from the widely-cited “1940 Statement of Principles on Academic Freedom and Tenure” from the American Association of University Professors:

Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties; but research for pecuniary return should be based upon an understanding with the authorities of the institution.

Note that while this document provides a definition of academic freedom that has been fairly widely accepted, it is not in any way legally binding nor, more importantly, does it reflect a universal consensus about what academic freedom is. Nonetheless, it’s hard not to get behind the general principle that academics should have the “freedom to publish”. However, it is by no means clear what this actually entails.

Virtually everything I have ever read about academic freedom starts with the importance of giving academics the freedom to express the results of their scholarship irrespective of their specific conclusions. We grant them tenure in large part to protect this freedom, and I know of no academic who would sanction their employer telling them that they cannot publish something they wish to publish.

But a requirement that academics employ a CC-BY license does not restrict the content of their publications; rather, it limits the venues available for publication (and it’s important for open access supporters to acknowledge this – there exist journals today that would not accept papers that were available online elsewhere, with or without a CC-BY license). But does this constitute a limit on academic freedom?

Clearly some restrictions on venue would have the effect of restricting authors’ ability to communicate their work. If a university told its academics that they could only publish in venues that appeared exclusively in print, it would unambiguously limit their ability to communicate, and we would not sanction it. But what if a university required that all works be available online to facilitate assessment and access for students? This would also impose some limits on where academics could publish, but, in the current online-heavy universe, it would not meaningfully limit their ability to communicate.

So it seems to me that we have to make a choice. Approach 1 would be to evaluate such conditions on a case-by-case basis to determine whether the limitations placed on authors actually limit academic freedom. Approach 2 would be to enshrine the principle that any conditions placed by universities and funders on how or where academics publish are unacceptable.

If we take the case-by-case approach, we have to ask if the specific requirement that authors make their work available under a CC-BY license constitutes an infringement of their freedom to communicate their work. It certainly imposes some limits on where they can publish, but, given the wide diversity of journals that don’t prohibit pre-prints, it’s hard to describe this as a significant infringement.

The second issue raised by Anderson is that requiring CC-BY – and thereby granting others the right to reuse and republish a work without the author’s permission – deprives authors of the right to control how their work is used. I am somewhat sympathetic to this point of view. But in reality authors have already lost an element of this control, as the fair use provisions of copyright law grant others the right to use published works in certain ways without the author’s permission (to write reviews of the work, for example), so it’s hard to see this as a major intrusion.

Which brings me to one of my main points. Anderson argues that the principle of “freedom to publish” should be sacrosanct. But it clearly is not. While scholars may have the theoretical ability to publish their work wherever they want, in reality the hiring, promotion, tenure and funding policies of universities and funding agencies impose a major constraint on how and where academics publish. Scientists are expected to publish in certain journals; other academics are expected to publish books with certain publishers. Large parts of the academic enterprise are currently premised on restricting the freedom of academics to publish where and how they want. In comparison to these restrictions – which manifest themselves on a daily basis – the added imposition of requiring a CC-BY license seems insignificant.

Furthermore, one has to view the push for CC-BY licenses in a broader context, in which it is part of an effort to alter the ecology of scholarly publishing so that authors are not judged by their publication in a narrow group of journals or with a narrow group of university presses. Thus I would argue that, viewed practically, the shift to CC-BY would actually promote academic freedom and the freedom of authors to publish how and where they want.

One could reasonably respond that it’s not my place to decide on behalf of other scholars what does and does not constitute an infringement of their academic freedom. Which brings us to Approach 2: enshrining the principle that any conditions placed by universities and funders on how or where academics publish are unacceptable. If you hold this position then you will clearly view a mandatory CC-BY policy as an unacceptable infringement of academic freedom. But you would then also have to see the hiring, promotion, tenure and funding policies that push authors to certain venues as an even bigger betrayal of academic freedom. I am happy to completely embrace this point of view.

In the end, I didn’t find Anderson’s article as repugnant as many of my open access friends did. Academic freedom is important, and it should be defended. And the points he raised are interesting and important to consider. But I take exception to Anderson’s focus on the supposed negative effects of the CC-BY license on academic freedom when, if we are serious about defending academic freedom, we should instead be looking at how the entire system of scholarly publishing limits it. Indeed, Anderson’s article has inspired me to make academic freedom a major linchpin of my future arguments in favor of fundamental reform of scholarly publishing.



Vegan Thanksgiving Picnic Pie Recipe

I posted some pictures of this Thanksgiving-themed picnic pie (completely vegan) on Twitter and Facebook.



A bunch of people asked me for my recipe. Unfortunately, it was almost completely improvised, so I don’t have a recipe. But here is roughly what I did.

First of all, a few weeks ago I had no idea what a picnic pie was. But then I was randomly channel surfing and came upon a show called “The Great British Bake Off”, in which three people were competing in various baking challenges – the final one being a “Picnic Basket Pie”, which is basically a bread pan lined with pastry dough, filled with layers of various things (meat, cheese, veggies, etc.), baked, and then sliced into slabs that show off the layers.

I liked the concept, so as I started to think about what to cook for Thanksgiving (as a vegan going to non-vegan houses I’m always forced to cook my own meal) it occurred to me to make a Thanksgiving-themed picnic pie with layers like mashed potatoes, stuffing, cranberry sauce, etc.

I started with one of the recipes from the show, from the one contestant who made a vegetarian pie. The only thing I used was the recipe for the dough, which is basically just normal pastry dough with a bit of baking powder added (not sure why).

Dough

600g (~4 cups) of all-purpose flour
1 1/2 cups (3 sticks) of unsalted margarine or shortening
1/2 tsp salt
1/2 tsp baking powder

Cut the margarine into the flour with your fingers, a fork, or a pastry mixer. Add ~150ml of water, form into a ball, and place in the fridge for at least an hour. When ready to use, take the dough out of the fridge and let it sit for 15m to warm up.

Roll out ~2/3 of the dough into a shape that will fit into a high-sided bread pan (mine is around 8″ x 4″ x 4″). Cut a piece of parchment paper about 6″ wide and long enough to go under the dough in the pan with the ends sticking out as handles (you’re going to use these to lift the pie out of the pan). Then carefully fit the dough into the pan. Make sure it is intact with no holes.

Fillings

The key thing for each of these layers is that they be relatively dry so that they won’t leak out moisture and ruin the structural integrity of the crust. I mostly made these up on the fly, but here is roughly what I did.

Layers from bottom to top:

Polenta: I started by spreading a layer of dried, uncooked polenta on the bottom. This was to represent traditional Thanksgiving corn, but also to absorb excess moisture. Although I was careful not to have wet layers, I figured there would be enough water to cook the polenta as I baked the pie. But this turned out not to be correct. So if I do this again, I’ll cook the polenta first.

Greens: Sliced a leek and sautéed it in olive oil with ~1 Tbs crushed roasted garlic. When done, roughly chopped two bunches of Swiss chard and added them to the pan, cooking until wilted. I then pressed as much of the water as I could out of the greens in a strainer. Added on top of polenta.


Sweet Potatoes: Sliced a large Beauregard yam into ~3/4″ slices and then quartered them. Put them into a baking dish with a layer of olive oil. Sprinkled with brown sugar and then baked ~20m at 400F until soft. Added on top of chard, trying hard to pack densely.


Stuffing: Sliced an onion and a stalk of celery. Cooked in olive oil until softened. Added about 2 or 3 cups of sliced brown mushrooms and cooked until soft. I then added bread crumbs until fairly dry. Added salt to taste. Added on top of sweet potatoes.


Mashed potatoes: Peeled and diced ~6 russet potatoes. Boiled until soft. Mashed with potato masher. Added margarine and salt to taste. Layered on top of stuffing.


Cranberry sauce: Started with the directions on the back of the bag. Boiled two cups of sugar in two cups of water. Added two 12oz. bags of cranberries. Simmered on medium for at least an hour (probably more) until the berries were soft and starting to pop. Crushed them with a potato masher, then strained through a fine strainer. Set the liquid that flowed through aside (it makes a good cranberry sauce for kids) and added the now relatively dry, somewhat sweetened cranberries as the top layer.


Top

Made a lattice top by cutting four long strips ~1″ wide and then weaving shorter pieces along the short axis. Pinched the edges together.


Baking

Baked 50 minutes at 400F. Let cool for a while. I served it cold, but I think it was better when I reheated it, so if you make this, try serving it 30m or so after it comes out of the oven.

Impression

Overall I thought this came out really well. It held together perfectly – didn’t get moisture screwing up the dough. And the flavors went well together. I’m definitely going to make things like this again.



You Have Died Of Peer Review


I’ve been feeling the need for some new publishing related t-shirts, and somehow this idea popped into my head.


You Have Died of Peer Review

For those of you who don’t know, it’s based on the popular ’80s computer game Oregon Trail, in which games would often end with the alert that “You Have Died Of Dysentery”.

I made it into t-shirts, as one does, which you can get here.



The New York Times’ serial open access slimer Gina Kolata has a clear conflict of interest

Yesterday Gina Kolata published a story in the New York Times about the fact that many clinical studies are never published. This is a serious problem, and it’s a good thing that it is being brought to light.

But her article contains a weird section in which a researcher at the University of Florida explains why she hadn’t published the results of one of her studies:

Rhonda Cooper-DeHoff, for example, an assistant professor of pharmacotherapy and translational research at the University of Florida, tried to publish the results of her study, which she completed in 2009. She wrote a paper and sent it to three journals, all of which summarily rejected it, she said.

The study, involving just two dozen people, asked if various high blood pressure drugs worsened sugar metabolism in people at high risk of diabetes.

“It was a small study and our hypothesis was not proven,” Dr. Cooper-DeHoff said. “That’s like three strikes against me for publication.” Her only option, she reasoned, would be to turn to an open-access journal that charges authors to publish. “They are superexpensive and accept everything,” she said. Last year she decided to post her results on clinicaltrials.gov.

Why is that sentence in there? First, it’s completely false. There are superexpensive open access journals, and there are open access journals that accept everything. But I don’t know of any open access journal that does both, and neither statement applies to the journals (from PLOS, BMC, Frontiers, eLife and others) that publish most open access papers.

Is the point of that sentence supposed to be that there are journals that will publish anything, including a massively underpowered clinical study, but that they’re too expensive to publish in? That would fit the narrative Kolata is trying to develop – that people don’t publish negative results because it’s too hard to – but this too is completely false. Compared to the cost of running a clinical trial, even a small one, the article processing fees of most open access journals are modest, and most offer waivers to those who cannot pay.

It may seem like a minor thing, but these kinds of things matter. There are a lot of misconceptions about open access publishing among scientists and the public, and when the paper of record repeats these misconceptions it compounds the problem.

So why does something like this get into the paper? I assume the quoted researcher said that, or something like it. But newspapers aren’t supposed to just let the people they quote say things that are patently false without pointing that out.


Kolata has been covering science for all of the 15 years that open access publishing has been around, and used to work for Science magazine. So it’s simply not credible to believe that she thinks this assertion about open access is true. Instead, it sure looks like she let a source’s false and misleading statement about open access stand without countering it because it fit her narrative of people not being able to publish their findings.


So, after reading this article I made a few tweets about it, and would have let it go at that. But then I remembered something. A few years ago, Kolata published a story about “predatory open access publishers”, in which she characterized such publishers as the “dark side of open access”.

I wrote about this story at the time, and won’t repeat myself here, but suffice it to say that her article went out of its way to condemn all open access publishing because of some bad actors working at its fringes, while ignoring the far more significant sins of subscription publishing.

Sensing a bit of a pattern, I searched to see if she’d ever written other things about open access, and came upon a 2010 article on Amy Bishop, the scientist who shot and killed three of her colleagues at the University of Alabama in Huntsville, which contains this bizarre paragraph on open access:

One 2009 paper, was published in The International Journal of General Medicine. Its publisher, Dovepress, says it specializes in “open access peer-reviewed journals.” On its Web site, the company says, “Dove will publish your paper if it is deemed of interest to someone, therefore your chance of having your paper accepted is very high (it would be a very unusual paper that wasn’t of interest to someone).”

What is the point of bringing open access into a story about whether a murderer did good science? Did Kolata go through Bishop’s published papers, evaluate each of the journals in which they appeared, and offer up some kind of synthesis? No. She cherry-picked a single article published in an open access journal and, instead of criticizing the science, made it about the journal and its method of publication. The paragraph seems to be there just to knock open access publishing and to associate publishing in open access journals with being a murderer!

If it was just once, or maybe even twice, I’d just chalk it up to bad reporting or writing. But three separate gratuitous attacks on open access seems like more than a coincidence for someone who has had such a long and distinguished career around science.

It wouldn’t be the first time that members of the science establishment (and the science section of the New York Times is amongst the biggest bulwarks of the science establishment) have taken pot shots at open access and open access journals. But I was curious why Kolata seems to make such a habit of it, and so I went back to her Wikipedia page to find out when she had worked at Science, to see if maybe she had been poisoned by its long history of anti-open access rhetoric. Turns out it was 1973-1987, before open access came along.

But I noticed the following line in her biography:

Her husband, William G. Kolata, has taught mathematics and served as the technical director of the non-profit Society for Industrial and Applied Mathematics in Philadelphia, a nonprofit professional society for mathematicians.

SIAM, it so happens, is a fairly big publisher, with, according to its IRS Form 990, annual subscription revenues of around $6,000,000 (and another $1,000,000 in membership dues, which, for many societies, are just another way of subscribing to their journals). Now as publishers go, SIAM hasn’t been particularly anti-open access, and its journals offer so-called “hybrid” open access, in which they’ll let you pay an extra fee to make an article freely available (enabling the publisher to double dip by collecting both open access fees and subscriptions, since only a small number of authors choose the open access option).

But given that the ~$125,000 per year that Kolata’s husband makes from SIAM is threatened by changes to scholarly publishing, including open access, it would seem that Kolata has at least a mild conflict of interest here in trying to prop up the subscription publishing industry and in denigrating new models and new players in the industry.

At the very least, the fact that, in addition to her own lengthy career in science publishing and science journalism, Kolata’s husband has been involved in running a scientific society that is primarily in the publishing business makes it seem highly unlikely that her digs at open access are born of ignorance. And whether her motivation is to prop up the dying industry in which her husband happens to be employed, or she’s just on some kind of weird petty vendetta, we should watch carefully when Kolata writes about open access in the future and not let her get away with this kind of sliming any more.


What Geoffrey Marcy did was abominable; What Berkeley didn’t do was worse

I am so disappointed and revolted with my university.

On Friday, BuzzFeed’s Azeen Ghorayshi posted a story about Geoffrey Marcy, a high-profile professor in UC Berkeley’s astronomy department. It reported on a complaint filed by four women with Berkeley’s Office for the Prevention of Harassment and Discrimination (OPHD) alleging that Marcy “repeatedly engaged in inappropriate physical behavior with students, including unwanted massages, kisses, and groping.”

Unusually for this type of investigation – the results of which are typically kept secret – Ghorayshi’s reporting revealed that OPHD found Marcy guilty of these charges, leading to his issuing a public apology in which, in all-too-typical PR-driven apology speak, he acknowledged doing things that “unintentionally” were “a source of distress for any of my women colleagues”.

There’s not much to say about his actions except that they are despicable, predatory, destructive and all too typical. It defies even the most extreme credulity to believe that he thought what he was doing was appropriate.

But, unlike so many other cases of alleged harassment that go unreported, or end in a haze of accusations and denials, the system worked in this case. An investigation was carried out, the charges were substantiated, the bravery of the women who came forward was vindicated, and Marcy was removed from the position of authority he had been abusing.

WAIT WHAT? He got a firm talking to and promised never to do it again????? THAT’S IT???

It is simply incomprehensible that Marcy was not sanctioned in any way and that, were it not for Ghorayshi’s work, we wouldn’t even know anything about this. How on Earth can this be true? Does the university not realize it is giving other people in positions of power a license to engage in harassment and abusive behavior? Do they think that the threat of having to say “oops, I won’t do that again” is going to stop anyone? Do they think anyone is going to file complaints about sexual harassment or abuse and go through what everyone described as an awful, awful process, so that their abuser will get a faint slap on the wrist? Do they care at all?

Sadly, I think the answer to the last question is “No”.

As I was absorbing this, I was reflecting on having just completed the state-mandated two-hour online course on sexual harassment. First of all, Marcy is required to have taken this course. If he paid any attention (and didn’t have someone else take it for him), he has no excuse for not being aware of how inappropriate and awful his actions were.

But I also realized something more fundamental – at no point during all the scenarios with goofily named participants, flowcharts of reporting procedures and discussions of legal requirements was there anything about sanctions.

When you study to get a driver’s license, you learn not just about the laws of the road, but about what happens if you violate them. And while most of us want to drive safely, it is the threat of sanctions that prevents us from speeding, running red lights and such. So why is there no discussion of sanctions regarding actions that are not just violations of university policy, but are, in many cases, crimes?

I am all in favor of education about sexual harassment. But isn’t the fact that this kind of shit keeps happening over and over evidence that education is not enough? There HAVE to be consequences – serious consequences – for abusing positions of power. Do we honestly think that someone who likes to stick his hand up the shirts of his students and give them back rubs is going to be dissuaded from doing so because he (yes, it’s pretty much always he) is going to go back over the “Determining whether conduct is welcome” checklist in his mind? Do we think someone who wants to inappropriately touch students at dinner is going to stop because of some scenario he clicked through?

I’m not trying to argue against this kind of education. It is vital. But it is mostly aimed at helping people recognize harassment as a third party. It seems aimed more at supervisors, to teach them how to respond to harassment in their midst, and it seems more interested in parsing marginal cases than in saying “DON’T TOUCH YOUR STUDENTS” and “DON’T ABUSE YOUR POSITION OF POWER”.

Here is a perfect example:

Dr. Risktaker

I’m sure male faculty all imagine themselves as the debonair professor who poor female students can’t help having the hots for. But it’s bullshit. The case we have to worry about is exactly the opposite – the one we know happens all the time – where “Randy Risktaker” has the hots for “Suzie Scholar” and uses his position of power over her to impose himself on her.

[And can we talk about names here for a second? Randy Risktaker and Suzie Scholar seem straight out of porn. Is that really the message we want to be sending here? Don’t you think the Geoffrey Marcys of the world read that and go — ooh, I AM a randy risktaker…]

And how does the university respond to this scenario?

Dumb Answers

First, they want to remind us that students CAN harass professors, creating a bizarre false equivalence and ignoring the obvious difference in position and power. Second, and far more importantly, they don’t say what they should say which is HEY DR. RISKTAKER, KEEP IT IN YOUR PANTS AND GO BACK TO TEACHING.

Instead they all but give him permission to pursue the relationship, and give him a step-by-step guide for how to do it: call the sexual harassment officer to discuss the matter (right, like anyone’s going to do that) and then tell her you can no longer be her dissertation advisor because you’d rather sleep with her than advise her academically. I’m sure Geoff Marcy Randy Risktaker is grateful for the guidance.

This isn’t education. This is repulsive.

I get it: university policy does not preclude relationships between faculty and students; it just defines the conditions under which they can happen. But the purpose of training should be to PREVENT HARASSMENT, not to tell people how to comply with university policies.

Which gets to the heart of the matter. The university does not  care about preventing harassment – it cares about covering its ass when harassment occurs. This training – the only real communication faculty get about the matter – is ALL about that. And this has to change. NOW.

All over Berkeley campus there are banners with various people – students, teachers, administrators – saying “It’s on me” to prevent sexual violence on campus and the rape culture that plagues universities everywhere.

Well, the behavior Marcy engaged in is sexual violence. And, as a senior member of the university’s faculty, it’s on me to demand that the university fix this problem immediately.

I am calling on Chancellor Dirks to completely revamp the training faculty and other supervisors receive on sexual harassment to focus primarily on the rampant unacceptable behavior that happens all the time, and to make it unambiguously clear that if faculty engage in this behavior they will receive serious sanctions, including the loss of their position. This is what we owe to the brave women who confronted Marcy, and to all the people we can protect from abuse if we act now.
