Elsevier is tricking authors into surrendering their rights

A recent post on the GOAL mailing list by Heather Morrison alerted me to the following sneaky aspect of Elsevier’s “open access” publishing practices.

To put it simply, Elsevier have distorted the widely recognized concept of open access – in which authors retain copyright in their work, give others permission to reuse it, and use publishers as a vehicle to distribute their work – into “Elsevier access”, in which Elsevier, and not authors, retain all rights not granted by the license. As a result, despite Elsevier highlighting the “fact” that authors retain copyright, authors have ceded all decisions about how their work is used and if and when to pursue legal action for misuse of their work; crucially, if they use a non-commercial license, they are making Elsevier the sole beneficiary of commercial reuse of their “open access” content.

For some historical context, when PLOS and BioMed Central launched open access journals over a decade ago, they adopted the use of Creative Commons licenses in which authors retain copyright in their work, but grant in advance the right for others to republish and use that work subject to restrictions that differ according to the license used. PLOS and BMC and most true open access publishers use the CC-BY license, whose only condition is that any reuse must be accompanied by proper attribution.

When PLOS, BioMed Central and other true open access publishers began to enjoy financial success, established subscription publishers like Elsevier began to see a business opportunity in open access publishing, and began offering a variety of “open access” options, where authors pay an article-processing charge in order to make their work available under one of several licenses. The license choices at Elsevier include CC-BY, but also CC-BY-NC (which does not allow commercial reuse) and a bespoke Elsevier license that is even more limiting (nobody else can reuse or redistribute these works).

At PLOS, authors do not need to transfer any rights to the publisher, since the agreement of authors to license their work under CC-BY grants PLOS (and anyone else) all the rights they need to publish the work. However, this is not true with more restrictive licenses like CC-BY-NC, which, by itself, does not give Elsevier the right to publish works. Thus, if either CC-BY-NC or Elsevier’s own license is used, the authors have to grant publishing rights to Elsevier.

However, as Morrison points out, the publishing agreement that Elsevier open access authors sign is far more restrictive. Instead of just granting Elsevier the right to publish their work:

Authors sign an exclusive license agreement, where authors have copyright but license exclusive rights in their article to the publisher**. 

**This includes the right for the publisher to make and authorize commercial use, please see “Rights granted to Elsevier” for more details.

(Text from Elsevier’s page on Copyright).

This is not a subtle distinction. Elsevier and other publishers that offer CC-BY-NC routinely push it to authors under the premise that authors don’t want to allow people to use their work for commercial purposes without their permission. Normally this would be the case with a work licensed under CC-BY-NC. But because exclusive rights to publish works licensed with CC-BY-NC are transferred to Elsevier, the company, and not the authors, are the ones who determine what commercial reuse is permissible. And, of course, it is Elsevier who profit from granting these rights.

It’s bad enough that Elsevier plays on misplaced fears of commercial reuse to convince authors not to grant the right to commercial reuse, which violates the spirit and goals of open access. But to convince people that they should retain the right to veto commercial reuses of their work, and then seize all those rights for themselves, is despicable.

 

Posted in open access | Comments closed

The Imprinter of All Maladies

Any sufficiently convoluted explanation for biological phenomena is indistinguishable from epigenetics.

[Figure: use of the word “epigenetics” over time]

Epigenetics is everywhere. Nary a day goes by without some news story or press release telling us something it explains.

Why does autism run in families?  Epigenetics.
Why do you have trouble losing weight? Epigenetics.
Why are vaccines dangerous? Epigenetics.
Why is cancer so hard to fight? Epigenetics.
Why is a cure for cancer around the corner? Epigenetics.
Why might your parenting choices affect your great-grandchildren? Epigenetics.

Epigenetics is used as shorthand in the popular press for any of a loosely connected set of phenomena purported to result in experience being imprinted in DNA and transmitted across time and generations. Its place in our lexicon has grown as biochemical discoveries have given ideas of extra-genetic inheritance an air of molecular plausibility.

Biologists now invoke epigenetics to explain all manner of observations that lie outside their current ken. Epigenetics pops up frequently among non-scientists in all manner of discussions about heredity. And all manner of crackpots slap “epigenetics” on their fringy ideas to give them a veneer of credibility. But epigenetics has achieved buzzword status far faster and to a far larger extent than current science justifies, earning the disdain of scientists (like me) who study how information is encoded, transferred and read out across cellular and organismal generations.

This simmering conflict came to a head last week around an article in The New Yorker, “Same but Different” by Siddhartha Mukherjee, that juxtaposed a meditation on the differences between his mother and her identical twin with a discussion of the research of Rockefeller University’s David Allis on the biochemistry of DNA and the proteins that encapsulate it in cells, which he and others believe provides a second mechanism for the encoding and transmission of genetic information.

Although Mukherjee hedges throughout his piece, the clear implication of the story is that Allis’s work provides an explanation for differences that arise between genetically identical individuals, and even that it opens the door to legitimizing the long-discredited ideas of the 19th century naturalist Jean-Baptiste Lamarck, who thought that organisms could pass beneficial traits acquired during their lifetimes on to their offspring.

The piece earned a sharp rebuke from many scientists, most notably Mark Ptashne who has long led the anti-epigenetics camp, and John Greally, who published a lengthy take-down of Mukherjee’s piece on the blog of evolutionary biologist Jerry Coyne.

The dispute centers on the process of gene regulation, wherein the levels of specific sets of genes are tuned to confer distinct properties on different sets of cells and tissues during development, and in response to internal and external stimuli. Gene regulation is central to the encoding of organismal form and function in DNA, as it allows different cells and even different individuals of a species to have identical DNA and yet manifest different phenotypes.

Ptashne has studied the molecular basis for gene regulation for fifty years. His and Greally’s critique of Mukherjee, or really of Allis, is rather technical, and one could quibble about some of the specifics. But their main points are simple and difficult to refute:

  • There is essentially no evidence to support the idea that chemical modification of DNA and/or its accompanying proteins is used to encode and transmit information over long periods of time.
  • Rather than representing a separate system for storing and conveying information, a wide range of experiments suggests that the primary role of the biochemistry in question is to execute gene expression programs encoded in DNA and read out by a diverse set of proteins known as transcription factors that bind to specific sequences in DNA and regulate the expression of nearby genes.

In one way this debate is incredibly important because it is ultimately about getting the science right. Mukherjee’s piece contained several inaccurate statements and, by focusing on one aspect of Allis’s work, gave a woefully incomplete picture of our current understanding of gene regulation.

Any system for conveying information about the genome – which is what Mukherjee is writing about – has to have some way to achieve genomic specificity so that the expression of genes can be tuned up or down in a non-random manner. Transcription factors, which bind to specific DNA sequences, provide a link between the specific sequence of DNA and the cellular machines responsible for turning information in DNA into proteins and other biomolecules. Small RNAs, which can bind to complementary sequences in DNA, also have this capacity.

But there is scant evidence for sequence specificity in the activities of the proteins that modify DNA and the nucleosomes around which it is wrapped. Rather they get their specificity from transcription factors and small RNAs. That doesn’t render this biochemistry unimportant – the broad conservation of proteins involved in modifying histones shows they play important roles – but ascribing regulatory primacy to DNA methylation and histone modifications is not consistent with our current understanding of gene regulation.

Something is, however, getting lost in this back-and-forth, as one might come away with the impression that this is a disagreement about whether cells and organisms can transmit information in a manner above and beyond DNA sequence. And this is unfortunate, because there really is no question about this. Ptashne and Allis/Mukherjee are arguing about the molecular details of how it happens and about how important different phenomena are.

Various forms of non-Mendelian information transfer are well established. The most important happens in every animal generation, as eggs contain not only DNA from the mother, but also a wide range of proteins, RNAs and small molecules that drive the earliest stages of embryonic development. The particular cocktail left by the mother can have profound effects on the new organism – so-called “maternal effects”. These effects can be the result of the mother’s genotype, the environment in which she lives, and, in various ways, her experiences during her life. (Such phenomena are not limited to multicellular critters – single-celled organisms distribute many molecules asymmetrically when they divide, conferring different phenotypes on their genetically identical offspring.)

Many maternal effects have been studied in great detail, and in most cases the transmission of state involves the transmission of different concentrations and activities of proteins (including transcription factors) and RNAs. That is, the transmitted DNA is identical, but the state of the machinery that reads out the DNA is different, resulting in different outcomes.

However, there are some good examples in which modifications to DNA play an important role in the transmission of information across generations – most notably “imprinting”, in which an organism preferentially utilizes the copy of a gene it got from one of its parents, to the exclusion of the other, in a way that appears to be independent of the sequence of the gene. Imprinting, which is a relatively rare, but sometimes important, phenomenon, appears to arise from parent-specific methylation of DNA.

Could the histone modifications that Allis studies and Mukherjee focuses on also carry information across cell divisions and generations? Sure. Our understanding of gene regulation is still fairly primitive, and there is plenty of room for the discovery of important inheritance mechanisms involving histone modification. We have to keep an open mind. But the point the critics of Mukherjee are really making is that given what is known today about mechanisms of gene regulation, it is bizarre bordering on irresponsible to focus on a mechanism of inheritance that only might be real.

And Mukherjee is far from the only one to have fallen into this trap. Which brings me to what I think is the most interesting question here: why does this particular type of epigenetic inheritance involving an obscure biochemical process have such strong appeal? I think there are several things going on.

First, the idea of a “histone code” that supersedes the information in DNA exists (at least for now) in a kind of limbo: enough biochemical specificity to give it credibility and a ubiquity that makes it seem important, but sufficient mystery about what it actually is and how it might work that people can imbue it with whatever properties they want. And scientists and non-scientists alike have leapt into this molecular biological sweet spot, using this manifestation of the idea of epigenetics as a generic explanation for things they can’t understand, a reason to hope that things they want to be true might really be, and as a difficult-to-refute, almost quasi-religious, argument for the plausibility of almost any idea linked to heredity.

But there is also something more specifically appealing about this particular idea. I think it stems from the fact that epigenetics in general, and the idea of a “histone code” in particular, provide a strong counterforce to the rampant genetic determinism that has dominated the genomic age. People don’t like to think that everything about the way they are and will be is determined by their DNA, and the idea that there is some magic wrapper around DNA that can be shaped by experience to override what is written in the primary code is quite alluring.

Of course DNA is not destiny, and we don’t need to invoke etchings on DNA to get out of it. But I have a feeling it will take more than a few arch retorts from transcription factor extremists to erase epigenetics from the zeitgeist.

Posted in epigenetics, gene regulation, My lab, science | Comments closed

PLOS, open access and scientific societies

Several people have noted that, in my previous post dealing with PLOS’s business, I didn’t address a point that came up in a number of threads regarding the relative virtues of PLOS and scientific societies – the basic point being that people should publish in society journals because they do good things with the money (run meetings, support fellowships and grants) and that PLOS is to be shunned because it “doesn’t give back to the community”.

 

I agree that many societies do good things to build and support their communities. But sponsoring meetings and fellowships is not the only way to give back to the community. PLOS was founded to make science publishing work better for scientists and the public, and we are singularly devoted to that goal. This means publishing open access journals that succeed as journals. This means demonstrating to a skeptical publishing and funding community that it’s possible to run a successful and stable business that publishes exclusively open access journals. This means working to change the way peer review works and the ways scientists are assessed. This means lobbying to promote laws and policies that increase access to the scientific literature.

Because of PLOS and other open access pioneers, around 20% of new papers are immediately available for people around the world to access without paywalls. PLOS’s success as a publisher has served as a model for other publishers and journals to adopt open access. PLOS’s promotion of open access and our lobbying helped make funder “public access” policies that make millions of papers freely available a reality. And PLOS is now working to promote instant publication, open peer review and other publishing changes that will not only make science more open, but also get science out more quickly and make the ways we evaluate papers and each other more effective. This is what we give back to science. People are, of course, free not to value these things, to question whether PLOS’s role in these things was significant, or to argue that we’ve achieved our goals and are no longer essential. But it’s ridiculous to say that PLOS doesn’t give back to the community just because we don’t sponsor meetings.

Now none of this should be construed as my saying people shouldn’t publish in society journals, provided they are open access of course. One of the reasons we started PLOS was because, back in the late 1990s, most scientific societies rejected the idea that they could take advantage of the Internet’s power to make their work more widely available by using a different business model. We felt they were wrong, and one of PLOS’s main goals has always been to demonstrate that an open access business model could work for them – and I’m thrilled that in many cases this has worked – see open access society journals like G3 and mBio, journals that I wholeheartedly and unambiguously support.

However, a lot of society journals – most – are not open access. And no matter how many meetings and fellowships the revenue from paywalled journals supports, they are not worth it – I’ve yet to see a society whose good works were so good that they outweighed the harm of paywalling the scientific literature. Using meetings as an excuse to paywall the literature is completely unacceptable.

The reliance of so many societies on journal revenues has often made it hard to distinguish them from commercial publishers in their public stance on important issues in science publishing. You would think that, on first principles, scientific societies would support improving access to the scientific literature. Indeed several societies recognized this early on and pioneered open access and other open publishing business models before PLOS came along. However they are the exception. The most powerful societies have for decades not only been trading meetings for access to the literature, they have been using the profits they get from their journals to openly fight open access. Opposition from scientific societies was one of the major reasons for the scuttling of Harold Varmus’s 1999 eBioMed proposal, which would have created an NIH-managed pre-print server with a full system of post-publication peer review. And for years major scientific societies were THE loudest voices on Capitol Hill arguing AGAINST the NIH public access policy and other moves for better access to the scientific literature.


I also have long wondered whether it’s good for societies in a more general sense when they are reliant on publishing revenues for their funding. Societies are supposed to be organizations that represent their members, and yet the concept of being a member of a society has been weakened by the fact that few people actively choose to become a member of a society to support its activities and have a voice in its policies. Rather, people become society members because it gets them access to journals and/or discounts to meetings. I love the Genetics Society of America, but they and many other societies do this weird thing where, if you go to one of their meetings, the cost of attending the meeting as a non-member is greater than the cost of attending as a member plus the cost of membership, so of course everyone “joins” the society. But this kind of membership is weak. And I wonder whether people wouldn’t feel more engaged in their societies, and if societies wouldn’t be more responsive to their members, if they became true membership organizations once again.

Finally, I want to return to the issue of finances. One of the threads in Andy Kern’s series of Tweets about PLOS finances that triggered this series of posts was his surprise that PLOS had margins of ~20% and had ~$25m in assets. In response I encouraged him to look at the finances of scientific societies. I think it’s good that Andy has triggered a conversation about PLOS’s finances – most people are unaware of how the publishing business works – something that’s important if we’re going to change it for the better. And similarly I think it would be great to learn more about the finances of the scientific societies that people support – most of which not only file required Form 990s, but also offer more detailed financial reports. Some of the stuff you find is disturbing (like the fact that the American Chemical Society, long one of the fiercest opponents of open access, is sitting on $1.5b in assets) but most of it is just enlightening. I’ve compiled a list of Form 990s from the member societies of FASEB, and will be adding more information in the coming days.

 

Posted in open access, PLoS | Comments closed

On pastrami and the business of PLOS

Last week my friend Andy Kern (a population geneticist at Rutgers) went on a bit of a bender on Twitter prompted by his discovery of PLOS’s IRS Form 990 – the annual required financial filing of non-profit corporations in the United States. You can read his string of tweets and my responses, but the gist of his critique is this: PLOS pays its executives too much, and has an obscene amount of money in the bank.

Let me start by saying that I understand where his disdain comes from. Back when we were starting PLOS we began digging into the finances of the scientific societies that were fighting open access, and I was shocked to see how much money they were sitting on and how much their CEOs get paid. If I weren’t involved with PLOS and had stumbled upon its Form 990 now, I’d probably have raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.

Let me start with something on which I agree with Andy completely: science publishing is way too expensive. Andy says he originally started poking into PLOS’s finances because he wanted to know where the $2,250 he was asked to pay to publish in PLOS Genetics went, as this seemed like a lot of money to take a paper, have a volunteer academic serve as editor, find several additional volunteers to serve as peer reviewers, and then, if they accept the paper, turn it into a PDF and HTML version and publish it online. And he’s right. It is too much money.

That $2,250 is only about a third of the $6,000 a typical subscription journal takes in for every paper they publish, and that $6,000 buys access for only a tiny fraction of the world’s population, while the $2,250 buys it for everyone. But $2,250 is still too much, as is the $1,495 at PLOS ONE. I’ve always said that our goal should be to make it cost as little as possible to publish, and that our starting point should be $0 a paper.

The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different. There’s a lot of manual labor involved in making sure the submission is complete and passes ethical and technical checks, and in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text, figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need to have office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.

The difference in price between our journals reflects different costs. PLOS Biology and PLOS Medicine have professional editors handling each manuscript, so they’re intrinsically more expensive to operate. They also have relatively low acceptance rates, meaning a lot of staff time is spent on rejected papers, which generate no revenue. This is also the reason for the difference in price between our community journals like PLOS Genetics and PLOS ONE: the community journals reject more papers and thus we have to charge more per accepted paper. It might seem absurd to have people pay to reject other people’s papers, but if you think about it, that’s exactly what makes selective journals attractive – they publish your paper and reject lots of others. I’ve argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they’re going to have these weird economics. And note this is not just true of open access journals – higher-impact subscription journals bring in a lot more money per published paper than low-impact subscription journals, for essentially the same reason.
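To make that arithmetic concrete, here is a minimal sketch of why lower acceptance rates push per-paper charges up. Every figure in it is a hypothetical placeholder, not PLOS’s actual cost structure:

```python
# Hypothetical illustration of selective-journal economics -- none of
# these figures are PLOS's actual costs.
HANDLING_COST_PER_SUBMISSION = 300.0   # staff/editorial cost for every submission (USD)
PRODUCTION_COST_PER_PAPER = 500.0      # typesetting/hosting cost for each accepted paper (USD)

def break_even_apc(acceptance_rate):
    """APC needed to break even when only accepted papers pay.

    Every submission costs money to handle, but only the accepted
    fraction brings in revenue, so handling costs get spread across
    the accepted papers.
    """
    return HANDLING_COST_PER_SUBMISSION / acceptance_rate + PRODUCTION_COST_PER_PAPER

for rate in (0.7, 0.3, 0.1):
    print(f"acceptance rate {rate:.0%}: break-even APC ~ ${break_even_apc(rate):,.0f}")
```

With these made-up numbers, a journal accepting 10% of submissions needs roughly triple the charge of one accepting 30% – which is the shape of the effect described above, whatever the real figures are.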

Could PLOS do all these things more efficiently, more effectively and for less money? Absolutely. We, like most other big publishers, are using legacy software and systems to handle submissions, manage peer review and convert manuscripts into published papers. These systems are, for the most part, outdated and either difficult or expensive (usually both) to customize. We are in a challenging situation since, until very recently, we weren’t in a position to develop our own systems for doing all these things, and we couldn’t just switch to cheaper or free systems since they weren’t built to handle the volume of papers we deal with.

That said, it’s certainly possible to run journals much, much more cheaply. It costs the physics pre-print server arXiv something like $10 a paper to maintain its software, screening and website. There are times when I wish PLOS had just hacked together a bunch of Perl scripts, hung out a shingle and built in new features as we needed them. But part of what made PLOS appealing at the start is that it didn’t work that way – for better or worse it looked like a real journal, and this was one of the things that made people comfortable with our (at the time) weird economic model. I’m not sure this is true anymore; if I were starting PLOS today I would do things differently, and I think I could do things much less expensively. I would love it if people would set up inexpensive or even free open access biology journals – it’s certainly possible with open source software and fully volunteer labor – and for people to get comfortable with biomedical publishing basically being no different than just posting work on the Internet, with lightweight systems for peer review. That has always seemed to me to be the right way to do things. But PLOS can’t just pull the plug on all the things we do, so we’re trying to achieve the same goal by investing in developing software that will make it possible to do all of the things PLOS does faster, better and cheaper. We’re going to start rolling it out this year, and, while I don’t run PLOS and can’t speak for the whole board, I am confident that this will bring our costs down significantly and that we will ultimately be in a position to reduce prices.

Which brings us to issue number two. Andy and a lot of other people took umbrage at the fact that PLOS has margins of 20% and ~$25 million in assets. Again, I understand why people look at these numbers and find them shocking – anything involving millions of dollars always seems like a lot of money. But this is a misconception. Both of these numbers represent nothing more than what is required for PLOS to be a stable enterprise.

I’ll start by reminding people that PLOS is still a relatively young company, working in a rapidly changing industry. Like most startups, it took a long time for PLOS to break even. For the first nine years of our existence we lost money every year, and were able to build our business only because we got strong support from foundations that believed in what we were doing. Finally, in 2011, we reached the point where we were taking in slightly more money than we were spending, allowing us to wean ourselves off foundation support. But we still had essentially no money in the bank, and that’s not a good thing. Good operating practices for any business dictate that the company have money in the bank to cover a downturn in revenue. This is particularly the case with open access publishers, since we have no guaranteed revenue stream – in contrast to subscription publishers, who make long-term subscription deals. What’s more, this industry is changing rapidly, with the number of papers going to open access journals growing, but with many new open access publishers entering the market. So it’s very hard for us to predict what our business is going to look like from year to year, while a lot of our expenses, like rent, software licenses and salaries, have to be paid before the revenue they enable comes in. The only way to survive in this market is to have a decent amount of money in the bank to buffer against the unpredictable. If anything, I am told by people who spend their lives thinking about these things, we’re cutting things a little close. So, while 20% margins may seem like a lot, given our overall financial situation and the fact that we’ve been profitable for only five years, I think it’s actually a reasonable compromise between keeping costs as low as we can and ensuring that PLOS remains financially stable, while also allowing us to make modest investments in technology that will make publishing better and cheaper in the long run.
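To see why $25m in the bank is not a hoard for an operation of this size, here is a minimal runway calculation. Only the reserve figure comes from the numbers above; the annual expense figure is a hypothetical assumption for illustration, not PLOS’s actual budget:

```python
# Minimal cash-runway sketch. The reserve figure is the one cited in
# the post; the annual expense figure is a hypothetical assumption.
RESERVES = 25_000_000          # assets in the bank (USD)
ANNUAL_EXPENSES = 40_000_000   # hypothetical yearly operating costs (USD)

monthly_burn = ANNUAL_EXPENSES / 12
runway_months = RESERVES / monthly_burn

print(f"monthly burn: ${monthly_burn:,.0f}")
print(f"runway if revenue stopped tomorrow: {runway_months:.1f} months")
```

Under that assumption the reserve covers well under a year of operating costs, which is the sense in which a buffer like this looks closer to prudent than excessive.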

Just to put these numbers in perspective for people who (like me) aren’t trained to think about these things, I had a look at the finances of a large set of scientific societies. I looked primarily at the members of FASEB, a federation of most of the major societies in molecular biology. Many of them have larger operating margins, and far larger cash reserves than PLOS. And I haven’t found one yet that doesn’t have a larger ratio of assets to expenses than PLOS does. And these are all organizations that have far more stable revenue streams than PLOS does. So I just don’t think it’s fair to suggest that either PLOS’s margins or reserves are untoward.

Indeed these numbers represent something important – that PLOS has become a successful business. I’ll once again remind people that one of the major knocks against open access when PLOS started was that we were a bunch of naive idealists (that’s the nicest way people put it) who didn’t understand what it took to run a successful business. Commercial publishers and societies alike argued repeatedly to scientists, funders and legislators that the only way to make money in science publishing was to use a subscription model. So it was absolutely critical to the success of the open access movement that PLOS not only succeed as a publisher, but that we also succeed as a business – to show the commercial and society publishers that their principal argument for why they refused to shift to open access was wrong. Having been the recipient of withering criticism – both personally and as an organization – about being too financially naive, it’s ironic and a bit mind-boggling to all of a sudden be criticized for having created too good of a business.

Now despite that, I don’t want people to confuse my defense of PLOS’s business success with a defense of the business it’s engaged in. I believe the APC/service business model PLOS has helped to develop is far, far superior to the traditional subscription model, because it does not require paywalls, but I’ve never been comfortable with the APC business model in an absolute sense (and I recognize the irony of my saying that) because I wish science publishing weren’t a business at all. When we started PLOS the only way we had to make money was through APCs, but if I had my druthers we’d all just post papers online in a centralized server funded and run by a coalition of governments and funders, and scientists would use lightweight software to peer review published papers and organize the literature in useful ways. And no money would be exchanged in the process. I’m glad that PLOS is stable and has shown the world that the APC model can work, but I hope that we can soon move beyond it to a very different system.

Now I want to end on the issue that seemed to upset people the most – the salaries of PLOS’s executives. I am immensely proud of the executive team at PLOS – they are talented and dedicated. They make competitive salaries – and we’d have trouble hiring and retaining them if they didn’t. The board has been doing what we felt we had to do to build a successful company in the marketplace we live in – after all, we were founded to fix science publishing, not capitalism. But as an individual I can’t help but feel that’s a copout. The truth is the general criticism is right. A system where executives make so much more money than the staff they supervise isn’t just unfair, it’s ultimately corrosive. It’s something we all have to work to change, and I wish I’d done more to help make PLOS a model of this.

Finally, I want to acknowledge a tension evident in a lot of the discussion around this issue. Some of the criticism of PLOS – especially about margins and cash flow – has been just generally unfair. But other criticism – about salaries and transparency – reflects something important. I think people understand that in these ways PLOS is just being a typical company. But we weren’t founded to just be a typical company – we were founded to be different and, yes, better, and people have higher expectations of us than they do of a typical company. I want it to be that way. But PLOS was also not founded to fail – that would have been terrible for the push for openness in science publishing. I am immensely proud of PLOS’s success as a publisher, agent for change, and a business – and of all the people inside and outside of the organization who helped achieve it. Throughout PLOS’s history there were times we had to choose between abstract ideals and the reality of making PLOS a successful business, and I think, overall, we’ve done a good, but far from perfect, job of balancing this tension. And moving forward I personally pledge to do a better job of figuring out how to be successful while fully living up to those ideals.

 

Posted in open access, PLoS | Comments closed

Berkeley’s Handling of Sexual Harassment is a Disgrace

What more is there to say?

Another case where a senior member of the Berkeley faculty, this time Berkeley Law Dean Sujit Choudhry, was found to have violated the campus’s sexual harassment policy, and was given a slap on the wrist by the administration. Astronomer Geoff Marcy’s punishment for years of harassment of students was a talking-to and a warning never to do it again, and now Choudhry has been put on some kind of secret probation for a year, sent for additional training, and docked 10% of his meager $470,000-a-year salary.

Despite a constant refrain from senior administrators that the university takes cases of sexual harassment seriously, the administration’s actions demonstrate that it does not. What is the point of having a sexual harassment policy if violations of it carry essentially no sanctions? Through its responses to Marcy and Choudhry, it is now clear that the university views sexual harassment by its senior male faculty not as what it is – an inexcusable abuse of power that undermines the university’s entire mission and has a severe negative effect on our students and staff – but rather as a mistake that some faculty make because they don’t know better.

If the university wants to show that it is serious about ending sexual harassment on campus, then it has to take cases of sexual harassment seriously. This means being unambiguous about what is and is not acceptable behavior, and imposing real consequences when people violate the rules. Faculty and administrators who engage in harassing behavior don’t do it by accident. They make a choice to engage in behavior they either know is wrong, or have no excuse for not knowing is wrong. And, at Berkeley at least, they do so knowing that if they get caught, the university will respond by saying “Bad boy. Don’t do that again. We’re watching you now.” Does anyone think this is an actual deterrent?

Through its handling of the Marcy, Choudhry and other cases, the Berkeley administration has shown utter contempt for the welfare of its students and staff. It has shown that it views its job not as creating an optimal environment for education by ensuring that faculty behavior is consistent with the university’s mission, but rather as protecting faculty, especially famous ones, from the consequences of their actions.

It is now clear that excuse making and wrist slapping in response to sexual harassment is so endemic in the Berkeley administration that it might as well be official policy. And just as there is no excuse for sexually harassing students and staff, there is no excuse for sanctioning this kind of behavior. It’s time for the administrators – all of them – who have repeatedly failed the campus community on this issue to go. It’s the only way forward.

[Image: Berkeley administration org chart]

Posted in Uncategorized | Comments closed

I’m Excited! A Post Pre-Print-Posting-Powwow Post

I just got back from attending a meeting organized by a new group called ASAPbio whose mission is to promote the use of pre-prints in biology.

I should start by saying that I am a big believer in this mission. I have been working for two decades to convince biomedical researchers that the Internet can be more than a place to download PDFs from paywalled journal websites, and universal posting of pre-prints – or “immediate publication” as I think it should be known – is a crucial step towards the more effective use of the Internet in science communication. We should have done this 20 years ago, when the modern Internet was born, but better late than never.

There were reasons to be skeptical about this meeting. First, change needs to happen on the ground, not in conference halls – I have been to too many publishing meetings that involved a lot of great talks about the problems with publishing and how to fix them, but which didn’t amount to much because these calls weren’t translated into action. Second, the elite scientists, funders and publishers who formed the bulk of the invite-only ASAPbio attendees have generally been the least responsive to calls to reform biomedical publishing (I understand why this was the target group – while young, Internet-savvy scientists tend to be much more supportive in principle, they are reluctant to act because of fears about how it will affect their careers, and are looking to the establishment to take the first steps). Finally, my new partner-in-crime Leslie Vosshall and I spent a lot of time and energy trying to rally support for pre-prints online leading up to the meeting, and it wasn’t like people were knocking down the doors to sign on to the cause.

However, I wouldn’t have kept at this for almost half my life if I weren’t an eternal optimist, and I went into the meeting hoping, if not believing, that this time might be different. And I have to say I was pleasantly surprised. By the end of the meeting’s 24 hours it seemed like nearly everyone in attendance was sold on the idea that biomedical researchers should all post pre-prints of their work, and had already turned their attention to questions about how to do it. And there was surprisingly little resistance to the idea that post-publication review of papers initially posted as pre-prints could, at least in principle, fulfill the functions that pre-publication review currently carries out. That’s not to say there weren’t concerns and even some objections – there were, as I will discuss below. But these were all dealt with to varying degrees, and there seemed to be a general attitude that these concerns could be addressed, and did not constitute reasons not to proceed.

Honestly, I don’t think any new ideas emerged from the meeting. Everything that was discussed has been discussed and written about extensively before. But the purpose of the meeting was not to break new ground. Rather I think the organizers were trying to do three things (I’m projecting a bit here since I wasn’t one of the organizers):

  • To transfer knowledge from the small group of us who have been in the trenches of this movement to prominent members of the research community who are open to these ideas, but who hadn’t really ever given them much thought or attention
  • To make sure potential pitfalls and challenges of pre-prints were discussed. Although the meeting was dominated by members of the establishment, there were several young PIs and postdocs, representatives of different fields and a few international participants, who raised a number of important issues and generally kept the meeting from becoming a self-congratulatory elite-fest.
  • To inspire everyone to act in tangible ways to promote pre-print use.

And I think the meeting was highly effective in all three regards. For those of you who weren’t there and didn’t follow online or on video, here’s a rough summary of what happened (there are archived videos here).

The opening night was dominated by a keynote talk from Paul Ginsparg, who in 1991 started an online pre-print server for physics that is now the locus for the initial publishing of essentially all new work in physics, mathematics and some areas of computer science. Paul is a personal hero of mine – for what he did with arXiv and for just being a no-bullshit advocate for sanity in science publishing – so I was bummed that he couldn’t make it in person because of weather-related travel issues. But his appearance as a giant head on a giant screen by video-conference was a fitting representation of his giant place in pre-print history. His talk was very effective in squashing any of the typical gloom-and-doom about the end of quality science that often happens when pre-prints are discussed. A little bit of biology exceptionalism came up in the Q&A (“Yeah, it works for physics, but biology is different…”) but I thought Paul put most of those ideas to rest, especially the idea that all physics is done by giant groups working underground surrounded by large metal tubes.

The second day had two sessions, each structured around a series of a dozen or so five-minute talks, followed by breakout sessions and then discussion. The morning focused on why people don’t use pre-prints – concerns about establishing priority, being able to publish in journals, getting jobs and funding – and how to address these concerns, while the afternoon sessions were about how to use pre-prints in evaluating papers and scientists and in finding and organizing published scientific information.

I can’t summarize everything that was discussed, but here, in no particular order, are my thoughts on the meeting and where to go from here:

I was surprised at how uncontroversial pre-prints were

Having watched the battles over Harold Varmus’ proposal to have biologists embrace pre-prints in 1999, and having taken infinite flak over the last 20 years for promoting a model of science communication based on immediate publication and post-publication peer review, I expected the idea that biologists should make their work initially available as pre-prints to be controversial. But it wasn’t. Essentially everyone at the meeting embraced the basic concept of pre-prints from the beginning, and we spent most of the meeting discussing details about how a pre-print system in biology can and should work, and how to build momentum for pre-print use.

I honestly don’t know how this happened. Pre-prints are close to invisible in biology (we didn’t really have a viable pre-print server until a year or so ago) and other recent efforts to promote pre-print usage in biology have been poorly received. There is lots of evidence from social media that most members of the community fall somewhere in the skeptical to hostile range when discussing pre-prints. Some of it is selection bias – people hostile to pre-prints weren’t likely to agree to come to a meeting on pre-prints that they (mostly) had to pay their own way to attend.

But I think it’s bigger than that. I think the publishing zeitgeist may have finally shifted. I’ve felt this way before, so I’m not sure I’m a good witness. But I think people are really ready for it this time. The signs were certainly there: after all Ron Vale, who organized ASAPbio, is no publishing radical – his publishing record is everything I’ve been trying to fight against for the last 20 years. But now he’s a convert, at least on pre-prints, and others are following suit. I don’t know whether it’s because all our work has finally paid off, or if it’s just time. The Internet has become so ingrained in our lives, maybe people finally realized how ridiculous it is that people all over the world could watch the ASAPbio meeting streaming live on their computers, but they have to wait months and months and months to be able to read about our latest science.

In the end I don’t really care why things seem to have changed. Even as I redouble my efforts to make sure this moment doesn’t elude us, I’m going to celebrate – this has been a long time coming.

Glamour journals remain a huge problem

One of the most shocking moments of this meeting came in a discussion right before the close about how to move forward to make pre-prints work. Marc Kirschner, a prominent cell biologist, made the suggestion that people at the meeting publish pre-prints of their papers at the time of submission so long as it is OK with the journal they plan to submit to. I don’t think Kirschner was trying to set down some kind of abstract principle. Rather I think he was speaking to the reality that no matter how effectively we sell pre-prints, in the short run most scientists are still going to strive to put their work in the highest-profile journals they can get it into; and we can make progress with pre-prints if we point out that a lot of the journals people choose to publish in for other reasons allow them to post pre-prints, and that they should avail themselves of this opportunity.

This was the one time at the meeting where I lost my cool (a publishing meeting where I lose my cool only once is a first). It’s not that it surprises me that journals have this kind of hold on people. But I was still flabbergasted that after a meeting whose entire point was that it would be really good for science if people posted pre-prints, someone could suggest that we should give journals – not scientists – the power to decide whether pre-print posting is okay. And I couldn’t believe that people in the audience didn’t rise up in outrage at the most glaring and obvious example of how dysfunctional and toxic – one might even say dystopian – our relationship to journals is.

This is why I maintain my position – echoed by Vitek Tracz at the meeting, and endorsed by a handful of others – that science communication is never going to function optimally until we rid ourselves of the publish-or-reject paradigm employed by virtually all journals, and until we stop defining our success as scientists based on whether or not we could winkle our way into one of the uber-exclusive slots in glamorous journals. If anything is going to stop the move towards pre-prints, it’s going to be our proclivity for “glamor humping” (as blogger DrugMonkey has aptly dubbed this phenomenon). And if anything has the power to undermine the benefits of pre-prints, it’s if we allow this mentality to dominate in the post-journal world.

People have weird views of priority

One of the few new things I learned at this meeting is how obsessed a large number of people are with technical definitions of priority. We spent 30 minutes talking about whether pre-prints should count in establishing priority for discoveries. First of all, I can’t believe there’s any question about this – of course they should! But more importantly, who thinks that questions of priority actually get decided by carefully scrutinizing who published what, when and on what date? It’s a lovely scholarly ideal to imagine that there’s some kind of court of science justice where hearings are held on every new idea or discovery, everything that’s been published or said about the idea is presented, and a panel of judges then rules on who really was the first to publish, or present, the idea/discovery in a sufficiently complete form to get credit for it.

But I got news for all the people counting submission dates on the head of a pin – outside of patent cases, where such courts really do exist, at least in theory, that ain’t the way it works. True priority is constantly losing out in the real world, where who you are, where you work, where you publish and how you sell yourself are often far more important than submission or publication dates in determining who gets credit (and its trappings) for scientific advances.

Cell Press has a horrible, but kind of sane, policy on pre-prints

One of the things that I think a lot of people coming to the meeting didn’t realize is that many journals are perfectly fine with people posting pre-prints of articles that are being considered by the journal. Some, like eLife, PLOS, PeerJ and Genetics, actively encourage it. Others, like EMBO, PNAS, Science and all Nature journals, unambiguously allow pre-print posting. On the flip side, journals from the American Chemical Society and some other publishers will not accept papers if they were posted as pre-prints. And then there’s Cell.

Cell‘s policy is, on the surface, hard to parse:

If you have questions about whether posting a manuscript or data that you plan to submit to this journal on an openly available preprint server or poster repository would affect consideration, we encourage you to contact an editor so that we may provide more specific guidance. In many cases, posting will be possible.

Fortunately, Emilie Marcus, CEO of Cell Press and Editor-in-Chief of Cell, was at the meeting to explain it to us. Her response – I’m paraphrasing, but I think I’m capturing it correctly – was that they are happy to publish papers initially posted as pre-prints so long as the information in the paper had not already been noticed by people in the field. In other words, it’s OK to post pre-prints so long as nobody noticed the pre-print. That is, they are rather unambiguously not endorsing the point of pre-prints, which is to get your work out to the community more quickly and effectively.

This is a pretty cynical policy. Cell clearly wants to get credit for being down with pre-prints without actually sanctioning them. But I actually found Marcus’s explanation of the policy to make sense, in a way. She views Cell as a publisher, and, as such, its role is to make information public. If that information has already been successfully conveyed by other means, then the role of publisher is no longer required.

This is obviously a quaint view – Cell is technically a publisher, but its more important role is as a selector of research that it deems to be interesting and important. So I think it’s more appropriate to look at this as a business decision. In refusing to help make pre-prints a reality, Elsevier and Cell Press are acting as if they believe pre-prints are a threat to their bottom line. And they’re right. Because if pre-prints become universal, who in their right mind is going to subscribe to Cell?

Maybe the other journals that endorse pre-prints are banking on the symbiosis between pre-prints and journals that exists in physics being extended to biomedicine. In questions after his talk Ginsparg said that ~80% of papers posted on arXiv are ultimately published in a peer-reviewed journal. And these journals are almost exclusively subscription based. So why don’t libraries cancel these subscriptions? The optimistic answer (for those who like journals) is that libraries want to support the services journals provide and are willing to pay for them even if they’re not providing access to the literature. This may be true. But the money in physics publishing is a drop in the bucket compared to biomedicine, and I just can’t see libraries continuing to spend millions of dollars per year on subscriptions to journals that provide paywalled access to content that is freely available elsewhere. I could be wrong, of course, but it seems that Elsevier, who for all their flaws clearly know how to make money, in this case agree with me.

I don’t know what effect the Cell policy will have in the short run. I’d like to think people who are supportive of pre-prints will think twice before sending a paper to Cell in the future because of this policy (of course I’d like it if they never considered Cell in the first place, but who am I kidding). But I suspect this is going to be a drag on the growth of pre-prints — how big a drag, I don’t know, but it’s something we’re probably going to have to work around.

There are a lot of challenges in building a fair and effective pre-print system

The position of young scientists on pre-prints is interesting. On the one hand, they have never scienced without the Internet, and are accustomed to being able to get access to information easily and quickly. On the other hand, they are afraid that the kinds of changes we are pushing will make their lives more difficult, and will make many of the pathologies in the current system, especially those biased against them, worse. Even those who have no reservations about pre-prints and/or post-publication review don’t feel like they’re in a position to lead the charge.

This is one of the biggest challenges we have moving forward. I have no doubt that science communication systems built around immediate publication and post-publication review can be better for both science and scientists. But that doesn’t mean they automatically will be better. Indeed, I share many of others’ concerns about turning science into an even bigger popularity contest than it already is; about making it easier for powerful scientists to reinforce their positions and thwart their less powerful competitors; about increasing the potency of the myriad biases that poison training, hiring, promotion and funding; about making the process of receiving feedback on your work even less pleasant and collegial than it already is; and about increasing the incentives for scientists to prioritize glamour over doing rigorous, high-quality and durable work.

I will write more elsewhere about these issues and how I think we should try to address them. But it is of paramount importance that everybody who is trying to promote the move to pre-prints and beyond, and who is building systems to do this, be mindful of all these risks and do everything in their power to make sure the new systems work for everyone in science. We have to remember that for every bigshot who opposes pre-prints because they want to preserve their ability to publish in Cell, there are hundreds of scientists who just want to preserve their ability to do science. If this latter group doesn’t believe that pre-print posting is good for them, we will not only fail to convince them to join us on this path, but we run the serious risk of making science worse than it already is. And that would be a disaster.

Will attendees of the meeting practice what they preached

Much of the focus of the meeting organizers was on getting people who attended the meeting to sign on to a series of documents expressing various types of commitment to promoting pre-prints in biomedicine (you can see these on the ASAPbio site). These documents are fairly strong, and I will sign them. But I’m sick of pledges. I’ve been down this path too many times before. People come to meetings, they sign a document saying they do all sorts of great stuff, and then they forget about it.

The only thing that matters to me is making sure that the people who attended the meeting, and who seemed really energized about making pre-prints work, start to put this enthusiasm into practice immediately. I look forward to quick, concrete action from funders. But the immediate goal of the scientists at the meeting, or who support its goals, must be to start posting pre-prints. This is especially true of prominent, senior scientists. There were four Nobelists at the meeting, many members of national academies, and other A-list scientists. It’s a small number of people in the grand scheme of things, but these scientists can demonstrate that they are really committed to making pre-prints work by starting to post pre-prints in the next week (I suspect that most people at this level have a paper under review at all times). I am confident that their commitment is genuine – indeed some have already posted pre-prints from their labs since the meeting ended yesterday.

Obviously we don’t want pre-prints to be the domain of the scientific 1%. But we have to start somewhere, and if people who have nothing to lose won’t lead the way, then it will never happen. But it seems like they actually are leading the way. There’s tons more hard work to do, but let’s not miss this opportunity. The rainbow unicorn is watching.

[Image: ArcLive rainbow unicorn]

 

Posted in open access, science | Comments closed

The Villain of CRISPR

There is something mesmerizing about an evil genius at the height of their craft, and Eric Lander is an evil genius at the height of his craft.

Lander’s recent essay in Cell entitled “The Heroes of CRISPR” is his masterwork, at once so evil and yet so brilliant that I find it hard not to stand in awe even as I picture him cackling loudly in his Kendall Square lair, giant laser weapon behind him poised to destroy Berkeley if we don’t hand over our patents.

This paper is the latest entry in Lander's decades-long assault on the truth. During his rise from math prodigy to economist to the de facto head of the public human genome project to member of Obama's council of science advisors to director of the powerful Broad Institute, he has shown an unfortunate tendency to treat the truth as an obstacle that must be overcome on his way to global scientific domination. And when one of the world's most influential scientists treats science's most elemental and valuable commodity with such disdain, the damage is incalculable.

CRISPR, for those of you who do not know, is an anti-viral immune system found in archaea and bacteria that, until a few years ago, was all but unknown outside the small group of scientists, mostly microbiologists, who had been studying it since its discovery a quarter century ago. Interest in CRISPR spiked in 2012 when a paper from colleagues of mine at Berkeley and their collaborators in Europe described a simple way to repurpose components of the CRISPR system of the bacterium Streptococcus pyogenes to cut DNA in an easily programmable manner.

Such a capability had been long sought by biologists, as targeted DNA cleavage is the first step in gene editing – the ability to replace one piece of DNA in an organism's genome with DNA engineered in the lab. This 2012 paper from Martin Jinek and colleagues was quickly joined by a raft of others applying the method in vivo, modifying and improving it in myriad ways, and utilizing its components for other purposes. Among the earliest was a paper from Le Cong and Fei Ann Ran, working at Lander's Broad Institute, which described CRISPR-based gene editing in human and mouse cells.

Now, less than four years after breaking onto the gene-editing scene, virtually all molecular biology labs are either using, or planning to use, CRISPR in their research. And amidst this explosion of interest, fights have erupted over who deserves the accolades that usually follow such scientific advances, and who owns the patents on the use of CRISPR in gene editing.

The most high-profile of these battles pits Berkeley against the Broad Institute, although researchers from many other institutions made important contributions. Jinek's work was carried out in the lab of Berkeley's Jennifer Doudna, and in close collaboration with Emmanuelle Charpentier, now at the Max Planck Institute for Infection Biology in Berlin; while Cong and Ran were working under the auspices of the Broad's Feng Zhang. Interestingly, the prizes for CRISPR have largely gone to Doudna and Charpentier, while, for now at least, the important patents are held by Zhang and the Broad. But this could all soon change.

There has been extensive speculation that CRISPR gene editing will earn Doudna and Charpentier a Nobel Prize, but there has been considerable lobbying for Zhang to join them (Nobel Prizes are, unfortunately, doled out to a maximum of three people). On the flip side, the Broad's claim to the patent is under dispute, and is the subject of a legal battle that could turn into one of the biggest and most important in biotechnology history.

I am, of course, not a disinterested party. I know Jennifer well and am thrilled that her work is getting such positive attention. I also stand to benefit professionally if the patents are awarded to Berkeley, as my department will get a portion of what are likely to be significant proceeds (I have no personal stake in any CRISPR-related patents or companies).

But if I had my way, there would be no winner in either of these fights. The way prizes like the Nobel give disproportionate credit to a handful of individuals is an injustice to the way science really works. When accolades are given exclusively to only a few of the people who participated in an important discovery, it by necessity denies credit to countless other people who also deserve it. We should celebrate the long series of discoveries and inventions that brought CRISPR to the forefront of science, and all the people who participated in them, rather than trying to decide which three were the most important.

And, as I have long argued, I believe that neither Berkeley nor MIT should have patents on CRISPR, since it is a disservice to science and the public for academic scientists to ever claim intellectual property in their work.

Nonetheless, these fights are underway. Which brings us back to Dr. Lander. Although he had nothing to do with Zhang's CRISPR work, as Director of the Broad Institute he has taken a prominent role in promoting Zhang's case for both prizes and patents. But rather than simply go head-to-head with Doudna and Charpentier, Lander has crafted an ingenious strategy that is as clever as it is dishonest (see Nathaniel Comfort's fantastic "A Whig History of CRISPR" for more on this). Let's look at the way Lander's argument is crafted.

To start, Lander cleaves history into two parts – Before Zhang and After Zhang – defining the crucial event in the history of CRISPR to be the demonstration that CRISPR could be used for gene editing in human cells. This dividing line is made explicit in Figure 2 of his “Heroes” piece, which maps the history of CRISPR with circles representing key discoveries. The map is centered on a single blue dot in Cambridge, marking Zhang as the sole member of the group that carried out the “final step of biological engineering to enable genome editing”, while everyone who preceded him gets labeled as a green natural historian or red biochemist.

[Image: Figure 2 from Lander's "Heroes of CRISPR"]

(Note also how he distorted the map of the world so that the Broad lies almost perfectly in the center. What happened to Iceland and Greenland? How did Europe get so far south and so close to North America? And what happened to the rest of the world? Where’s Asia, for example? Shouldn’t there be a big blue circle in Seoul?)

While some lawyer might find this argument appealing, it is a scientifically absurd point of view. For the past decade, researchers, including Zhang, have been using proteins – zinc finger nucleases and TALENs – engineered to cut DNA in specific places to carry out genome editing in a variety of different systems. If there was a key step in bringing CRISPR to the gene editing party, it was the demonstration that its components could be used as a programmable nuclease, something that arose from a decade’s worth of investigation into how CRISPR systems work at the molecular level. Once you have that, the application to human cells, while not trivial, is obvious and straightforward.

The best analogy for me is the polymerase chain reaction (PCR), another vital technique in molecular biology that emerged from the convergence of several disparate lines of work over decades, and which gained prominence with the work of Kary Mullis, who demonstrated an efficient method for amplifying DNA sequences in vitro. Arguing that Zhang deserves singular credit for CRISPR gene editing is akin to arguing that whoever was the first to amplify human DNA using PCR should get full credit for its invention. (And I'll note that the claim that Zhang was unambiguously the first to do this is questionable – see this and this for example).

I want to be clear that in arguing against giving exclusive credit to Zhang, I am not arguing for singular credit to go to any other single group, as I think this does not do justice to the way science works. But if you are going to engage in this kind of silliness, you should at least endeavor to do it honestly. The only reason anyone would ever argue that CRISPR credit should be awarded to the person who first deployed it in human cells is if they had decided in advance that full credit should go to Zhang and searched after the fact for a reason to make this claim.

Even Lander seems to have sensed that he had to do more than just make a tenuous case for Zhang – he also had to tear down the case for Doudna and Charpentier. And this wasn't going to be easy, since their paper preceded Zhang's, and they were already receiving widespread credit in the biomedical community for being its inventors. Here is where his evil genius kicks in. Instead of taking Doudna and Charpentier on directly, he did something much more clever: he wrote a piece celebrating the people whose work had preceded and paralleled theirs.

This was an evil genius move for several reasons:

First, the people whose work Lander writes about really are deserving of credit for pioneering the study of CRISPR, and they really have been unfairly written out of the history in most stories in the popular and even scientific press. This establishes Lander as the good guy, standing up to defend the forgotten scientists toiling in off-the-beaten-path places. And even though, in my experience, Doudna and Charpentier go out of their way to highlight this early work in their talks, Lander's gambit makes them look complicit in the exclusion.

Second, by going into depth about the contributions of early CRISPR pioneers, Lander is able to almost literally write Doudna and Charpentier (and, for that matter, the groups of genome-editing pioneer George Church and Korean scientist Jin-Soo Kim, whose CRISPR work has also been largely ignored) out of this history. They are mentioned, of course, but everything about the way they are mentioned seems designed to minimize their contributions. They are given abbreviated biographies compared to the other scientists he discusses. And instead of highlighting the important advances in the Jinek paper, which were instrumental to Zhang's work, Lander focuses on the work of Giedrius Gasiunas, working in the lab of Virginijus Siksnys in Lithuania. Lander relates in detail how they had similar findings to Jinek and submitted their paper first, but struggled to get it published, suggesting later in the essay that it was Doudna and Charpentier's savvy about the journal system, and not their science, that earned them credit for CRISPR.

The example of Gasiunas and Siksnys is a good one for showing how unfair the system we have for doling out credit, accolades and intellectual property in science can be. While Gasiunas did not combine the two RNA components of the CRISPR-Cas9 system into a single "guide RNA" as was done by Jinek – a trick used in most CRISPR applications – they demonstrated the ability to reprogram CRISPR-Cas9, and were clearly on the path to gene editing. And neither Jinek's nor Gasiunas's work would have been possible without the whole body of CRISPR work that preceded them.

But the point of Lander's essay is not to elevate Siksnys; it is, as is made clear by the single blue circle on the map, to enshrine Zhang. His history of CRISPR, while entertaining and informative, is a cynical ploy, meant to establish Lander's bonafides as a defender of the little person, so that his duplicity in throwing Siksnys under the bus when he didn't need him anymore wouldn't be so transparent.

What is particularly galling about this whole thing is that Lander has a long history of attempting to rewrite scientific history so that credit goes not to the forgotten little people, but to him and those in his inner circle. The most prominent example of this is the pitched battle for credit for sequencing the human genome, in which Lander time and time again tried to rewrite history to paint the public genome project, and his role in it, in the most favorable light.

Indeed, far from being regarded as a defender of lesser-known scientists, Lander is widely regarded as someone who plays loose with scientific history in the name of promoting himself and those around him. And "Heroes of CRISPR" is the apotheosis of this endeavor. The piece is an elaborate lie that organizes and twists history with no other purpose than to achieve Lander's goals – to win Zhang a Nobel Prize and the Broad an insanely lucrative patent. It is, in its crucial moments, so disconnected from reality that it is hard to fathom how someone so brilliant could have written it.

It’s all too easy to brush this kind of thing aside. After all Lander is hardly the first scientist to twist the truth in the name of glory and riches. But what makes this such a tragedy for me is that, in so many ways, Lander represents the best of science. He is a mathematician turned biologist who has turned his attention to some of the most pressing problems in modern biomedicine. He has published smart and important things. As a mathematician turned biologist myself, it’s hard for me not to be more than a little proud that a math whiz has become the most powerful figure in modern biology. And while I don’t like his scientific style of throwing millions of dollars at every problem, he has built an impressive empire and empowered the careers of many smart and talented people whose work I greatly value and respect.

But science has a simple prime directive: to tell the truth. Nobody, no matter how powerful and brilliant they are, is above it. And when the most powerful scientist on Earth treats the truth with such disdain, they become the greatest scientific villain of them all.

Posted in Berkeley, CRISPR, science, University of California | Comments closed

Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Leslie Vosshall and I have written the following white paper as a prelude to the upcoming ASAP Bio meeting in February aimed at promoting pre-print use in biomedicine. We would greatly value any comments, questions or concerns you have about the piece or what we are proposing.


[PDF Version]

Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Michael Eisen1,2 and Leslie B. Vosshall 3,4

1 Department of Molecular and Cell Biology and 2 Howard Hughes Medical Institute, University of California, Berkeley, CA. 3 Laboratory of Neurogenetics and Behavior and 4 Howard Hughes Medical Institute, The Rockefeller University, New York, NY.

mbeisen@berkeley.edu; leslie@rockefeller.edu

Scientific papers are the primary tangible and lasting output of a scientist. They are how we communicate our discoveries, and how we are evaluated for hiring, promotion, and prizes. The current system by which scientific papers are published predates the internet by several hundred years, and has changed little over centuries.

We believe that this system no longer serves the needs of scientists.

  1. It is slow. Manuscripts spend an average of nine months in peer review prior to publication, and reviewers increasingly demand more data and more experiments to endorse a paper for publication. These delays massively slow the dissemination of scientific knowledge.
  2. It is expensive. We spend $10 billion a year on science and medical journal publishing, over $6,000 per article, and increasingly these costs are coming directly from research grants.
  3. It is arbitrary. The current system of peer review is flawed. Excellent papers are rejected, and flawed papers are accepted. Despite this, journal name continues to be used as a proxy for the quality of the paper.
  4. It is inaccessible. Even with the significant efforts of the open-access publishing movement, the vast majority of scientific literature is not accessible without a subscription.

In view of these problems, we strongly support the goal of ASAP Bio to accelerate the online availability of biomedical research manuscripts. If all biomedical researchers posted copies of their papers when they were ready to share them, these four major pathologies in science publishing would be cured.

The goal of ASAP Bio to get funders and other stakeholders to endorse the adoption of pre-prints is laudable. But without fundamental reform in the way that peer review is carried out, the push for pre-prints will not succeed. An important additional goal for the meeting must therefore be for funders to endorse alternative mechanisms for carrying out peer review. Such mechanisms would operate outside of the traditional journal-based system and focus on assessing the quality, audience, and impact of work published exclusively as “pre-prints”. We anticipate that, if structured properly, a new system of pre-print publishing coupled with post-publication peer review will replace traditional scientific publishing much as online user-driven reviews (Amazon, Yelp, Trip Advisor, etc.) have replaced publisher-driven metrics to assess quality (Consumer Reports, Zagat, Fodor’s, etc.).

In this white paper we explain why the adoption of pre-prints and peer review reform are inseparable, outline possible alternative peer review systems, and suggest concrete steps that research funders can take to leverage changes in peer review to successfully promote the adoption of pre-prints.

Pre-prints and journal-based peer review cannot coexist

The essay by Ron Vale that led to the ASAP Bio meeting is premised on the idea that we should use pre-prints to augment the existing, journal-based system for peer review. In Vale’s model, biomedical researchers would post papers on pre-print servers and then submit them to traditional journals, which would review them as they do today, and ultimately publish those works they deem suitable for their journal.

There are many reasons why such a system would be undesirable – it would leave intact a journal system that is inefficient, ineffective, inaccessible, and expensive. But more proximally, there is simply no way for such a symbiosis between pre-prints and the existing journal system to work.

Pre-print servers for biomedicine, such as bioRxiv, run by the well-respected Cold Spring Harbor Laboratory, now offer biomedical researchers the option to publish their papers immediately, at minimal cost. Yet biologists have been reluctant to make use of this opportunity because they have no incentive to do so, and in many cases have incentives not to. If we as a biomedical community want to promote the universal adoption of pre-prints, we have to do more than pay lip-service to the potential of pre-prints; we have to change the incentives that drive publishing decisions. And this means changing peer review.

Why are pre-prints and peer review linked? Scientists publish for two reasons: to communicate their work to their colleagues, and to get credit for it in hiring, promotion and funding. If publishing behavior were primarily driven by a desire to communicate, biomedical scientists would leap at the opportunity to post pre-prints, which make their work available to the widest possible audience at the earliest possible time at virtually no cost. That they do not underscores the reality that, for most biomedical researchers, decisions about how they publish are driven almost entirely by the impact of these decisions on their careers.

Pre-prints will not be embraced by biomedical scientists until we stop treating them as “pre” anything, which suggests that a better “real” version is yet to come. Instead, pre-prints need to be accepted as formally published works. This can only happen if we first create and embrace systems to evaluate the quality and impact of, and appropriate audience for, these already published works.

But even if we are wrong, and pre-prints become the norm anyway, we would still need to create an alternative to journal-based peer review. If all, or even most, papers are available for free online, it is all but certain that libraries would begin to cut subscriptions, and traditional journal publishing, which still relies almost exclusively on revenue from subscriptions, would no longer be economically viable.

Thus a belief in the importance of pre-print use in biomedicine requires the creation of an alternative system for assessing papers. We therefore suggest that the most important act for funders, universities, and other stakeholders is not just to endorse the use of pre-prints in biomedicine, but to endorse the development and use of a viable alternative to journal titles in the assessment of the quality, impact, and audience of works published exclusively as “pre-prints”.

Peer review for the Internet Age

The current journal-based peer review system attempts to assure the quality of published works; help readers find articles of import and interest to them; and assign value to individual works and the researchers who created them. Post-publication peer review of works initially published as pre-prints can not only replicate these services, but do it faster, cheaper and more effectively.

The primary justification for carrying out peer review prior to publication is that this prevents flawed works from seeing the light of day. Inviting a panel of two or three experts to assess the methods, reasoning, and presentation of the science in the paper undoubtedly leads to many flaws being identified and corrected.

But any practicing scientist can easily point to deeply flawed papers that have made it through peer review in their field, even in supposedly high-profile journals. Yet even when flaws are identified, it rarely matters. In a world where journal title is the accepted currency of quality, a deeply flawed Science or Nature paper is still a Science or Nature paper.

Prepublication review was developed and optimized for printed journals, where space had to be rationed because printing and shipping a journal were expensive. But today it is absurd to rely solely on the opinions of two or three reviewers, who may or may not be the best qualified to assess a paper, who often did not want to read the paper in the first place, who are acting under intense time pressure, and who are casting judgment at a fixed point in time, to be the sole arbiters of the validity and value of a work. Post-publication peer review of pre-prints is scientific peer review optimized for the Internet Age.

Beginning to experiment with systems for post-publication review now will hasten its development and acceptance, and is the quickest path to the universal posting of pre-prints. In the spirit of experimentation, we propose a possible system below.

A system for post-publication peer review

First, authors would publish un-reviewed papers on pre-print servers that screen them to remove spam and papers that fail to meet technical and ethical specifications, before making them freely available online. At this point peer review begins, proceeding along two parallel tracks.

Track 1: Organized review in which groups, such as scientific societies or self-assembling sets of researchers, representing fields or areas of interest arrange for the review of papers they believe to be relevant to researchers in their field. They could either directly solicit reviewers or invite members of their group to submit reviews, and would publish the results of these reviews in a standardized format. These groups would be evaluated by a coalition of funding agencies, libraries, universities, and other parties according to a set of commonly agreed upon standards, akin to the screening that is done for traditional journals at PubMed.

Track 2: Individually submitted reviews from anyone who has read the paper. These reviews would use the same format as organized reviews, and would, like organized reviews, become part of the permanent record of the paper. Ideally, we want everyone who reads a paper carefully to offer their view of its validity, audience, and impact. To ensure that the system is not corrupted, individually submitted reviews would be screened for appropriateness, conflicts of interest, and other problems, and there would be mechanisms to adjudicate complaints about submitted reviews.

Authors would have the ability at any time to respond to reviews and to submit revised versions of their manuscript.
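To make the idea of a single, standardized review format concrete, here is a minimal sketch of what one entry in a paper's permanent review record might look like. Everything in it (the field names, the 1-to-5 scales, and the vouching field used for anonymous reviews, which we discuss below) is a hypothetical illustration, not a specification.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Track(Enum):
    ORGANIZED = "organized"    # Track 1: review arranged by a sanctioned group
    INDIVIDUAL = "individual"  # Track 2: review submitted by any reader

@dataclass
class Review:
    """One entry in the permanent, public review record of a published pre-print.
    All field names and scales here are illustrative assumptions, not a spec."""
    preprint_id: str                  # identifier of the pre-print being reviewed
    track: Track
    reviewer: Optional[str]           # None if the reviewer wishes to remain anonymous
    vouched_by: Optional[str] = None  # Track 1 group vouching for an anonymous reviewer
    validity: int = 0                 # e.g. 1-5 assessment of methods and reasoning
    audience: list[str] = field(default_factory=list)  # communities the work is relevant to
    impact: int = 0                   # e.g. 1-5 anticipated importance to that audience
    conflicts_declared: bool = False
    text: str = ""                    # the narrative review itself

    def acceptable(self) -> bool:
        """Screening rule: an anonymous review must be vouched for by a group."""
        return self.reviewer is not None or self.vouched_by is not None

# Example: an anonymous reader review vouched for by a (hypothetical) society
r = Review(preprint_id="example-preprint-001", track=Track.INDIVIDUAL,
           reviewer=None, vouched_by="Example Genetics Society",
           validity=4, audience=["molecular biology"], impact=3,
           conflicts_declared=True, text="Methods are sound; figure 2 needs controls.")
assert r.acceptable()
```

In such a scheme, an author response or a revised version of the manuscript would simply append further entries of the same kind to the paper's record.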

Such a system has many immediate advantages over our current system of pre-publication peer review. The amount of scrutiny a paper receives will scale with the level of interest in the paper. If a paper is read by thousands of people, many more than the three reviewers chosen by a journal are in a position to weigh in on its validity, audience, and importance. Instead of only evaluating papers at a single fixed point in time, the process of peer review would continue for the useful lifespan of the paper.

What about concerns about anonymity for reviewers? We believe that peer review works best when it is completely open and reviewers are identified. This both provides a disincentive to various forms of abuse, and allows readers to put the review in perspective. We also recognize that there are many scientists who would not feel comfortable expressing their honest opinions without the protection of anonymity. We therefore propose that reviews be allowed to remain anonymous provided that one of the groups defined in Track 1 above vouches for the reviewer’s lack of conflict and appropriate expertise. This strikes the right balance between providing anonymity to reviewers and protecting authors from anonymous attacks.

What about the concern that flawed papers will be published, or be subject to misuse and misinterpretation, while they are being reviewed? We do not consider this to be a serious problem. The people in the best position to make use of immediate access to published papers – practicing scientists in the field of the paper – are in the best position to judge the validity of the work themselves and to share their impressions with others. Readers who want an external assessment of the quality of a work can wait until it comes in, and are then no worse off than they are in the current system. If implemented properly, such a system would get the best of both worlds – rapid access for those who want and need it, and quality control over time for a wider audience.

Assessing quality and audience without journal names

The primary reason the traditional journal-based peer review system persists despite its anachronistic nature is that the title of the journal in which a scientific paper appears reflects the reviewers’ assessment of the appropriate audience for the paper and their valuation of its contributions to science. There is obviously value in having people who read papers judge their potential audience and impact, and there are many circumstances where having an external assessment of a scientist’s work can be of use. But there is no reason we have to use journal titles to convey this information.

It would be relatively simple to give reviewers of published pre-prints a set of tools to specify the most appropriate audience for the paper, to anticipate their expected level of interest in the work, and to gauge the impact of the work. We can also take advantage of various automated methods to suggest papers to readers, and for such readers to rate the quality of a paper by a set of useful metrics. Systems that use the Internet to harness collective expertise have fundamentally changed nearly every other area of human society – it’s time for them to do the same for science.
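As a purely illustrative sketch of how such structured ratings could stand in for a journal title, the few lines below collapse a set of reviews of a single paper into per-paper summary metrics; the field names, scales, and choice of summary statistics are assumptions made up for the example, not a proposal for specific metrics.

```python
from collections import Counter
from statistics import mean, median

def summarize_reviews(reviews: list[dict]) -> dict:
    """Collapse structured reviews of one pre-print into a per-paper summary
    that could stand in for a journal title. All fields are illustrative."""
    if not reviews:
        return {"n_reviews": 0}
    audiences = Counter(a for r in reviews for a in r.get("audience", []))
    return {
        "n_reviews": len(reviews),
        "median_validity": median(r["validity"] for r in reviews),
        "mean_impact": round(mean(r["impact"] for r in reviews), 2),
        "top_audiences": [a for a, _ in audiences.most_common(3)],
    }

# Example: three hypothetical reviews of the same paper
print(summarize_reviews([
    {"validity": 5, "impact": 4, "audience": ["genomics"]},
    {"validity": 4, "impact": 3, "audience": ["genomics", "microbiology"]},
    {"validity": 4, "impact": 5, "audience": ["genomics"]},
]))
```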

Actions

A commitment to promoting pre-prints in biomedicine requires a commitment to promoting a new system for reviewing works published initially as un-reviewed pre-prints. Such systems are practical and a dramatic improvement over the current system. We call on funders and other stakeholders to endorse the universal posting of pre-prints and post-publication peer review as inseparable steps that would dramatically improve the way scientists communicate their ideas and discoveries. We recognize that such a system requires standards, and propose that a major outcome of the ASAP Bio meeting be the creation of an “International Peer Review Standards Organization” to work with funders and other stakeholders to establish these criteria and to work through many of the important issues, and then serve as a sanctioning body for groups of reviewers who wish to participate in this system. We are prepared to take the lead in assembling an international group of leading scientists to launch such an organization.

Posted in open access | Comments closed

The current system of scholarly publishing is the real infringement of academic freedom

Rick Anderson has a piece on “Open Access and Academic Freedom” at Inside Higher Ed arguing that the open access policies being put into place by many research funders and some universities that require authors to make their work available under open licenses (most commonly Creative Commons’ CC-BY) are a violation of academic freedom and should be viewed with skepticism.

Here is the basic crux of his argument:

The meaningful right that the law provides the copyright holder is the exclusive (though limited) right to say how, whether, and by whom these things may be done with his work by others.

So the question is not whether I can, for example, republish or sell copies of my work under CC BY — of course I can. The question is whether I have any say in whether someone else republishes or sells copies of my work — and under CC BY, I don’t.

This is where it becomes clear that requiring authors to adopt CC BY has a bearing on academic freedom, if we assume that academic freedom includes the right to have some say as to how, where, whether, and by whom one’s work is published. This right is precisely what is lost under CC BY. To respond to the question “should authors be compelled to choose CC BY?” with the answer “authors have nothing to fear from CC BY” or “authors benefit from CC BY” is to avoid answering it. The question is not about whether CC BY does good things; the question is whether authors ought to have the right to choose something other than CC BY.

Although for reasons I outline below I disagree with Anderson’s conclusion that concerns about academic freedom should trump the push for greater access, the point bears some consideration, especially because he is far from the only one raising it.

But what actually is this “academic freedom” we are talking about? I will admit that, even though I am a long-time academic and have a general sense of what academic freedom is, when I first started hearing this complaint about open access mandates, I didn’t really understand what the term “academic freedom” actually means. And part of the problem is that there isn’t really a single, well-defined thing called “academic freedom”.

The Wikipedia definition pretty much captures the concept:

Academic freedom is the belief that the freedom of inquiry by faculty members is essential to the mission of the academy as well as the principles of academia, and that scholars should have freedom to teach or communicate ideas or facts (including those that are inconvenient to external political groups or to authorities) without being targeted for repression, job loss, or imprisonment.

But this broad concept lacks a unified concrete reality. Anderson cites as his evidence that CC-BY mandates violate academic freedom the following passage from the widely-cited “1940 Statement of Principles on Academic Freedom and Tenure” from the American Association of University Professors:

Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties; but research for pecuniary return should be based upon an understanding with the authorities of the institution.

Note that while this document provides a definition of academic freedom that has been fairly widely accepted, it is not in any way legally binding nor, more importantly, does it reflect a universal consensus about what academic freedom is. Nonetheless, it’s hard not to get behind the general principle that academics should have the “freedom to publish”. However, it is by no means clear what this actually entails.

Virtually everything I have ever read about academic freedom starts with the importance of giving academics the freedom to express the results of their scholarship irrespective of their specific conclusions. We grant them tenure in large part to protect this freedom, and I know of no academic who would sanction their employer telling them that they can not publish something they wish to publish.

But imposing a requirement that academics employ a CC-BY license does not restrict the content of their publications; rather, it limits the venues available for publication (and it’s important for open access supporters to acknowledge this – there exist journals today that would not accept papers that were available online elsewhere, with or without a CC-BY license). But I’m not sure this constitutes a limit on academic freedom.

Clearly some restrictions on venues would have the effect of restricting authors’ ability to communicate their work. If a university told its academics that they could only publish in venues that appeared exclusively in print, it would unambiguously limit their ability to communicate, and we would not sanction it. But what if it required that all works be available online to facilitate assessment and access for students? This would also impose some limits on where they could publish, but, in the current online-heavy universe, it would not be a meaningful limit on the authors’ ability to communicate.

So it seems to me that we have to make a choice. Approach 1 would be to evaluate such conditions on a case by case basis to determine if the limitations placed on authors actually limit academic freedom.  Approach 2 would be to enshrine the principle that any conditions placed on how or where academics publish by universities and funders are unacceptable.

If we take the case-by-case approach, we have to ask if the specific requirement that authors make their work available under a CC-BY license constitutes an infringement of their freedom to communicate their work. It certainly imposes some limits on where they can publish, but, given the wide diversity of journals that don’t prohibit pre-prints, it’s hard to describe this as a significant infringement.

The second issue raised by Anderson is that by requiring CC-BY, and thereby granting others the right to reuse and republish a work without author permission, you are depriving authors of the right to control how their work is used. I am a bit sympathetic to this point of view. But in reality authors have already lost an element of this control, as the fair use component of copyright law grants others the right to use published works in certain ways without author permission (to write reviews of the work, for example), so it’s hard to see this as a major intrusion.

Which brings me to one of my main points. Anderson argues that the principle of “freedom to publish” should be sacrosanct. But it clearly is not. While scholars may have the theoretical ability to publish their work wherever they want to, in reality the hiring, promotion, tenure and funding policies of universities and funding agencies impose a major constraint on how and where academics publish. Scientists are expected to publish in certain journals; other academics are expected to publish books with certain publishers. Large parts of the academic enterprise are currently premised on restricting the freedom of academics to publish where and how they want. In comparison to these restrictions – which manifest themselves on a daily basis – the added imposition of requiring a CC-BY license seems insignificant.

Furthermore, one has to view the push for CC-BY licenses in a broader context in which they are part of an effort to alter the ecology of scholarly publishing so that authors are not judged by their publication in a narrow group of journals or with a narrow group of university presses. Thus I would argue that, viewed practically, the shift to CC-BY would actually promote academic freedom and the freedom of authors to publish how and where they want.

One could reasonably respond that it’s not my place to decide on behalf of other scholars what does and does not constitute an imposition on their academic freedom. Which brings us to approach 2: enshrining the principle that any conditions placed on how or where academics publish by universities and funders are unacceptable. If you hold this position then you will clearly view a mandatory CC-BY policy as an unacceptable imposition on academic freedom. But you would then also have to see the hiring, promotion, tenure and funding policies that push authors to certain venues as an even bigger betrayal of academic freedom. I am happy to completely embrace this point of view.

In the end, I didn’t find Anderson’s article as repugnant as many of my open access friends did. Academic freedom is important, and it should be defended. And the points he raised are interesting and important to consider. But I take exception with Anderson’s focus on the supposed negative effects of the use of a CC-BY license on academic freedom, when, if we are serious about defending academic freedom we should instead be looking at how the entire system of scholarly publishing limits it. Indeed, I have now been inspired by Anderson’s article to make academic freedom a major lynchpin of my future arguments in favor of fundamental reform of scholarly publishing.

 

Posted in academic freedom, open access, public access, science | Comments closed

Vegan Thanksgiving Picnic Pie Recipe

I posted some pictures of this Thanksgiving-themed picnic pie (completely vegan) on Twitter and Facebook.



A bunch of people asked me for my recipe. Unfortunately, it was almost completely improvised, so I don’t have a recipe. But here is roughly what I did.

First of all, until a few weeks ago I had no idea what a picnic pie was. But then I was randomly channel surfing and came upon a show called “The Great British Bake Off” in which three people were competing in various baking challenges – the final one of which was making a “Picnic Basket Pie” – which is basically a bread pan lined with pastry dough that is filled with layers of various things (meat, cheese, veggies, etc…), baked, and then sliced into slabs that show off the layers.

I liked the concept, and so as I started to think about what to cook for Thanksgiving (as a vegan going to non-vegan houses I’m always forced to cook my own meal), it occurred to me to make a Thanksgiving-themed picnic pie with layers like mashed potatoes, stuffing, cranberry sauce, etc…

I started with one of the recipes from the show, by the one contestant whose pie was at least vegetarian. The only thing I used was the recipe for the dough, which is basically just normal pastry dough with a bit of baking powder added (not sure why).

Dough

600g (~4 cups) of all purpose flour
3/4 cup (3 sticks) of unsalted margarine or shortening
1/2 tsp salt
1/2 tsp baking powder

Cut the margarine into the flour with fingers, a fork or a pastry mixer. Add ~150ml of water, form into a ball, and place in the fridge for at least an hour. When ready to form, take out of the fridge and let sit for 15m to warm up.

Roll out ~2/3 of the dough into a shape that will fit into a high-sided bread pan (mine is around 8″ x 4″ x 4″). Cut a piece of parchment paper about 6″ wide and long enough to go under the dough in the dish with the ends sticking out as handles (you’re going to use this to take the pie out of the dish). Then carefully fit the dough into the pan. Make sure it is intact with no holes.

Fillings

The key thing for each of these layers is that they be relatively dry so that they won’t leak out moisture and ruin the structural integrity of the crust. I mostly made these up on the fly, but here is roughly what I did.

Layers from bottom to top:

Polenta: I started by spreading a layer of dried, uncooked polenta on the bottom. This was to represent traditional Thanksgiving corn, but also to absorb excess moisture. Although I was careful not to have wet layers, I figured there would be enough water to cook the polenta as I baked the pie. But this turned out not to be correct. So if I do this again, I’ll cook the polenta first.

Greens: Sliced a leek and sautéed it in olive oil with ~1 Tbs crushed roasted garlic. When done, roughly chopped two bunches of Swiss chard and added to the pan, cooking until wilted. I then pressed as much of the water as I could out of the greens in a strainer. Added on top of the polenta.


Sweet Potatoes: Sliced a large Beauregard yam into ~3/4″ slices and then quartered them. Put them into a baking dish with a layer of olive oil. Sprinkled with brown sugar and then baked ~20m at 400F until soft. Added on top of the chard, trying hard to pack densely.


Stuffing: Sliced an onion and a stalk of celery. Cooked in olive oil until softened. Added about 2 or 3 cups of sliced brown mushrooms and cooked until soft. I then added bread crumbs until fairly dry. Added salt to taste. Added on top of sweet potatoes.


Mashed potatoes: Peeled and diced ~6 russet potatoes. Boiled until soft. Mashed with potato masher. Added margarine and salt to taste. Layered on top of stuffing.


Cranberry sauce: Started with the directions on the back of the bag. Boiled two cups of sugar in two cups of water. Added two 12oz. bags of cranberries. Simmered on medium for at least an hour (probably more) until the berries were soft and starting to pop. Crushed them with a potato masher. Then strained through a fine strainer. Set the flow-through aside (this is a good cranberry sauce for kids) and layered the now relatively dry, somewhat sweetened cranberries on top of the mashed potatoes.


Top

Made a lattice top by cutting four long strips ~1″ wide and then weaving shorter pieces along the short axis. Pinched the edges together.


Baking

Baked 50 minutes at 400F. Let cool for a while. I served it cold, but I think it was better when I reheated it, so if you make this you might try serving it 30m or so after baking.

Impression

Overall I thought this came out really well. It held together perfectly – no moisture leaked out to screw up the dough. And the flavors went well together. I’m definitely going to make things like this again.

 

Posted in Uncategorized | Comments closed