Vegan Thanksgiving Picnic Pie Recipe

I posted some pictures of this Thanksgiving-themed picnic pie (completely vegan) on Twitter and Facebook.


A bunch of people asked me for my recipe. Unfortunately, it was almost completely improvised, so I don’t have a recipe. But here is roughly what I did.

First of all, a few weeks ago I had no idea what a picnic pie was. But then I was randomly channel surfing and came upon a show called “The Great British Bake Off” in which three people were competing in various baking challenges – the final one of which was making a “Picnic Basket Pie” – basically a bread pan lined with pastry dough that is filled with layers of various things (meat, cheese, veggies, etc…), baked, and then sliced into slabs that show off the layers.

I liked the concept, and so as I started to think about what to cook for Thanksgiving (as a vegan going to non-vegan houses, I’m always forced to cook my own meal) it occurred to me to make a Thanksgiving-themed picnic pie with layers like mashed potatoes, stuffing, cranberry sauce, etc…

I started with one of the recipes from the show – from the one contestant whose pie was at least vegetarian. The only thing I used was the recipe for the dough, which is basically just normal pastry dough with a bit of baking powder added (not sure why).


600g (~4 cups) of all purpose flour
1 1/2 cups (3 sticks) of unsalted margarine or shortening
1/2 tsp salt
1/2 tsp baking powder

Cut the margarine into the flour with your fingers, a fork, or a pastry blender. Add ~150ml of water, form into a ball, and refrigerate for at least an hour. When ready to use, take the dough out of the fridge and let it sit for 15m to warm up.

Roll out ~2/3 of the dough into a shape that will fit into a high-sided bread pan (mine is around 8″ x 4″ x 4″). Cut a piece of parchment paper about 6″ wide and long enough to go under the dough in the pan with the ends sticking out as handles (you’re going to use these to lift the pie out of the pan). Then carefully fit the dough into the pan. Make sure it is intact with no holes.


The key thing for each of these layers is that they be relatively dry, so that they won’t leak moisture and ruin the structural integrity of the crust. I mostly made these up on the fly, but here is roughly what I did.

Layers from bottom to top:

Polenta: I started by spreading a layer of dried, uncooked polenta on the bottom. This was to represent traditional Thanksgiving corn, but also to absorb excess moisture. Although I was careful not to have wet layers, I figured there would be enough water to cook the polenta as I baked the pie. But this turned out not to be correct. So if I do this again, I’ll cook the polenta first.

Greens: Sliced a leek and sautéed it in olive oil with ~1 Tbs of crushed roasted garlic. When done, roughly chopped two bunches of Swiss chard, added them to the pan, and cooked until wilted. I then pressed as much of the water as I could out of the greens in a strainer. Added on top of the polenta.


Sweet Potatoes: Sliced a large Beauregard yam into ~3/4″ slices and then quartered them. Put them into a baking dish with a layer of olive oil. Sprinkled with brown sugar and then baked ~20m at 400F until soft. Added on top of the chard, packing densely.


Stuffing: Sliced an onion and a stalk of celery. Cooked in olive oil until softened. Added about 2 or 3 cups of sliced brown mushrooms and cooked until soft. I then added bread crumbs until the mixture was fairly dry. Added salt to taste. Added on top of the sweet potatoes.


Mashed potatoes: Peeled and diced ~6 russet potatoes. Boiled until soft. Mashed with potato masher. Added margarine and salt to taste. Layered on top of stuffing.


Cranberry sauce: Started with the directions on the back of the bag. Boiled two cups of sugar in two cups of water. Added two 12oz. bags of cranberries. Simmered on medium for at least an hour (probably more) until the berries were soft and starting to pop. Crushed them with a potato masher, then strained through a fine strainer. Set the liquid that came through aside (it makes a good cranberry sauce for kids) and layered the now relatively dry, somewhat sweetened cranberries on top.



Made a lattice top by cutting four long strips ~1″ wide and then weaving shorter pieces along the short axis. Pinched the edges together.



Baked 50 minutes at 400F. Let cool for a while. I served it cold, but I think it was better when I reheated it, so if you make this you might try serving it 30m or so after baking.


Overall I thought this came out really well. It held together perfectly – no moisture leaked out to ruin the dough. And the flavors went well together. I’m definitely going to make things like this again.



You Have Died Of Peer Review




I’ve been feeling the need for some new publishing-related t-shirts, and somehow this idea popped into my head.


You Have Died of Peer Review

For those of you who don’t know, it’s based on the popular 80s computer game Oregon Trail, where games would often end with the alert that “You Have Died Of Dysentery.”

I made it into t-shirts, as one does, which you can get here.



The New York Times’ serial open access slimer Gina Kolata has a clear conflict of interest

Yesterday Gina Kolata published a story in the New York Times about the fact that many clinical studies are never published. This is a serious problem, and it’s a good thing that it is being brought to light.

But her article contains a weird section in which a researcher at the University of Florida explains why she hadn’t published the results of one of her studies:

Rhonda Cooper-DeHoff, for example, an assistant professor of pharmacotherapy and translational research at the University of Florida, tried to publish the results of her study, which she completed in 2009. She wrote a paper and sent it to three journals, all of which summarily rejected it, she said.

The study, involving just two dozen people, asked if various high blood pressure drugs worsened sugar metabolism in people at high risk of diabetes.

“It was a small study and our hypothesis was not proven,” Dr. Cooper-DeHoff said. “That’s like three strikes against me for publication.” Her only option, she reasoned, would be to turn to an open-access journal that charges authors to publish. “They are superexpensive and accept everything,” she said. Last year she decided to post her results on

Why is that sentence in there? First, it’s completely false. There are superexpensive open access journals, and there are open access journals that accept everything. But I don’t know of any open access journal that does both, and neither statement applies to the journals (from PLOS, BMC, Frontiers, eLife and others) that publish most open access papers.

Is the point of that sentence supposed to be that there are journals that will publish anything, including a massively underpowered clinical study, but that they’re too expensive to publish in? That would fit the narrative Kolata is trying to develop – that people don’t publish negative results because it’s too hard to – but this too is completely false. Compared to the cost of doing a clinical trial, even a small one, the article processing fees for most open access journals are modest, and most offer waivers to those who cannot pay.

It may seem like a minor thing, but these kinds of things matter. There are a lot of misconceptions about open access publishing among scientists and the public, and when the paper of record repeats these misconceptions it compounds the problem.

So why does something like this get into the paper? I assume the quoted researcher said that, or something like it. But newspapers aren’t supposed to just let people they quote say things that are patently false without pointing that out.


Kolata has been covering science for all of the 15 years that open access publishing has been around, and used to work for Science magazine. So it’s simply not credible to believe that she thinks this assertion about open access is true. Instead, it sure looks like she let a source’s false and misleading statement about open access stand without countering it because it fit her narrative of people not being able to publish their findings.


So, after reading this article I made a few tweets about it, and would have let it go at that. But then I remembered something. A few years ago, Kolata published a story about “predatory open access publishers”, in which she characterized such publishers as the “dark side of open access”.

I wrote about this story at the time, and won’t repeat myself here, but suffice it to say that her article went out of its way to condemn all open access publishing because of some bad actors working at its fringes, while ignoring the far more significant sins of subscription publishing.

Sensing a bit of a pattern, I searched to see if she’d ever written other things about open access, and came upon a 2010 article on Amy Bishop, the scientist who shot and killed three of her colleagues at the University of Alabama in Huntsville. It contains this bizarre paragraph on open access:

One 2009 paper was published in The International Journal of General Medicine. Its publisher, Dovepress, says it specializes in “open access peer-reviewed journals.” On its Web site, the company says, “Dove will publish your paper if it is deemed of interest to someone, therefore your chance of having your paper accepted is very high (it would be a very unusual paper that wasn’t of interest to someone).”

What is the point of bringing open access into a story about whether a murderer did good science? Did Kolata go through Bishop’s published papers, evaluate each of the journals in which they were published, and offer up some kind of synthesis? No. She cherry-picked a single article published in an open access journal and, instead of criticizing the science, she made it about the journal and its method of publication. This paragraph seems to be there just to knock open access publishing and to associate publishing in open access journals with being a murderer!

If it was just once, or maybe even twice, I’d chalk it up to bad reporting or writing. But three separate gratuitous attacks on open access seem like more than a coincidence for someone who has had such a long and distinguished career around science.

It wouldn’t be the first time that members of the science establishment (and the science section of the New York Times is amongst the biggest bulwarks of the science establishment) have taken pot shots at open access and open access journals. But I was curious why Kolata seems to make such a habit of it, and so I went back to her Wikipedia page to find out when she had worked at Science, to see if maybe she had been poisoned by their long history of anti-open-access rhetoric. Turns out it was 1973-1987, before open access came along.

But I noticed the following line in her biography:

Her husband, William G. Kolata, has taught mathematics and served as the technical director of the non-profit Society for Industrial and Applied Mathematics in Philadelphia, a nonprofit professional society for mathematicians.

SIAM, it so happens, is a fairly big publisher, with, according to their IRS Form 990, annual subscription revenues of around $6,000,000 (and another $1,000,000 in membership dues, which, for many societies, are often just another way to subscribe to a journal). Now as publishers go, SIAM hasn’t been particularly anti open-access, and their journals engage in so-called “hybrid” open access in which they’ll let you pay an extra fee to make articles freely available (enabling the publisher to double dip by collecting both open access fees and subscriptions, since only a small number of authors choose the open access option).

But given that the ~$125,000 per year that Kolata’s husband makes from SIAM is threatened by changes to scholarly publishing, including open access, it would seem that Kolata has at least a mild conflict of interest here in trying to prop up the subscription publishing industry and in denigrating new models and new players in the industry.

At the very least, the fact that, in addition to her own lengthy career in science publishing and science journalism, Kolata’s husband has been involved in running a scientific society that is primarily involved in publishing makes it seem highly unlikely that her digs at open access are born of ignorance. And whether her motivation is to prop up the dying industry in which her husband just happens to be employed, or she’s just on some kind of weird petty vendetta, we should watch carefully when Kolata writes about open access in the future and not let her get away with this kind of sliming anymore.


What Geoffrey Marcy did was abominable; What Berkeley didn’t do was worse

I am so disappointed in, and revolted by, my university.

On Friday, Azeen Ghorayshi posted a story about Geoffrey Marcy, a high-profile professor in UC Berkeley’s astronomy department. It reported on a complaint filed by four women to Berkeley’s Office for the Prevention of Harassment and Discrimination (OPHD) alleging that Marcy “repeatedly engaged in inappropriate physical behavior with students, including unwanted massages, kisses, and groping.”

Unusually for this type of investigation, the results of which are usually kept secret, Ghorayshi’s reporting revealed that OPHD found Marcy guilty of these charges, leading him to issue a public apology in which, in all-too-typical PR-driven apology speak, he acknowledged doing things that “unintentionally” were “a source of distress for any of my women colleagues”.

There’s not much to say about his actions except that they are despicable, predatory, destructive and all too typical. It defies even the most extreme sense of credulity to believe that he thought what he was doing was appropriate.

But, unlike so many other cases of alleged harassment that go unreported, or end in a haze of accusations and denials, the system worked in this case. An investigation was carried out, the charges were substantiated, the bravery of the women who came forward was vindicated, and Marcy was removed from the position of authority he had been abusing.

WAIT WHAT? He got a firm talking to and promised never to do it again????? THAT’S IT???

It is simply incomprehensible that Marcy was not sanctioned in any way and that, were it not for Ghorayshi’s work, we wouldn’t even know anything about this. How on Earth can this be true? Does the university not realize they are giving other people in positions of power a license to engage in harassment and abusive behavior? Do they think that the threat of having to say “oops, I won’t do that again” is going to stop anyone? Do they think anyone is going to file complaints about sexual harassment or abuse, and go through what everyone described as an awful, awful process, so that their abuser will get a faint slap on the wrist? Do they care at all?

Sadly, I think the answer to the last question is “No”.

As I was absorbing this, I was reflecting on having just completed the state-mandated two-hour online course on sexual harassment. First of all, Marcy is required to have taken this course. If he had paid any attention (and didn’t have someone else take it for him), he would have no excuse for not being aware of how inappropriate and awful his actions were.

But I also realized something more fundamental – at no point during all the scenarios with goofily named participants, flowcharts of reporting procedures and discussions of legal requirements was there anything about sanctions.

When you study to get a driver’s license, you learn not just about the laws of the road, but about what happens if you violate them. And while most of us want to drive safely, it is the threat of sanctions that prevents us from speeding, running red lights and such. Why is there no discussion of sanctions regarding actions that are not just violations of university policy but are, in many cases, crimes?

I am all in favor of education about sexual harassment. But isn’t the fact that this kind of shit keeps happening over and over evidence that education is not enough? There HAVE to be consequences – serious consequences – for abusing positions of power. Do we honestly think that someone who likes to stick his hand up the shirts of his students and give them back rubs is going to be dissuaded from doing so because he (yes, it’s pretty much always he) is going to go back over the “Determining whether conduct is welcome” checklist in his mind? Do we think someone who wants to inappropriately touch students at dinner is going to stop because of some scenario he clicked through?

I’m not trying to argue against this kind of education. It is vital. But it is mostly aimed at helping people recognize harassment as a third party. It seems aimed more at supervisors, to teach them how to respond to harassment in their midst, and it seems more interested in parsing marginal cases than in saying “DON’T TOUCH YOUR STUDENTS” and “DON’T ABUSE YOUR POSITION OF POWER”.

Here is a perfect example:

Dr. Risktaker

I’m sure male faculty all imagine themselves as the debonair professor who poor female students can’t help having the hots for. But it’s bullshit. The case we have to worry about is exactly the opposite – the one we know happens all the time – where “Randy Risktaker” has the hots for “Suzie Scholar” and uses his position of power over her to impose himself on her.

[And can we talk about names here for a second? Randy Risktaker and Suzie Scholar seem straight out of porn. Is that really the message we want to be sending here? Don’t you think the Geoffrey Marcys of the world read that and go — ooh, I AM a randy risktaker…]

And how does the university respond to this scenario?

Dumb Answers

First, they want to remind us that students CAN harass professors, creating a bizarre false equivalence and ignoring the obvious difference in position and power. Second, and far more importantly, they don’t say what they should say which is HEY DR. RISKTAKER, KEEP IT IN YOUR PANTS AND GO BACK TO TEACHING.

Instead they all but give him permission to pursue the relationship, and give him a step-by-step guide for how to do it: call the sexual harassment officer to discuss the matter (right, like anyone’s going to do that) and then tell her you can no longer be her dissertation advisor because you’d rather sleep with her than advise her academically. I’m sure Geoff Marcy Randy Risktaker is grateful for the guidance.

This isn’t education. This is repulsive.

I get it, university policy does not preclude relationships between faculty and students, it just defines the conditions under which they can happen. But the purpose of training should be to PREVENT HARASSMENT, not to tell people how to comply with university policies.

Which gets to the heart of the matter. The university does not care about preventing harassment – it cares about covering its ass when harassment occurs. This training – the only real communication faculty get about the matter – is ALL about that. And this has to change. NOW.

All over Berkeley campus there are banners with various people – students, teachers, administrators – saying “It’s on me” to prevent sexual violence on campus and the rape culture that plagues universities everywhere.

Well, the behavior Marcy engaged in is sexual violence. And, as a senior member of the university faculty, it’s on me to demand that the university fix this problem immediately.

I am calling on Chancellor Dirks to completely revamp the training faculty and other supervisors receive on sexual harassment to focus primarily on the rampant unacceptable behavior that happens all the time, and to make it unambiguously clear that if faculty engage in this behavior they will face serious sanctions, including the loss of their position. This is what we owe to the brave women who confronted Marcy, and to all the people who we can protect from abuse if we act now.


The Mission Bay Manifesto on Science Publishing

Earlier this week I gave a seminar at UCSF. In addition to my usual scientific spiel, I decided to end my talk with a proposal to UCSF faculty for actions they could take to make scholarly communication better. This is something I used to do a lot, but have mostly stopped doing since my entreaties rarely produce tangible actions. But I thought this time might be different. I was optimistic that the recent attention given by prominent UCSF professor Ron Vale to the pervasive negative effects of our current publishing system might have made my UCSF faculty colleagues open to actually doing something to fix these problems.

So I decided to issue a kind of challenge to them: not just to take steps on their own, but to agree collectively to take them together. My motivation for this particular tactic is that when I ask individual scientists to do things differently, they almost always respond that they would love to, but can’t, because the current system requires that {they | their trainees | their collaborators} publish in {insert high profile journal here} in order to get {jobs | grants | tenure}. However, in theory at least, this reluctance to “unilaterally disarm” would go away if a large number of faculty, especially at a high-profile place like UCSF, agreed to take a series of steps together. I focused on faculty – tenured faculty in particular – because I agree that all too often publishing reform efforts focus on young scientists, who, while they tend to be more open to new things, are also in the riskiest positions with respect to jobs, etc…

My goal was to address in one fell swoop three different, but related issues:

  1. Access. Too many people who need or want access to the scientific and medical literature don’t have it, and this is ridiculous. Scientists have the power to change this immediately by posting everything they write online for free, and by working to ensure that nothing they produce ever ends up behind paywalls.
  2. Impact Factors. The use of journal titles and impact factors as surrogates for the quality of science and scientists. Virtually everyone admits that journal title is a poor indicator of scientific rigor, quality or importance, yet it is widely used to judge people in science.
  3. Peer-review. Our system of pre-publication peer-review is slow, intrusive, ineffective and extremely expensive.

And here is what I proposed (it’s named after the Mission Bay campus where I gave my talk):

The Mission Bay Manifesto

As scientists privileged to work at UCSF, we solemnly pledge to fix for future generations the current system of science communication and assessment, which does not serve the interests of science or the public, by committing to the following actions:

(1) We will make everything we write immediately and freely available as soon as it is finished, using “preprint” servers like bioRxiv or arXiv, or the equivalent.

(2) No paper we write, or data or tools we produce, will ever, for even one second, be placed behind a paywall where they are inaccessible to even one scientist, teacher, student, health care provider, patient or interested member of the public.

(3) We will never refer to journal titles when discussing our work in talks, on our CVs, in job or grant applications, or in any other context. We will provide only a title, a list of authors and a publicly available link for all of our papers on CVs and job and grant applications.

(4) We will evaluate the work of other scientists based exclusively on the quality of their work, not on where they have published it. We will never refer to journal titles or use journal titles as a proxy for quality when evaluating the work of other scientists in any context.

(5) We will abandon the slow, cumbersome and distorting practice of pre-publication peer review and exclusively engage in open post-publication peer review as authors and reviewers (e.g. as practiced by journals like F1000 Research, The Winnower and others, or review sites like PubPeer).

(6) We will join with our colleagues and collectively make our stance on these issues public, and will follow this pledge without fail so that our students, postdocs and other trainees who are still building their careers do not suffer while we work to fix a broken system we have created and allowed to fester.

I am positive that IF the faculty at UCSF agreed to all these steps, science publishing would change overnight – for the better. But, alas, while I’d love to say the response was enthusiastic, it was anything but. Some polite nodding, but more the kind you give to a crazy person talking to you on the bus than one of genuine agreement. People raised specific objections (#5 was the one they were least in favor of), but no one seemed willing to take even a marginal risk, or to inconvenience themselves, to fix the system. And if we can’t get leadership from tenured faculty at UCSF, is it any wonder that other people in less secure positions are unwilling to do anything? I went back to Berkeley disappointed and disheartened. And then yesterday I heard a scientist from a major university on the East Coast, whose work I really love, give a great seminar and talk over and over about Nature papers.

But my malaise was short-lived. Maybe I’m crazy, but, even if we haven’t figured it out yet, I know there’s a way to break through the apathy. So, I’ll do the only thing I can do – commit myself to following my own manifesto. And I ask as many of you as can see your way to joining me to do so publicly. If UCSF faculty don’t want to lead, we can instead.




A couple of weeks ago I unintentionally set off a bit of a firestorm regarding Wikipedia, Elsevier and open access. I was scanning my Twitter feed, as one does, and came upon a link to an Elsevier press release:

Elsevier access donations help Wikipedia editors improve science articles: With free access to ScienceDirect, top editors can ensure that science read by the public is accurate

I read the rest of it, and found that Elsevier and Wikipedia (through the Wikipedia Library Access Program) had struck a deal whereby 45 top (i.e. highly active) Wikipedia editors would get free access to Elsevier’s database of science papers – ScienceDirect – for a year, thereby “improving the encyclopedia and bringing the best quality information to the public.”

I have some substantive issues with this arrangement, as I will detail below. But what really stuck in my craw was the way that several members of the Wikipedia Library were used not just to highlight the benefits of the deal to Wikipedia and its users, but to serve as mouthpieces for misleading Elsevier PR, such as this:

Elsevier publishes some of the best science scholarship in the world, and our globally located volunteers often seek out that access but don’t have access to research libraries. Elsevier is helping us bridge that gap!

It was painful to hear people from Wikipedia suggesting that Elsevier is coming to the rescue of people who don’t have access to the scientific literature! In reality, Elsevier is one of the primary reasons they don’t have access, having fought open access tooth and nail for two decades and spent millions of dollars to lobby against almost any act anywhere that would improve public access to science. And yet here was Wikipedia – a group that IS one of the great heroes of the access revolution – publicly praising Elsevier for providing access to 0.0000006% of the world’s population (45 editors out of more than 7 billion people).

Furthermore, the whole idea that this is a “donation” is ridiculous. Elsevier is giving away something that costs them nothing to provide – they just have to create 45 accounts. It’s extremely unlikely that the Wikipedia editors in question were potential subscribers to Elsevier journals or that they would pay to access individual articles, so no revenue was lost. And in exchange for giving away nothing, Elsevier almost certainly increases the number of links from Wikipedia to their papers – something of significant value to them.

I was fairly astonished to see this, and, being somewhat short-tempered, I fired off a series of tweets.

These tweets struck a bit of a nerve, and the reaction, at least temporarily, seemed to pit #openaccess advocates against Wikipedians – as highlighted in a story by Glyn Moody. I in no way meant to do this. It would be hard to find two groups whose goals are more aligned.

So I want to reiterate something I said over and over as these tweets turned into a kind of mini-controversy. In saying I thought that making this deal with Elsevier was a bad idea, I was not in any way trying to criticize Wikipedia or the people who make it work. I love Wikipedia. As a kid who spent hours and hours reading an old encyclopedia my grandparents gave me, I think that Wikipedia is one of the greatest creations of the Internet Age. Its editors and contributors, as well as Jimmy Wales and the many others who made it a reality, are absolute, unvarnished heroes.

In no way do I question the commitment of Wikipedia to open access. I just think they made a mistake here, and I worry a bit about the impact this kind of deal will have on Wikipedia. But it is a concern born of true love for the institution.

So with that in mind, let me delve into this a bit more deeply.

First of all, I understand completely why Wikipedia makes this kind of deal. The mission of Wikimedia is to “empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally” [1]. But there is a major challenge to building an accurate and fully-referenced open encyclopedia: much of the source material they need to do this is either not online or is behind paywalls. It’s clear that Wikipedia sees opening source material as the long-term solution to this problem. But in the meantime they feel compelled to ensure that the people who build Wikipedia have a way around paywalls when they are doing so. It’s not all that conceptually different from a university library that works to provide access to paywalled sources to its scholars.

So the question to me isn’t whether Wikipedia should make any deals with publishers. The question is should they have made this deal with this publisher. And just like I have strongly disagreed with deals universities (including my own) routinely make to provide campus access to Elsevier journals, I do not think this deal is good for Wikipedia or the public.

Here are my concerns:

This deal will prolong the life of the paywalled business model

If the only effect of this deal was to provide editors with access, I would hold my nose and support Wikipedia’s efforts to work around the current insane scholarly publishing system. But I don’t think this is the only effect of the deal. In several ways this deal strengthens Elsevier’s subscription publishing business, and strengthening this business is clearly bad for Wikipedia and its mission.

How does it strengthen Elsevier’s business? First, it provides them with good PR – allowing them to pretend that they support openness, something that serves to at least partially blunt the increasingly bad PR their subscription journal publishing business has incurred in recent years. Second, it provides them with revenue. This deal will increase the number of links in Wikipedia to Elsevier papers, and links on Wikipedia are clearly of great value to Elsevier – they can monetize them in multiple ways: a) by advertising on the landing pages, b) by collecting one-time fees from people without accounts who want to view an article, and, most significantly, c) by increasing traffic to their journals from users with access, which they cite to justify increased payments from universities and other institutions.

Finally, and most significantly, the deal mitigates some of the direct negative consequences of publishing paywalled journals and publishing in paywalled journals. One of the consequences of papers appearing in paywalled journals is that they are less likely to be cited and otherwise used on the Internet and beyond. And, as open resources like Wikipedia grow and grow in importance, this will become more true. This is a potentially powerful force for driving people to publish in a more open way, and, if anything, supporters of openness should be working to amplify this effect. But this deal does the opposite – it significantly dilutes the negative impacts of publishing in Elsevier’s paywalled journals, and thereby almost certainly will help prolong the life of the paywalled journal business model.

I realize that not making this deal would weaken Wikipedia in the short-run. But I am certain it would strengthen it in the long-run by quickening the arrival of a truly open scientific literature, and I think we are all in this for the long-run.

Wikipedia got too little from Elsevier

Even if you accept that this kind of deal has to be made, I think it’s a bad deal. Elsevier got great PR, significant tangible financial benefits, and several clear intangible benefits. In exchange for this, they’ve given away almost nothing. To me this was a missed opportunity related to the framing of this as a “donation”. If you’re asking for a donation, you don’t make demands. But it seems like Wikipedia was in a good position to ask for something that would benefit its readers in a much bigger way, such as Elsevier letting everyone through their paywall when following links from Wikipedia.

I obviously can’t guarantee Elsevier would have agreed to this, and maybe Wikipedia tried to negotiate for more, but it does strike me that Wikipedia undervalued itself with this arrangement.

Will this affect how articles are linked from Wikipedia?

One of the many things I love about Wikipedia is that there is a clear bias in favor of sources that are available for free online to everyone. This is partly philosophical – people who put the most time into building Wikipedia are obviously true believers in openness and almost certainly are biased in favor of providing open sources whenever possible. But some of it is also practical. Almost by definition, if you can not access a source, you are unlikely to (and should not) cite it. You can see this effect clearly in academic scientists, who have only a weak bias towards citing open sources because they have access to most papers and don’t think about access when choosing what to cite. I don’t question the commitment of Wikipedians to openness. There are plenty of cases where people cite freely available versions of papers (e.g. preprints) instead of official paywalled versions. I just worry that easy access to paywalled papers will increase the number of times the paywalled version is cited in lieu of others (like free copies in PubMed Central). Obviously, there are ways to mitigate this – bots that check citations and add open ones, like the sketch below. But it warrants watching.
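To make the bot idea concrete, here is a minimal sketch (my own illustration, not any actual Wikipedia bot) of the core lookup such a citation-checking bot might perform, assuming the Unpaywall REST API as the index of free copies; the function name, example DOI and email address are placeholders:

```python
import requests

def find_open_version(doi: str, email: str) -> str | None:
    """Ask the Unpaywall API whether a freely readable copy of a paper exists.

    Returns a URL for an open version (e.g. a PubMed Central or preprint copy)
    if one is known, or None if the paper appears to be paywall-only.
    """
    resp = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": email},  # Unpaywall asks callers to identify themselves
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json()
    location = record.get("best_oa_location")
    if record.get("is_oa") and location:
        return location.get("url_for_pdf") or location.get("url")
    return None

if __name__ == "__main__":
    # Hypothetical usage: a real bot would loop over the DOIs cited in an
    # article and suggest an open link wherever one exists.
    url = find_open_version("10.1371/journal.pbio.0000057", "bot@example.org")
    print(url or "no open version found")
```

A production bot would of course need more – rate limiting, handling of citations without DOIs, and Wikipedia’s own editing etiquette – but the core check really is that simple.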

And I’m not in any way suggesting that people should systematically reject citing paywalled sources. Sometimes information is fungible – there are many sources that one could cite for a particular fact – but this is obviously not always the case. Clearly for Wikipedia to be successful in the current environment, it has to be based on, and cite, a lot of paywalled sources.

Science journal articles are not like books

Several people have made the comparison between book citations and journal articles. But there are crucial differences. First, there is a real, viable alternative to paywalled journals right now, and I would argue that it is in Wikipedia’s interest to support that alternative by not making things too easy for paywalled journals. Unfortunately, the same is not true for books, even academic ones. But even with the generally poor accessibility of books, I wonder if Wikipedians would support a deal with Amazon in which prolific editors got Kindles with free access to all Amazon e-books in exchange for providing links to Amazon when the books were cited (this was suggested by someone on Twitter but I can’t find the link)? I doubt it, yet to me this is almost exactly analogous to the Elsevier deal. In any case, the main point is that the situation with books is really bad, but that isn’t a good reason not to make the situation for journal articles better.

Wikipedia rocks

All that said, I hope this issue is behind us. It was painful to see myself being portrayed as a critic of Wikipedia. I am not. I could not love Wikipedia more than I do. I use it every day. It is one of the best advertisements for openness out there, and I can even see an argument that says that if deals with the devil make Wikipedia better, then this benefits openness far more than it hurts it. So let’s just leave it at that. I’ve enjoyed all the conversation about this issue, and I look forward to doing anything I can to make Wikipedia better and better in the future.


Thoughts on Ron Vale’s ‘Accelerating Scientific Publication in Biology’

Ron Vale has posted a really interesting piece on BioRxiv arguing for changes in scientific publishing. The piece is part data analysis, examining differences in publishing in several journals and among UCSF graduate students from 1980 to today, and part perspective, calling for the adoption of a culture of “pre-prints” in biology, and the expanded use of short-format research articles.

He starts with three observations:

  • Growth in the number of scientists has increased competition for spots in high-profile journals over time, and has led these journals to demand more and more “mature” stories from authors.
  • The increased importance of these journals in shaping careers leads authors to try to meet these demands.
  • The desire of authors to produce more mature stories has increased the time spent in graduate and postdoctoral training, and has diminished the efficacy of this training, while slowing the spread of new ideas and data.

He offers up some data to support these observations:

  • Biology papers published in Cell, Nature and JCB in 2014 had considerably more data (measured by counting the number of figure panels they have) than in 1984.
  • Over the same period, the average time to first publication for UCSF graduate students has increased from 4.7 years to 6.0 years, the number of first author papers they have has decreased, and the total time they spend in graduate school has increased.

And he concludes by offering some solutions:

  • Encourage scientists to publish all of their papers in pre-print servers.
  • Create a “key findings” form of publication that would allow for the publication of single pieces of data.

Vale has put his finger on an important problem. The process of publication has far too great an influence on the way we do science, not just on the way we communicate it. And it would be great if we all used preprint servers and strived to publish work faster and in a less mature form than we currently do. I am very, very supportive of Vale’s quest (indeed it has been mine for the past twenty years) – if it succeeds, the benefits to science and society would be immense.

However, in the spirit of the free and open discussion of ideas that Vale hopes to rekindle, I should say that I didn’t completely buy the specific arguments and conclusions of this paper.

My first issue is that the essay misdiagnoses the problem. Yes, it is bad that we require too much data in papers, and that this slows down the communication of science and the progress of people’s careers. But this is a symptom of something more fundamental – the wildly disproportionate value we place on the title of the journal in which papers are published rather than on the quality of the data or its ultimate impact.

If you fixed this deeper problem by eliminating journals entirely and moving to a system of post-publication review, it would remove the perverse incentives that produce the effects Vale describes. However, Vale proposes a far more modest solution – the use of pre-print servers. The odd thing with this proposal, as Vale admits, is that pre-print servers don’t actually solve the problem of needing a lot of data to get something published. It would be great for all sorts of reasons if every paper were made freely available online as early as possible – and I strongly support the push for the use of pre-print servers. But Vale’s proposal seems to assume that the existing journal hierarchy would remain in place, and that most papers would ultimately be published in a journal. And this wouldn’t fundamentally alter the set of incentives to journals and authors that has led to the problems Vale writes about. To do that you have to strip journals of the power to judge who is doing well in science – not just have them render that decision after articles are posted on a pre-print server. Unless the rules of the game are changed, with hiring, funding and promotion committees looking at quality instead of citations, universal adoption of pre-print servers will be harder to achieve, and will have a limited effect on the culture of publishing.

Indeed, I would argue that we don’t need “pre-print” servers. What we need is to treat the act of posting your paper online in some kind of centralized server as the primary act of publication. Then it can be reviewed for technical merit, interest and importance starting at the moment it is “published” and continuing for as long as people find the paper worth reading.

Giving people credit for the impact their work has over the long-term would encourage them to publish important data quickly, and to fill in the story over time, rather than wait for a single “mature” paper. Similarly, rather than somewhat artificially create a new type of paper to publish “key findings” I think people will naturally write the kind of paper Vale wants if we change the incentives around publication by destroying the whole notion of “high-impact publications” and the toxic glamour culture that surrounds it.

Another concern I have about Vale’s essay is that he bases his argument for pre-print servers on a set of data analyses that, while interesting, I didn’t find compelling. I think I get what Vale is doing. He wants to promote the use of pre-print servers, and realizes that there is a lot of resistance. So he is trying to provide data that will convince people that there are real problems in science publishing so that they will endorse his proposals. But by basing calls for change on data, there is a real risk that other people will also find the data less than compelling and will dismiss Vale’s proposed solutions as unnecessary as a result, when in fact the things Vale proposes would be just as valuable even if all the data trends he cites weren’t true.

So let’s delve into the data a bit. First, in an effort to test the widely held sentiment that the amount of data required for a paper has increased over time, he attempted to compare the amount of data contained in papers published in Cell, Nature and JCB during the first six months of 1984 and of 2014 (it’s not clear why he chose these three journals).

The first interesting observation is that the number of biology papers published in Nature has dropped slightly over thirty years, and the number of papers published in JCB has dropped by half (presumably as the result of increased competition from other journals). To quantify the amount of data a paper contained, Vale analyzed the figures in each of the papers. The total number of figures per paper was largely unchanged (a product, he argues, of journal policies), but the number of subpanels in each figure went up dramatically – two- to four-fold.

I am inclined to agree with him, but it is worth noting that there are several alternative explanations for these observations.

As Vale acknowledges, practices in data presentation could have changed, with things that used to be listed as “data not shown” now being presented in figures. I would add that maybe the increase in figure complexity reflects the fact that it is far easier to make complex figures now than it was in 1984. For example, when I did my graduate work in the early 1990s it was very difficult to make figures showing aspects of protein structure. Now it is simple. Authors may simply be more inclined to make relatively minor points in a figure panel now because it’s easier.

A glance at any of these journals will also tell you that the complexity of figures varies a lot from field to field. Developmental biologists, for example, seem to love figures with ten or twenty subpanels. Maybe Cell, Nature and JCB are simply publishing more papers from fields where authors are inclined to use more complex figures.

Finally, the real issue Vale is addressing is not exactly the amount of data included in a paper, but rather the amount of data that had to be collected to get to the point of publishing a paper. It’s possible that authors don’t actually spend more time collecting data, but that they used to leave more data “in the drawer”.

The real point is that it’s really hard to answer the question of whether papers now contain more data than they used to. And it’s even harder to determine whether the amount of data required to get a paper published is more or less of an obstacle now than it was thirty years ago.

I understand why Vale did this analysis. His push to reform science publishing is based on a hypothesis – that the amount of data required to publish a paper has increased over time – and, as a good scientist, he didn’t want to leave this hypothesis untested. However, I would argue that differences between 1984 and today are irrelevant. Making it easier to publish work, and giving people incentives to publish their ideas and data earlier, is simply a good idea – and would be equally good even if papers published in 1984 required more data than they do today.

Vale goes on to speculate about why papers today require more data, and chalks it up primarily to the increased size of the biomedical research community, which has increased competition for coveted slots in high-ranking journals while also increasing the desire for such publications, allowing journals to be even more selective and to put more demands on authors. (It’s really quite interesting that the number of papers in Cell, Nature and (I assume) Science has not increased in 30 years even as the community has grown.)

This certainly seems plausible, but I wonder if it’s really true. I wonder if, instead, the increased expectations of “mature” work have to do with the maturation of the fields in question. Nature has pretty broad coverage in biology (although its coverage is by no means uniform), but Cell and JCB both represent fields (molecular biology and cell biology) that were kind of in their infancies, or at least early adolescences, 30 years ago. And as fields mature, it seems quite natural for papers to include more data, and for journals to have higher expectations for what constitutes an important advance. You can see this happening over much shorter timeframes. Papers on the microbiome, for example, used to contain very little experimental data – often a few observations about the microbial diversity of some niche – but within just a few years, expectations for papers in the field have changed, with the papers getting far more data-dense. It would be interesting to repeat the kind of analysis Vale did, but to try to identify “new” fields (whatever that means), and see whether fields that were “new” in 2014 have papers of similar complexity to fields that were “new” in 1984.

The second bit of data Vale produced is on the relationship between publications and the amount of time spent in graduate school. Using data from UCSF’s graduate program, he found that current graduate students “published fewer first/second author papers and published much less frequently in the three most prestigious journals.” The average time to a first author paper for UCSF students in the 80s was 4.7 years; now it is 6.0. And the number of students with Science, Nature or Cell papers has fallen by half.

Again, one could pick this analysis apart a bit. Even if you accept the bogus notion that SNC publications are some kind of measure of quality, there are more graduate students both in the US and elsewhere, but the number of slots in those journals has remained steady. Even if criteria for publication were unchanged over time, one would have expected the number of SNC papers for UCSF graduate students to go down simply because of increased competition. If SNC papers are what these students aspire to (which is probably, sadly, largely true) then it makes sense that they would spend more time trying to make better papers that will get into these journals. It’s not clear to me that this requires that papers have more data, but rather that they have better data. But either way, one could look at this and argue that the problem isn’t that we need new ways of publishing, but rather that we need to stop encouraging students to put their papers into SNC. I suspect that all of the trends Vale measures here would be reversed if UCSF faculty encouraged all of their graduate students to publish all of their papers in PLOS ONE.

One could also argue that the trends reflect not a shift in publishing, but rather a degradation in the way we train graduate students. In my experience most graduate student papers reflect data that was collected in the year preceding publication. Maybe UCSF faculty, distracted perhaps by grant writing, aren’t getting students to the point where they do the important, incisive experiments that lead to publication until their fifth year, instead of their fourth.

And again, while the time to first publication has increased dramatically in the last 30 years, it’s hard to point to 1984 as some kind of Golden Age. That typical students back then weren’t publishing at all until the end of their fifth year in graduate school is still bad.

So, in conclusion, I think there is a lot to like in this essay. Without explicitly making this point, the observations, data and discussion Vale presents make a compelling case that publishing is having a negative impact on the way we do science and the way we train the next generation. I have some issues with the way he has framed the argument and with the conservativeness of his solutions. But I think Vale has made an important contribution to the now decades-old fight to reform science publishing, and we would all be better off if we heeded his advice.



Sympathy for the Devil?

My Facebook feed is awash with people standing up for Tim Hunt: “The witch hunt against Tim Hunt is unbearable and disgraceful”, “This is how stupidity turns into big damage. Bad bad bad”, “Regarding the Tim Hunt hysteria”, and so on. Each of these posts has prompted a debate between people who think a social media mob has unfairly brought a good man down, and people like me who think that the response has been both measured and appropriate.

I happened to meet Tim Hunt earlier this year at a meeting of young Indian investigators held in Kashmir. We both were invited as external “advisors” brought in to provide wisdom to scientists beginning their independent careers. While his “How to win a Nobel Prize” keynote had a bit more than the usual amount of narcissism, he was in every other way the warm, generous and affable person his defenders of the last week have said he is. I will confess I kind of liked the guy.

But it is not my personal brush with Hunt that has had me thinking about this meeting the past few days. Rather it is a session towards the end of the meeting held to allow women to discuss the challenges they have faced building their scientific careers in India. During this session (in which I was seated next to Hunt) several brave young women stood up in front of a room of senior Indian and international scientists and recounted the specific ways in which their careers have been held back because of their gender.

The stories they told were horrible, and it was clear from the reaction of women in the room that these were not isolated incidents. If any of the scientists in positions of power in the room (including Hunt) were not already aware of the harassment many women in science face, and of the myriad obstacles that can prevent them from achieving a high level of success, there is no way they could have emerged not understanding.

When I think about what happened here, I am not thinking about how Twitter hordes brought down a good man because he had a bad day. I am instead thinking about what it says to the women in that room in Kashmir that this leading man of science – who it was clear everybody at the meeting revered – had listened to their stories and absorbed nothing. It is unconscionable that, barely a month after listening to a woman moved to tears as she recounted a sexual assault by a senior colleague and how hard it was for her to regain her career, Hunt would choose to mock women in science as teary love interests.

Hunt’s words, and even more so his response to being called out for them, suggest that he does not understand the damage his words caused. I will take him at his word that he did not mean to cause harm. But the fact that he did not realize that those words would cause harm is worse even than the words themselves. That a person as smart as Hunt could go his entire career without realizing that a Nobel Prizewinner deriding women – even in a joking way – is bad just serves to show how far we have to go.

So, you’ll have to forgive me for recoiling when people ask me to measure my words based on the effect they will have on Hunt. I understand all too well the effects that criticism can have on people. But silence also has its consequences. And we see around us the consequences of decades of silence and inaction on sexism in science. If the price of standing up to that history is that Tim Hunt has to weather a few bad weeks, well so be it.


Elsevier admits they’re a major obstacle for women scientists in the developing world

I just received the following announcement from Elsevier:

Nominations opened today for the Elsevier Foundation Awards for Early-Career Women Scientists in the Developing World, a high-profile honor for scientific and career achievements by women from developing countries in five regions: Latin America and the Caribbean; the Arab region; Sub-Saharan Africa; Central and South Asia; and East and South-East Asia and the Pacific. In 2016 the awards will be in the biological sciences, covering agriculture, biology, and medicine. Nominations will be accepted through September 1, 2015.

Sounds great. But look at what the winners get.

The five winners will each receive a cash prize of US$5,000 and all-expenses paid attendance at the AAAS meeting. The winners will also receive one-year access to Elsevier’s ScienceDirect and Scopus.

Could there be a more obvious admission that Elsevier’s own policies – indeed their very existence – are a major obstacle to the progress of women scientists in the developing world? How can anyone write this and not have their head explode?


Pachter’s P-value Prize’s Post-Publication Peer-review Paradigm

Several weeks ago my Berkeley colleague Lior Pachter posted a challenge on his blog offering a prize for computing a p-value for a claim made in a 2004 Nature paper. While cheeky in its formulation, the challenge had an important point – Pachter believed that a claim from this paper was based on faulty reasoning, and the p-value prize was a way of highlighting its deficiencies.

Although you might not expect the statistics behind a largely-forgotten claim from an 11-year-old paper to attract significant attention, Pachter’s post has set off a remarkable discussion, with some 130 comments as of this writing, making it an incredibly interesting experiment in post-publication peer review. If you have time, you should read the post and the comments. They are many things, but above all they are educational – I learned more about how to analyze this kind of data, and about how people think about this kind of data, there than I have anywhere else.

And, as someone who believes that all peer review should be done post-publication, I think there’s also a lot we can learn from what’s happening on Pachter’s blog.

Pre- vs. Post-Publication Peer Review

I would love to see the original reviews of this paper from Nature (maybe Manolis or Eric can post them), but it’s pretty clear that the 2 or 3 people who reviewed the paper either didn’t scrutinize the claim that is the subject of Pachter’s post, or they failed to recognize its flaws. In either case, the fact that such a claim got published in such a supposedly high-quality journal highlights one of the biggest lies in contemporary science: that pre-publication peer review serves to defend us from the publication of bad data, poor reasoning and incorrect statements.

After all, it’s not like this is an isolated example. One of the reasons that this post generated so much activity was that it touched a raw nerve among people in the evolutionary biology community who see this kind of thing – poor reasoning leading to exaggerated or incorrect claims – routinely in the scientific literature, including (or especially) in the journals that supposedly represent the best of the best in contemporary science (Science, for example, has had a string of high-profile papers in recent years that turned out to be completely bogus – cf. arsenic DNA).

When discussing these failures, it’s common to blame the reviewers and editors. But these failures are far less the fault of the people involved than they are an intrinsic problem with pre-publication review. Pre-publication review is carried out under severe time pressure by whomever the editors managed to get to agree to review the paper – and these are rarely the people who are most interested in the paper or the most qualified to review it. Furthermore, journals like Nature, while surely interested in the accuracy of the science they publish, also ask reviewers to assess its significance, something that at best distracts from assessing the rigor of a work, and often is in conflict with it. Most reviewers take their job very seriously, but it is simply impossible for 2 or 3 somewhat randomly chosen people who read a paper at a fixed point in time and think about it for a few hours to identify and correct all of its flaws.

However – and this is the crux of the matter for me – despite the fact that pre-publication peer review simply cannot live up to the task it is assigned, we pretend that it does. We not only promulgate the lie to the press and public that “peer reviewed” means “accurate and reliable”, we act like it is true ourselves. Despite the fact that an important claim in this paper is – as the discussion on the blog has pointed out – clearly wrong, there is no effective way to make this known to readers of the paper, who are unlikely to stumble across Pachter’s blog while reading Nature (although I posted a link to the discussion on PubMed Commons, which people will see if they find the paper when searching in PubMed). Worse, even though the analyses presented on the blog call into question one of the headline claims that got the paper into Nature in the first place, the paper will remain a Nature paper forever – its significance on the authors’ CVs unaffected by this reanalysis.

Imagine if there had been a more robust system for, and tradition of, post-publication peer review at the time this paper was published. Many people (including one of my graduate students) saw the flaws in this analysis immediately, and sent comments to Nature – the only visible form of post-publication review at the time. But those comments weren’t published, and concerns about this analysis would not resurface for over a decade.

The comments on the blog are not trivial to digest. There are many threads, and the comments range from the thorough and insightful to the jejune and puerile. But if you read even part of the thread, you come away with a far deeper understanding of the paper – what it found, and which aspects of it are right and wrong – than you get from the paper itself. THIS is what peer review should look like – people who have chosen to read a paper spending time not only to record their impressions once, but to discuss it with a collection of equally interested colleagues to try and arrive at a better understanding of the truth.

The system is far from perfect, but from now on, anytime someone asks me what I mean by post-publication peer review, I’ll point them to Lior’s blog.

One important question is why this doesn’t happen more often. A lot of people had clearly formed strong opinions about the Lander and Kellis paper long before Lior’s post went up. But they hadn’t shared them. Does someone have to write a pointed blog post every time they want to inspire the community to reexamine a paper’s results?

The problem is, obviously, that we simply don’t have a culture of doing this kind of thing. We all read papers all the time, but rarely share our thoughts with anyone outside of our immediate scientific world. Part of this is technological – there really isn’t a simple system tied to the literature on which we can all post comments on papers we have read, with the hope that someone else will see them. PubMed Commons is trying to do this, but not everyone has access. And beyond that, the existing systems are just not that good yet. But this will change. The bigger challenge is getting people to use them once good technology for post-publication peer review exists.

Developing a culture of post-publication peer review

The biggest challenge is that this kind of reanalysis of published work just isn’t done – there simply is not a culture of post-publication peer review. We lack any incentives to push people to review papers when they read them and have opinions that they feel are worth sharing. Indeed, we have a variety of counterincentives. A lot of people ask me if Lior is nuts for criticizing other people’s work so publicly. To many scientists this “just isn’t done”. But the question we should be asking is not “Why does Lior do this?” but rather “Why don’t we all?”.

When we read a paper and recognize something bad or good about it, we should see it as our duty to share that assessment with our colleagues. This is what science is all about. Oddly, we feel responsible enough for the integrity of the scientific literature that we are willing to review papers that often do not interest us and which we would not otherwise have read, yet we don’t feel the same way about the more important process of thinking about papers after they are published. Somehow we have to transfer this sense of responsibility from pre- to post-publication review.

An important aspect of this is credit. A good review is a creative intellectual work and should be treated as such. If people got some kind of credit for post-publication reviews, more people would be inclined to do them. There are lots of ideas out there for how to create currencies for commentary, but I don’t really think this is something that can be easily engineered – it’s going to have to evolve organically as (I hope) more people engage in this kind of commentary. But it is worth noting that Lior has, arguably, achieved more notice for his blog, which is primarily a series of post-publication reviews, than he has for his science. Obviously this is not immediately convertible into classical academic credit, but establishing a widespread reputation for the specific kind of intellectualism manifested on his blog cannot help but bolster Lior’s academic standing. I hope that his blog inspires people to do the same.

Of course not everybody is a fan of Lior’s blog. Several people whom I deeply respect have complained that his posts are too personal, and that they inspire a kind of mob mentality in the comments, in which the scientists whose work he writes about become targets. I don’t agree with the first concern, but do think there’s something to the second.

So long as we personalize our scientific achievements, attacks on them are going to feel personal. I know that every time I receive a negative review of a paper or grant, I feel like it is a personal attack. Of course I know that this generally isn’t true, and I subscribe to the belief that the greatest respect you can show another scientist is to tell them when you think they’ve made a mistake or done something stupid. But, nonetheless, negative feedback still feels personal. And it inspires in most of us an instinctive desire to defend our work – and therefore ourselves – from these “attacks”. I think the reason people feel like Lior’s posts are attacks is that they put themselves into the shoes of the authors he is criticizing and feel attacked themselves. But I think this is something we have to get over as scientists. If the critique is wrong, then by all means we should defend ourselves; but conversely, we should be able to admit when we were wrong, have a good discussion about what to do next, and move on, all the wiser for it.

However, as much as I would like us all to be thick-skinned scholars able to take it as well as dish it out, the reality is that this is not the case. Even when the comments are civil, I can see how having a few dozen people shredding your work publicly could make even the most thick-skinned scientist feel like shit. And if the authors of the paper had not been famous, tenured scientists at MIT, the fear of negative ramifications from such a discussion could be terrifying. I don’t think this concern should lead to people feeling reluctant to jump into scientific discussions – even when they are critical of a particular work – but I do think we should exercise extreme care in how we say things. And rule #1 has to be to restrict comments to the science and not the authors. In this regard, I was probably one of the worst offenders in this case – jumping from a criticism of the analysis to a criticism of the authors’ response to the critique. I know them both personally and felt they would know my comments were in the spirit of advancing the conversation, but that’s not a good excuse. I will be very careful not to do that in the future under any circumstances.
