If you ask scientists to list words they are most afraid to hear associated with their work, I suspect “retraction” would rank high on the list. Retraction is a kind of death sentence, applied only when papers contain serious methodological errors or were tainted by fraud.
So the recent retraction of a PLoS Pathogens paper linking the virus XMRV to prostate cancer, following a new PLoS ONE paper that demonstrated that the original results were due to contamination, caught many (including the authors of the original paper, many of whom were involved in the followup study) off guard. Martin Enserink at ScienceNOW and Retraction Watch have excellent posts with details on the story.
Before offering my thoughts on this, I want to state at the outset that I have more than a passing interest in the story. I was one of the co-founders of PLoS, am a member of its Board of Directors, and continue to play an active role in its activities. I also worked closely with the senior author on the original paper – Joe DeRisi – for three years while we were in Pat Brown’s lab at Stanford, and he remains a good friend. He is not only one of the most creative people I know, he is one of the best, and most careful, experimentalists I have ever met.
Putting aside the question of retraction for a moment, this is exactly how science is supposed to work. Several very good scientists found an intriguing and potentially important result and published a paper on it. Subsequent efforts failed to confirm their initial result. Rather than digging in their heels and defending their initial study – as many scientists do – the original authors accepted the newer results, and went to great lengths to figure out what had gone wrong. Their new paper is a model of detective work, and a cautionary tale about the challenges of working with clinical samples and viruses that everyone should read.
So it is now pretty clear that the major conclusion of the original paper – the association between XMRV and prostate cancer – is wrong. Obviously, people working in the field and anyone interested in prostate cancer or chronic fatigue syndrome (the subject of a subsequent paper) who come upon the 2006 PLoS Pathogens paper need to know that subsequent studies have shown that the samples were contaminated and the conclusions are no longer accepted by the authors. The question is how to do this.
Unfortunately, in the current world of scientific publishing, there aren’t a lot of ways to do this, and the editors at PLoS Pathogens chose to retract the paper. This retraction was accompanied by an editorial from PLoS Pathogens editor Kasturi Haldar and PLoS Medicine editor Ginny Barbour on the role of retractions in correcting the literature. I don’t agree with the decision to retract this paper, but it is worth understanding their logic:
There is much misunderstanding about retractions. Authors and editors have been notoriously unwilling to use them, for the perceived shame that they bring upon authors, editors, and journals. Journalists regularly note the fact that retractions are increasing and ask whether the scientific literature is thus becoming less reliable. Websites such as Retraction Watch list and dissect retractions – an extra exposure at what is already a difficult time for authors and editors. In addition there is much confusion about how to effect retractions practically. In an effort to bring some clarity to this issue in 2009 the Committee on Publication Ethics of which PLOS Pathogens is a member and one of us (VB) is currently Chair, issued guidelines on retractions, which explicitly state that retractions are appropriate when findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error.
In essence, they are trying to expand the definition of retraction away from its common usage as a way to indicate misconduct to include all cases in which the findings of a paper should now be judged unreliable. They go on to explain how they will wield this redefined tool in the future:
We firmly believe that acceleration also requires being open about correcting the literature as needed so that research can be built on a solid foundation. Hence as editors and as a publisher we encourage the publication of studies that replicate or refute work we have previously published. We work with authors (through communication with the corresponding author) to publish corrections if we find parts of articles to be inaccurate. If a paper’s major conclusions are shown to be wrong we will retract the paper. By doing so, and by being open about our motives, we hope to clarify once and for all that there is no shame in correcting the literature. Despite the best of efforts, errors occur and their timely and effective remedy should be considered the mark of responsible authors, editors and publishers.
No matter what Haldar and Barbour want, they cannot erase the stigma of retraction by fiat. When a word means something in the community, it doesn’t matter what a dictionary or some unknown committee says. Retractions are viewed by scientists and the public as marks of shame. Imagine how the students and postdocs who carried out the work described in the 2006 paper must feel. They did nothing wrong. Indeed several participated in the effort to figure out what went wrong – going above and beyond what most people would have done. And the reward for their effort is to have “RETRACTED” show up every time someone searches for them on PubMed? This is not the right solution.
I understand the instinct to want a way to correct the literature, especially in cases like this that have attracted a lot of public attention. But isn’t science ultimately all about correcting the literature? Looking back at previous work and finding things that could have been done better, and even things that are outright wrong, is not a singular act – it is a large part of what we do. If you look back at the literature from five years, ten years or longer ago, you will find myriad papers that, given what we know now, have findings that are unreliable and conclusions that are now clearly wrong. Are we going to go back and retract all of these papers? Of course not. That would be insane.
As easy as it might be to dismiss this incident as an isolated example of editorial overreach, this is really just the latest manifestation of a broader problem that plagues scientific publication and poisons the scientific process: the reification of the citation. Going back and correcting published papers only makes sense if you view the scientific literature as an isolated collection of discrete, singular events – publications – commemorated with a sacred mark – the citation. If papers are supposed to stand forever as vessels of truth, then of course you have to purge those that are shown to be wrong – both to protect people from untruths, and to defend the sanctity of the citation.
Researchers dread retractions for the same reason they will sell their souls to publish in high impact journals – because the currency of academic success is not achievement – it is citations. Sure, the two are not unlinked. But where they come into conflict, citations almost always win. A Nature paper is a Nature paper forever – even if the results turn out to be insignificant, or, as is often the case, outright wrong. The only thing that can change that is a retraction.
Thus, in some ways, the proposal by Haldar and Barbour is not reactionary, as many have suggested – it is deeply subversive. By exposing all citations – not just those achieved dishonestly – to the threat of retraction it strips the citation of one of its most valuable properties – permanence. But despite my love for all things subversive, I do not think this is the right solution, as it ultimately reinforces the idea of the scientific literature as a collection of discrete events.
An obvious solution to all of these problems follows from thinking about the literature as what it really is: a historical record of ideas, discoveries and, yes, mistakes – whose value comes not from static individual pieces, but from the ways in which they are connected and change over time. It is often said that science is “self-correcting”, recognizing that our views of the value and validity of previously published work inevitably change over time as we use, build on and expand upon the work of our colleagues – something perfectly demonstrated by the XMRV story. What we need to do is not to isolate and protect ourselves from the dynamic nature of science, but to embrace it.
It’s disheartening that, in this day of electronic publications and databases, the editors felt the only way they could ensure that people reading the 2006 XMRV paper would view it in the context of newer findings was to retract the paper. If we had a way of capturing how new methods, data and ideas were changing our view of earlier work, they would not have needed to even consider something as dire or as clumsy as a retraction. And there is no reason we can’t do this – we have the technical means to switch from one-time assessments of a paper to a system of ongoing evaluation and reevaluation whose output changes as our understanding grows. The only thing stopping us is the continued reification of the citation in science, and our unwillingness to discard it.
UPDATE: I want to emphasize that my goal here was not to take the editors to task. I don’t completely support what they did, but they were trying to deal with a real, immediate problem – people acting on conclusions from a paper whose results nobody now believes to be true. What I was primarily lamenting was the fact that our system does not provide them with any tool other than retraction.