I have watched the escalating hysteria about recent NIH-funded experiments to humanize H5N1 influenza (aka bird flu) with a mixture of amusement, horror and confusion. The amusement is born of the predictable hysteria that always infuses media coverage of research on lethal viruses. The horror comes from the largely unchallenged move to censor publication of the experiments. The confusion comes from my own somewhat mixed feelings about the value of the experiments and their public health threat. I should add that this is an issue I know a lot about, as I did my graduate work on the evolution of human influenza viruses.
To quickly summarize the issues at hand, we all remember the brief panic that ensued a few years back when a largish number of people in Asia died from H5N1 infections, leading to runs on face masks and stocks of ineffective anti-influenza drugs. H5N1’s moment in the sun passed once it was realized that, while the virus was very lethal, it had an extremely low rate of human-to-human transmission, meaning that a global epidemic was not imminent.
Nonetheless, the high rate of mortality in otherwise healthy individuals brought back fears of the 1918 H1N1 pandemic, which is estimated to have killed as many as 40 million people worldwide (including my great grandfather). And, while the currently dominant H5N1 is a crappy human virus, we all know that flu evolves very rapidly, and thus the prospect of a highly transmissible H5N1 emerging in the future warrants serious concern.
With this backdrop, researchers at the University of Wisconsin and at the Erasmus Medical Center in the Netherlands independently carried out experiments in which they selected for variants of H5N1 that could be transmitted through the air from ferret to ferret (ferrets being the model species of choice for studying influenza in humans). In the work at Erasmus, carried out by Ron Fouchier and colleagues, a total of five changes in two of the virus’s eight RNA segments were sufficient to create what appears, from its laboratory properties at least, to be a potentially devastating human pathogen.
Like any scientist, Fouchier prepared a manuscript describing what were sure to be high-profile results, and submitted it to the journal Science. The Wisconsin group (about whose results we know a lot less) submitted their paper to Nature. While under review, both papers were apparently forwarded to the US National Science Advisory Board for Biosecurity, a panel that develops “policies addressing life sciences research that yield information or technologies with the potential to be misused to threaten public health or national security”.
Earlier this week the NSABB issued its findings:
Following its review, the NSABB decided to recommend that HHS ask the authors of the reports and the editors of the journals that were considering publishing the reports to make changes in the manuscripts. Due to the importance of the findings to the public health and research communities, the NSABB recommended that the general conclusions highlighting the novel outcome be published, but that the manuscripts not include the methodological and other details that could enable replication of the experiments by those who would seek to do harm.
Although the recommendations are technically non-binding, Science has agreed to implement them with regard to the Fouchier paper.
My first instinct was that this represents a typical overreaction to scientific research on a dangerous pathogen, and that the small marginal reduction in the risk of misuse this censorship would accomplish does not justify the potentially stifling impact the spectre of censorship would have on future research on human pathogens and on any potential “dual use” technology.
After thinking about this for a while, my overall views are unchanged. But I will admit that the issues here are more complex than I would like them to be.
Issue 1: Is the risk real? Although it is impossible to know how this virus would affect humans, its behavior in ferrets establishes a non-trivial possibility that the evolved Rotterdam virus could cause a lethal global pandemic. Assuming the risk is real, it remains unclear whether the bigger danger comes from the physical virus Fouchier created escaping or being intentionally released from the lab, or from someone with bad intentions recreating the virus from published information and releasing it.
It’s of course impossible to know for sure, but I am much more afraid of the actual viruses sitting in Rotterdam. It is widely believed that the H1N1 viruses that have been circulating since 1977 were accidentally released from a Russian biological warfare research station. If the virus is as bad as is feared, a single fuckup in its handling could have disastrous consequences. And if I were a terrorist hellbent on obtaining the virus, stealing it would seem like a far easier course than recreating it from a Science paper.
I’m also skeptical that the terrorists we fear getting their hands on this virus would think of flu as their biological weapon of choice, because, once released, it would be virtually impossible to contain its spread across the globe. And it’s likely that people in the developed countries they would target would be the least adversely affected, as those countries are best prepared to treat people who are infected and to develop and distribute vaccines and anti-viral medications.
Furthermore, if someone wanted to develop such a virus as a weapon, and had the skills to actually do it, they would also presumably have enough knowledge to have independently concluded that they could create their own lethal virus by passaging it through a secondary host like pigs or ferrets. This is hardly a secret – it’s the most widely accepted model for the natural movement of viruses from birds to man. And even if they hadn’t thought of it, enough’s been said in the press already to allow them to recreate these experiments without reading about them in a journal.
But, all that aside, I acknowledge that publishing details of these experiments entails some degree of risk. Which brings me to:
Issue 2: Is the knowledge produced by these experiments valuable? Here’s where things get complicated for me, because truth be told, I think these were really stupid experiments that have little practical value. The ostensible reason for carrying out and publishing these experiments is that they tell us important things about what a human transmissible H5N1 virus will look like, allowing us to better detect and prepare for a future pandemic.
I think this is very wrongheaded, and exhibits an almost willful ignorance of the ways that viruses in general, and flu in particular, evolve. RNA viruses like flu have very high mutation rates, and sample an astonishing diversity of variant sequences even in the course of infecting a single individual. The best demonstration of this is the rapidity with which drug-resistant strains emerge whenever any of the available anti-influenza drugs are used. It is because of the rapid emergence of resistance that use of these drugs is largely restricted to managing outbreaks in places with highly susceptible individuals, like nursing homes.
Sequence analysis in the last few years suggests that the virus can readily sample many alternative paths to avoid immune surveillance (the chief selective pressure it experiences during any outbreak). And it is highly likely that there are many paths the virus could follow to become transmissible in humans. Thus the fact that Fouchier has identified one such path likely tells us very little about what a naturally occurring human H5N1 virus might look like. Indeed, the ease with which multiple groups selected strains with the right properties to spread among humans strongly suggests that there are multiple paths to achieve this end. If there were only one way to create such a virus – as would be required to give this work great practical significance – it is highly unlikely that it would have emerged in a small ferret colony, even with the virus’s high mutation rate. So I think Fouchier’s own results suggest that there are many ways in which H5N1 could acquire the ability to infect the human population. That he found one of them is interesting, but not all that useful.
This is not to say that I think the experiment is completely flawed. One of the guiding principles of research should be doing experiments that could potentially yield surprising results – and this could have been such an experiment (it may actually have been – I can’t tell without reading the papers). But I am sympathetic to the notion that, given the risks of the experiments themselves, if not the data they produce, perhaps these experiments just should not have been done. There are lots of experiments to be done, and I don’t think it’s the worst thing in the world to weigh the risks in choosing which ones to pursue. That said, I am unwilling to say what many of my colleagues have argued – that these are unambiguously worthless experiments whose pursuit was completely reckless.
And, in any case, it doesn’t matter, since these experiments were already done. That, we cannot take back. So we return to the question of whether the results should be published. The more I think about this, the less I think this particular experiment and virus is really the issue. If we ignore the larger issues, I conclude that the risks from publishing the full details of the experiment are small, but so are the gains. And I’m not downplaying the risks of the virus here, just the marginal risks of publishing a paper describing it. I simply think it’s very unlikely that in this case the information in a paper would be the key missing ingredient enabling the deployment of an H5N1 weapon. But I’m also not going to argue that our ability to fight off a future H5N1 pandemic would be hopelessly crippled by suppressing these results.
But this issue is not just about this paper – indeed my feeling that publication is not really all that significant makes the larger issues involved more significant. The spectre of censorship is a huge risk to science – it has the real potential to discourage people from working on dangerous pathogens – precisely the kinds of things we want people TO work on. And it can similarly discourage people from pursuing risky lines of research if they fear they may take them to censorable subjects.
Furthermore, if there is really something to fear in science, it is not experiments like the ones Fouchier carried out. Rather, it is basic enabling technologies that are the most dangerous. Terrorists make massive use of the internet, for example. We probably have a lot more to fear from terrorists hanging out on Twitter than we do from those scouring the methods section of Science (if they can even find the relevant information in the 223 pages of supplementary material). Should the development of the internet have been stopped if this future threat had been foreseen? And, if details of Fouchier’s experiments were used by terrorist molecular biologists to remake his virus, they would almost certainly make use of DNA synthesis technology. Should we have banned its publication and dissemination? I don’t think these are idle questions, because the logic that led to censoring the Fouchier paper is the same.
But the thing that really, really annoys me about this whole debate is the disproportionate attention paid to mitigating the risks of these experiments compared to the far greater risks that surround us. It seems insane for a government to spend so much time wringing its hands about publishing the results of a few potentially dangerous experiments, when it does things every day that entail a far, far greater risk to its people’s health and well-being. For example, we continue to ship massive amounts of arms to sketchy “allies” across the globe, many of which are destined to end up in the hands of terrorists, who would have a far easier time using them against us than they would any H5N1 virus. And we have done little to address the sorry state of our public health infrastructure – something that is an indispensable part of our response to major pathogen outbreaks, whether of natural origin or otherwise. And let’s not even talk about our stubborn refusal to deal with global warming…
I’m not saying that the fact we do nothing about ongoing major problems means we should ignore smaller or less likely ones. But I remain uneasy that the quick censorship trigger being pulled here with the easy acquiescence of most of the scientific community augurs future restrictions on science that will do real harm to one of the few things with the potential to protect us from deadly viruses and the other real and imagined perils of our future.