FDA vs. 23andMe: How do we want genetic testing to be regulated?

Yesterday the US Food and Drug Administration sent a letter to the human genetic testing company 23andMe giving them 15 days to respond to a series of concerns about their products and the way they are marketed or risk regulatory intervention. This action has set off a lot of commentary/debate about the current and future value of personal genomics, whether these tests should be available direct to consumers or require the participation of a doctor, and what role the government should play in regulating them.

I am a member of the Scientific Advisory Board for 23andme, but I am writing here in my individual capacity as a geneticist who wants to see human genetic data used widely but wisely (although I obviously have an interest in the success of 23andme as a company – so I can not claim to be unbiased).

I see a wide range of opinions from my friends on this matter – ranging from “F**k the FDA – who are they to tell me what I can and can not learn about my DNA” to “Personalized genomics is snake oil and it’s great that the FDA is stepping in to regulate it”. I fall somewhere in the middle – I think there is great promise in personalized genetics, but at the moment it is largely unrealized. Looking at your own DNA is really interesting, but it only rarely provides actionable new information. I don’t think the FDA should restrict consumer access to their genotype or DNA sequence, but I do think the government has an important role to play in ensuring that consumers get accurate information and that the data are not oversold in the name of selling products.

As people try to decide what kinds of tests and information should be available and how the government should regulate them, I think it’s useful to ask a series of questions.

1) Should a person be able to have their DNA sequenced and get the data?

Putting aside any questions about how useful this information is right now and how it is marketed, do you think companies should be able to offer a service where consumers send in a spit or blood sample and a few hundred dollars and get their genome sequenced in return? (23andme currently provides SNP genotyping, not whole genome sequencing, but we’re very close to the point where human genome sequencing is cheap and reliable enough to make this possible.)

I think the answer is obviously yes. I can’t see any good argument for why we should prevent people who want to from obtaining their own DNA sequence.

Which leads to:

2) Should a person who has had their genome sequenced be able to access scientific literature relevant to their genome? 

Again, putting aside questions about the accuracy or utility of this information, there is a lot of published scientific literature that is potentially relevant to people with a particular genotype (including genome-wide association studies as well as a lot of classical human genetic literature and other functional studies). Assuming someone has their own genome sequence, it would be hard to argue they shouldn’t have access to information that would allow them to understand what their genome means.

Which leads to:

3) Is there a role for third parties in helping people interpret their genome sequence? 

The problem with the previous question is that it would be next to impossible for someone to actually interpret their genome simply by perusing the scientific literature (and I’m not even going to get started on the fact that much of this literature is behind paywalls). Even trained human geneticists wouldn’t do that. They’d go to some website – OMIM, DECIPHER, etc… – and use various automated tools to extract what is known about their genotype.

But few people have the technical savvy to be able to analyze their own genome in this way. So, assuming there is interest, there is a great niche for third parties to step in and provide services to people to help them interpret their own DNA. Is this a bad thing? Again, I don’t see how it is – assuming that these third parties provide accurate information (more on this below).

Should this third party be a doctor, as some (mostly doctors) are arguing? There are certainly doctors out there who have a great grasp of human genetics. But there aren’t a lot of them. And even the doctors who do know the world of human genetics inside and out aren’t in a position to help people navigate every nook and cranny of their genome. This is a job for software, not for people.
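To make concrete what that software layer amounts to, here is a minimal sketch of automated genotype annotation. Every variant, allele and association below is hypothetical, and this is not 23andme’s (or anyone’s) actual pipeline; the point is simply that matching a person’s genotypes against a curated table of published associations is a straightforward lookup.

```python
# Minimal sketch of automated genotype annotation. All variants, alleles and
# associations below are hypothetical, and this is not any company's actual
# pipeline. The core task is a lookup: match a person's genotypes against a
# curated table of published genotype-phenotype associations.

# A curated table of published associations (hypothetical entries).
ASSOCIATIONS = [
    {"rsid": "rs0001", "risk_allele": "T", "phenotype": "trait A", "odds_ratio": 1.3},
    {"rsid": "rs0002", "risk_allele": "G", "phenotype": "trait B", "odds_ratio": 0.8},
    {"rsid": "rs0003", "risk_allele": "C", "phenotype": "trait C", "odds_ratio": 1.1},
]

# A person's genotype calls, e.g. parsed from a SNP-chip export (hypothetical).
GENOTYPES = {"rs0001": "CT", "rs0002": "AA", "rs0003": "CC"}

def annotate(genotypes, associations):
    """Yield associations for which the person carries at least one risk allele."""
    for assoc in associations:
        genotype = genotypes.get(assoc["rsid"])
        if genotype and assoc["risk_allele"] in genotype:
            yield assoc["rsid"], genotype, assoc["phenotype"], assoc["odds_ratio"]

for rsid, genotype, phenotype, odds_ratio in annotate(GENOTYPES, ASSOCIATIONS):
    print(f"{rsid} ({genotype}): reported association with {phenotype}, OR={odds_ratio}")
```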

If you accept points 1, 2 and 3 above – which to me seem inarguable – then you accept the right of companies like 23andme to exist. You might not think they provide a valuable service. You might not think they do a good job at providing these services. But you can’t argue – as many are now doing – that direct-to-consumer genetic testing companies should be shut down.

Should direct-to-consumer genetic testing companies be regulated? 

I think this is also a no-brainer. Obviously they should be regulated – and fairly tightly so in my opinion. Few consumers have the capacity to judge on their own whether the genetic testing services provided by a company are accurate and whether interpretive information provided by third parties is valid. It is vital that the FDA protect consumers in two ways: 1) by validating the tests and the companies that provide them, and 2) by monitoring genetic information that is provided to consumers – especially if it is being used to market tests or other products. The former seems relatively easy – validating genotyping and sequencing is well-trodden turf. The latter is a bit more complicated.

If genetics were simple and our understanding of it were complete, companies could provide accurate reports that say “based on your genotype, your age and personal history, you have a 7.42% chance of developing ovarian cancer in the next 10 years”. However, we are far, far, far away from this. We have an incomplete catalog of human genetic variation; known genetic variation can explain only a small fraction of the heritable component of most phenotypes of interest; we have a poor understanding of how different genetic variants interact to affect disease risk or other phenotypes; and we have essentially no capacity to incorporate environmental effects into predictive models. In many cases current, incomplete, data may point to someone having an elevated risk of some disease, when they really have a lower than average risk. And, to top it all off, there are very few cases where knowing your risk status or other phenotype points to genotype-specific actions (with the BRCA status referred to in the FDA letter a notable exception).
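For illustration, here is a minimal sketch of the kind of naive log-additive risk model such a report might be built on. Every number and variant in it is made up; real risk also depends on unmeasured variants, interactions and environment, which is exactly why estimates like this are fuzzy.

```python
# A minimal sketch of a naive multiplicative (log-additive) risk model of the
# kind a consumer report might use: start from a population baseline risk,
# scale the odds by the odds ratio for each risk allele carried, and convert
# back to a probability. All numbers here are hypothetical.

def adjusted_risk(baseline_risk, carried_odds_ratios):
    """Convert baseline risk to odds, scale by each variant's odds ratio, convert back."""
    odds = baseline_risk / (1.0 - baseline_risk)
    for odds_ratio in carried_odds_ratios:
        odds *= odds_ratio
    return odds / (1.0 + odds)

# Hypothetical example: 1.4% ten-year baseline risk, three genotyped risk variants.
print(f"{adjusted_risk(0.014, [1.3, 1.1, 0.9]):.3%}")
```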

The data are, at this point in time, very very messy. I don’t think anyone disagrees with that. The question is what to do about that. On the one side you have people who argue that the data are so messy, of so little practical value, and so prone to misinterpretation by a population poorly trained in modern genetics that we should not allow the information to be disseminated. I am not in this camp. But I do think we have to figure out a way for companies that provide this kind of information to be effectively regulated. The challenge is to come up with a regulatory framework that recognizes the fact that this information is – at least for now – intrinsically fuzzy.

The FDA wants to classify genetic tests like those offered by 23andme as medical devices, and to apply the appropriately strict criteria used for medical devices to genetic tests. But the problem with this is that contemporary genetic tests will almost certainly fail to meet these criteria, and I don’t see who benefits from that scenario. Genetic tests are simply not – at least not yet – medical devices in any meaningful sense of the word. They are far closer to family history than to an accurate diagnostic. The FDA and companies like 23andme need to come up with standards for accurately and honestly describing the current state of knowledge for genotype-phenotype linkages and their application to individual genotypes. They need to establish what generic statements can and can not be used to market genetic tests so that people don’t purchase them with unrealistic expectations about the kinds of information they will provide. Let’s hope this flareup between the FDA and 23andme is the spark that finally makes this happen.

 


PubMed Commons: Post publication peer review goes mainstream

I have written a lot about how I think the biggest problem in science communication today is the disproportionate value we place on where papers are published when assessing the validity and import of a work of science, and the contribution of its authors. And I have argued that the best way to change this is to develop a robust system of post publication peer review (PPPR), in which works are assessed continuously after they are published so that flaws can be identified and corrected and so that the most credit is reserved for works that withstand the test of time.

There have been LOTS of efforts to get post-publication peer review off the ground – usually in the form of comments on a journal’s website – but these have, with few exceptions, failed to generate sustained use. There are lots of possible reasons for this – from poor implementation, to lack of interest on the part of potential discussants. However, I’ve always felt the biggest flaw was that these were on journal websites – that you had to think about where the work was published, and whether they had a commenting system, and whether you had an account, etc…

What we’ve always needed was a central place where you know you can always go to record comments on a paper you are reading, and, conversely, where you can get all of the comments other scientists have on a paper you’re reading or are interested in. There have been a couple of services that have tried to create such a system – cf PubPeer, which lets you comment on any paper in PubMed – but they have been slow to gain traction in the community.

The obvious place to build such a commenting/post publication review system has always been directly in PubMed – it has everything and everyone already uses it. This is why I am excited – and cautiously optimistic – about a new project called PubMed Commons that will allow registered users (for now primarily NIH grantees) to post comments on any paper in PubMed, which will then appear alongside the paper when it is retrieved in a search.

Here is how PubMed Commons describes itself:

PubMed Commons is a system that enables researchers to share their opinions about scientific publications. Researchers can comment on any publication indexed by PubMed, and read the comments of others.

PubMed Commons is a forum for open and constructive criticism and discussion of scientific issues. It will thrive with high quality interchange from the scientific community.

The system is still pretty threadbare – it only allows simple commenting, and not, for example, rating of the work – but I’ve used it and it is easy to get in, comment and get out. A lot more info on the project can be found here.

This is a great opportunity for us to make PPPR real. But it’s only going to work if people participate. So, if you’re an NIH grantee, and you want to see science communication improve, make a commitment to comment on a paper you’ve read at least once a week, and let’s make this thing work!!


GMOs and pediatric cancer rates #GMOFAQ

There’s a post being highlighted by anti-GMO activists on Twitter that claims that cancer is now the leading cause of death among children in the US, that the rates of pediatric cancer are increasing and that this is because of GMOs. This is another egregious example of the willingness of anti-GMO campaigners to lie to the public in order to scare them and promote their agenda.

A simple look at data exposes the absurdity of their claims:

1) Cancer is not the leading cause of death among children in the United States

The Centers for Disease Control publishes annual statistics on the leading causes of death in the US broken down by age. These data show that malignant neoplasms are a serious problem – killing over 1,000 children under the age of 14 every year – making it the leading cause of disease-related death in children. But accidents remain the major cause of death by far.

One other thing to note from this table is the absence of infectious diseases from the top 5 causes in any age group. This was not always the case, and is almost entirely the result of vaccination, another evil of modern science often highlighted by the same people who oppose GMOs.

2) Childhood cancer rates are not increasing

Another claim cited by the anti-GMO crowd is that childhood cancer rates are increasing at an “alarming rate”. Again, the data say otherwise. Here is a report from the National Cancer Institute looking at rates of childhood cancer from 1988 to 2008 that shows that they are virtually unchanged.

[Figure: childhood cancer incidence rates, 1988–2008, from the National Cancer Institute report]

3) There is no evidence that GMOs cause childhood cancer

If GMOs caused childhood cancer, you would expect there to be some difference in the rate of childhood cancer in the US after the introduction of GMOs into the US food supply in 1995. However the rate of childhood cancer has remained unchanged from its pre-1995 levels.

Childhood cancer is a horrible, horrible thing. We should do everything in our power to prevent and better treat it so that cancer, like infectious disease, disappears from statistics on childhood mortality. But it doesn’t do anyone any good to misrepresent the statistics in the name of a political agenda. So please anti-GMO campaigners, stop making stuff up, and stop using false statistics to try to scare people.


I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals

In 2011, after having read several really bad papers in the journal Science, I decided to explore just how slipshod their peer-review process is. I knew that their business depends on publishing “sexy” papers. So I created a manuscript that claimed something extraordinary - that I’d discovered a species of bacteria that uses arsenic in its DNA instead of phosphorus. But I made the science so egregiously bad that no competent peer reviewer would accept it. The approach was deeply flawed – there were poor or absent controls in every figure. I used ludicrously elaborate experiments where simple ones would have done. And I failed to include a simple, obvious experiment that would have definitively shown that arsenic was really in the bacteria’s DNA. I then submitted the paper to Science, punching up the impact the work would have on our understanding of extraterrestrials and the origins of life on Earth in the cover letter. And what do you know? They accepted it!

My sting exposed the seedy underside of “subscription-based” scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic - in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there who do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it. 

OK – this isn’t exactly what happened. I didn’t actually write the paper. Far more frighteningly, it was a real paper that contained all of the flaws described above that was actually accepted, and ultimately published, by Science.

I am dredging the arsenic DNA story up again, because today’s Science contains a story by reporter John Bohannon describing a “sting” he conducted into the peer review practices of open access journals. He created a deeply flawed paper about molecules from lichens that inhibit the growth of cancer cells, submitted it to 304 open access journals under assumed names, and recorded what happened. Of the 255 journals that rendered decisions, 157 accepted the paper, most with no discernible sign of having actually carried out peer review. (PLOS ONE rejected the paper, and was one of the few to flag its ethical flaws.)

The story is an interesting exploration of the ways peer review is, and isn’t, implemented in today’s biomedical publishing industry. Sadly, but predictably, Science spins this as a problem with open access. Here is their press release:

Spoof Paper Reveals the “Wild West” of Open-Access Publishing

A package of news stories related to this special issue of Science includes a detailed description of a sting operation — orchestrated by contributing news correspondent John Bohannon — that exposes the dark side of open-access publishing. Bohannon explains how he created a spoof scientific report, authored by made-up researchers from institutions that don’t actually exist, and submitted it to 304 peer-reviewed, open-access journals around the world. His hoax paper claimed that a particular molecule slowed the growth of cancer cells, and it was riddled with obvious errors and contradictions. Unfortunately, despite the paper’s flaws, more open-access journals accepted it for publication (157) than rejected it (98). In fact, only 36 of the journals solicited responded with substantive comments that recognized the report’s scientific problems. (And, according to Bohannon, 16 of those journals eventually accepted the spoof paper despite their negative reviews.) The article reveals a “Wild West” landscape that’s emerging in academic publishing, where journals and their editorial staffs aren’t necessarily who or what they claim to be. With his sting operation, Bohannon exposes some of the unscrupulous journals that are clearly not based in the countries they claim, though he also identifies some journals that seem to be doing open-access right.

Although it comes as no surprise to anyone who is bombarded every day by solicitations from new “American” journals of such-and-such seeking papers and offering editorial positions to anyone with an email account, the formal exposure of hucksters out there looking to make a quick buck off of scientists’ desires to get their work published is valuable. It is unacceptable that there are publishers – several owned by big players in the subscription publishing world – who claim that they are carrying out peer review, and charging for it, but not doing it.

But it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than that the study didn’t do the control of submitting the same paper to subscription-based publishers (UPDATE: The author, Bohannon, emailed to say that, while his original intention was to look at all journals, practical constraints limited him to OA journals, and that Science played no role in this decision). We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper (it has many features in common with the arsenic DNA paper after all). Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity. When Elsevier and other big commercial publishers pitch their “big deal”, the main thing they push is the number of papers they have in their collection. And one look at many of their journals shows that they also will accept almost anything.

None of this will stop anti-open access campaigners (hello Scholarly Kitchen) from spinning this as a repudiation of open access for enabling fraud. But the real story is that a fair number of journals that actually carried out peer review still accepted the paper, and the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high impact journals – that are deeply, deeply flawed – the arsenic DNA story is one of many recent examples. As you probably know, there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work.

And the real problem isn’t that some fly-by-night publishers hoping to make a quick buck aren’t even doing peer review (although that is a problem). While some fringe OA publishers are playing a short con, subscription publishers are seasoned grifters playing a long con. They fleece the research community of billions of dollars every year by convincing them of something manifestly false – that their journals and their “peer review” process are an essential part of science, and that we need them to filter out the good science – and the good scientists – from the bad. Like all good grifters playing the long con, they get us to believe they are doing something good for us – something we need. While they pocket our billions, with elegant sleight of hand they get us to ignore the fact that crappy papers routinely get into high-profile journals simply because they deal with sexy topics.

But unlike the fly-by-night OA publishers who steal a little bit of money, the subscription publishers’ long con has far more serious consequences. Not only do they traffic in billions rather than thousands of dollars, denying the vast majority of people on Earth access to the findings of publicly funded research, but the impact and glamour they sell us to make us willing participants in their grift have serious consequences. Every time they publish a paper because it is sexy, and not because it is right, science is distorted. It distorts research. It distorts funding. And it often distorts public policy.

To suggest – as Science (though not Bohannon) is trying to do – that the problem with scientific publishing is that open access enables internet scamming is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.

There are deep problems with science publishing. But the way to fix this is not to curtail open access publishing. It is to fix peer review.

First, and foremost, we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important. Even when people take peer review seriously, it still just represents the views of 2 or 3 people at a fixed point in time. To invest the judgment of these people with so much meaning is nuts. And it’s far worse when the process is distorted – as it so often is – by the desire to publish sexy papers, or to publish more papers, or because the wrong reviewers were selected, or because they were just too busy to do a good job. If we had, instead, a system where the review process was transparent and persisted for the useful life of a work (as I’ve written about previously), none of the flaws exposed in Bohannon’s piece would matter.


NASA paywalls first papers arising from Curiosity rover, I am setting them free

The Mars Curiosity rover has been a huge boon for NASA – tapping into the public’s fascination with space exploration and the search for life on other planets. Its landing was watched live by millions of people, and interest in the photos and videos it is collecting is so great that NASA has had to relocate its servers to handle the demand.

So what does NASA do to reward this outpouring of public interest (not to mention the $2.5 billion in taxpayer dollars that made it possible)? They publish the first papers to arise from the project behind Science magazine’s paywall:

 

[Screenshot: the first Curiosity papers listed behind the Science magazine paywall]

There’s really no excuse for this. The people in charge of the rover project clearly know that the public are intensely interested in everything they do and find. So I find it completely unfathomable that they would forgo this opportunity to connect the public directly to their science. Shame on NASA.

This whole situation is even more absurd, because US copyright law explicitly says that all works of the federal government – which these surely are – are not subject to copyright. So, in the interests of helping NASA and Science Magazine comply with US law, I am making copies of these papers freely available here:

Update: Copyright

For those interested in the issue of copyright in works of the US government, please see the following:

Section 105 of US Copyright Act, which states:

Copyright protection under this title is not available for any work of the United States Government, but the United States Government is not precluded from receiving and holding copyrights transferred to it by assignment, bequest, or otherwise.

House Report 94-1476 which details the reasoning behind this provision:

The effect of section 105 is intended to place all works of the United States Government, published or unpublished, in the public domain. This means that the individual Government official or employee who wrote the work could not secure copyright in it or restrain its dissemination by the Government or anyone else, but it also means that, as far as the copyright law is concerned, the Government could not restrain the employee or official from disseminating the work if he or she chooses to do so. The use of the term “work of the United States Government” does not mean that a work falling within the definition of that term is the property of the U.S. Government.

The only ambiguity in the case of these Curiosity papers is that not all of the authors are US Government employees, and thus the work is, I am told, “co-owned” by the authors. I’m not sure what effect this has on the ability of Science magazine to assert copyright in the work, since, at best, they are doing so at the behest of only a subset of the authors. The law makes it clear that its intent is to direct the US government authors to place the work in the public domain, and that any agreement they enter into to restrict access to the work is invalid. This is why I view the practice of taking works authored (and funded) by the US government and placing them behind paywalls as illegitimate.

Update 2: JPL has now posted the articles on their site 

As of today these articles are now available to download from the JPL website. I assume this was done in response to this post and the attention it received. (They were not there on the 26th when the press releases went out – I looked. And you can see from the PDFs that they weren’t downloaded from the Science website until the 27th.) Let’s hope that in the future all NASA papers – and indeed the results of all government funded research – are made immediately freely available.


With its HeLa genome agreement, the NIH embraces an expansive definition of familial consent in genetics

I wrote before about the controversy involving the release earlier this year of a genome sequence of the HeLa cell line, which was taken without consent from Henrietta Lacks as she lay dying of cervical cancer in 1950s Baltimore.

Now, the NIH has announced an agreement with Lacks’ descendants to obtain their consent for access to and use of the HeLa genome (the agreement applies only to NIH funded research, but the hope is that others will agree to it as well).

I think the NIH handled this reasonably well. There’s no way to go back and obtain consent from Henrietta Lacks, so one could reasonably argue that nobody should be allowed to use HeLa cells ever or to generate and use information derived from their genome. But that seems like too harsh a judgment, especially given the pride the family takes in the use of Henrietta’s cells for research.

So I think it’s entirely reasonable, in this case, to give the family the right to consent for use of these cells, and to impose whatever restrictions on the use they see fit.

However, there are some issues raised by this case and this decision that warrant further discussion.

First, exactly when, and under what conditions, should someone’s heirs be able to consent on their behalf? It sounds like there was broad consensus from the Lacks family about how to handle this. But what if there hadn’t been? Does the consent right pass down strictly to one’s legal heirs? And maybe more relevant to existing use of clinical samples, many consent documents allow people donating samples to withdraw their consent in the future. Does that right also pass down to one’s heirs?

Second, and to me more importantly, is the issue I raised previously with respect to Rebecca Skloot’s op-ed on the topic. In both her piece, and in the editorial by Francis Collins and Kathy Hudson, there is mention of the need not just to make up for the lack of original consent, but to protect the genetic privacy of the Lacks family. The notion is that, because they are so publicly associated with HeLa cells, anything that is discovered about these cells will immediately be associated with members of the family. And with the decision announced today, the NIH is explicitly giving the Lacks family the right to veto uses of HeLa cells, not because Henrietta would not have consented to the use in 1951, but because they view it as an invasion of their privacy today.

This is indeed an issue, but it is a very different one than original consent. And unlike the original consent issue – which can be argued as applying narrowly to the HeLa case – the privacy issue applies to all genomic data, whether properly consented or not. Collins and Hudson talk about “de-identified” samples in their essay, ignoring the now abundant evidence that one can almost trivially deduce the donor of a clinical sample from a small amount of DNA sequence and the use of public databases of genetic information.
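To see why “de-identification” offers so little protection, here is a toy illustration of genotype re-identification: matching a handful of SNP calls from an ostensibly anonymous sample against a public genotype database. All names and genotypes below are made up, and real re-identification attacks are more sophisticated, but the basic logic really is this simple.

```python
# Toy illustration of genotype re-identification: even a small number of SNP
# calls from a "de-identified" sample can uniquely match one entry in a public
# genotype database. All data here are made up.

def matches(sample_snps, db_entry):
    """True if every SNP call in the sample agrees with the database entry."""
    return all(db_entry.get(rsid) == genotype for rsid, genotype in sample_snps.items())

public_database = {
    "participant_017": {"rs1": "AA", "rs2": "CT", "rs3": "GG", "rs4": "AG"},
    "participant_018": {"rs1": "AG", "rs2": "CC", "rs3": "GG", "rs4": "AA"},
    "participant_019": {"rs1": "AA", "rs2": "CT", "rs3": "GT", "rs4": "AG"},
}

# A handful of calls from an ostensibly anonymous clinical sample.
anonymous_sample = {"rs1": "AA", "rs2": "CT", "rs3": "GG"}

hits = [name for name, entry in public_database.items() if matches(anonymous_sample, entry)]
print(hits)  # ['participant_017'] -- a unique match from just three SNPs
```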

Thus, in the near future, any human genetic data out there will be subject to the same risk that the Lacks family now faces. We can’t set up a panel of family members for each of the tens of thousands of samples that will soon be out there. And even if we could, I don’t think we should. There is no sensible or even workable way to require familial consent for the use of someone’s genetic material.

We believe in the absolute right of individuals to make decisions about how samples obtained from them can be used. But the very nature of inheritance and genetics means that every decision they make by necessity affects other individuals – close relatives most acutely, but by no means exclusively. Figuring out how we deal with this is one of the major practical and philosophical challenges of the age of genetic information, and even though Collins and Hudson chose to punt this issue down the road in the name of comity with the Lacks family, it is an issue we are going to grapple with very soon.

And I am disturbed that the Director of the NIH has, in effect, embraced an extreme position on this issue – that families have the right to veto uses of someone else’s DNA.


Let’s not get too excited about the new UC open access policy

It was announced today that the systemwide Academic Senate representing the 10 campuses of the University of California had passed an “open access” policy.

The policy will work like this. Before assigning copyright to publishers, all UC faculty will grant the university a non-exclusive license to make the works freely available, provide the university with a copy of the work, and select a Creative Commons license under which it will be made freely available in UC’s eScholarship archive.

A lot of work went into passing this, and its backers – especially UCLA’s Chris Kielty – are to be commended for the cat herding process required to get it through UC’s faculty governance process.

I’m already seeing lots of people celebrating this step as a great advance for open access. But color me skeptical. This policy has a major, major hole – a faculty opt-out. It is there because enough faculty wanted the right to publish their works in ways that were incompatible with the policy that it would not have passed without the provision.

Unfortunately, this means that the policy is completely toothless. It provides a ready means for people to make their works available – which is great. And having the default be open is great. But nobody is compelled to do it in any meaningful way – therefore it is little more than a voluntary system.

More importantly, the opt-out provides journals with a way of ensuring that works published in their journals are not subject to the policy. At UCSF and MIT and other places, many large publishers, especially in biomedicine, are requiring that authors at institutions with policies like the UC policy opt out of the system as a condition of publishing. At MIT, these publishers include AAAS, Nature, PNAS, Elsevier and many others.

We can expect more and more publishers to demand opt-outs as the number of institutions with open/public access policies grows. In the early days of such “green” open access, publishers were pretty open about allowing authors to post manuscript versions of their papers in university archives. They were open because there was no cost to them. Nobody was going to cancel a subscription because they could get a tiny fraction of the articles in a journal for free somewhere on the internet.

However, as more universities – especially big ones like UC – move towards institutional archiving policies, an increasing fraction of the papers published in subscription journals could end up in archives – which WOULD threaten their business models. So, of course (and as I and others predicted a decade ago), subscription publishers are now doing their best to prevent these articles from becoming available.

So long as the incentives in academia push people to publish in journals of high prestige, authors are going to do whatever the journal wants with respect to voluntary policies at their universities. And so, we’re really back to where we were before. Faculty can make their work freely available if they want to – and now have an extra way to do it. But if they don’t want to, they don’t have to.

The only way this is going to change is if universities create mandatory open access policies – with no opt-outs or exceptions. But this would likely require actions from university administrators who have, for decades, completely ignored this issue.

So don’t get me wrong. I’m happy the faculty senate at UC did something, and I think the eScholarship repository will likely become an important source of scholarly papers in many fields, and the use of CC licenses is great. And maybe the opt out will be eliminated as the policy is reviewed (I doubt it). But, because of the opt out, this is a largely symbolic gesture – a minor event in the history of open access, not the watershed event that some people are making it out to be.


Those who deny access to history are condemned repeatedly

One of the most disappointing aspects of the push for open access to scholarly works has been the role of scholarly societies – who have, with precious few exceptions, emerged as staunch defenders of the status quo.

In the sciences – where most of the open access battles have been fought – anti-OA stances from societies have been driven by the desire to protect revenue streams from society-run journals. I had always hoped that the humanities – less corrupted by money as they are – would embrace openness in ways that science has been slow to do. Ahh for the naïveté of youth.

At my own institution – UC Berkeley – efforts to pass a fairly tepid “open access” policy were thwarted by humanities scholars who felt a requirement that faculty at a public institution make their work publicly available represents some kind of assault on academic freedom. But that is nothing compared to an absurd statement released this week by the American Historical Association.

The gist of the AHA’s statement is this: they want universities that require their recently minted PhD’s to make copies of their theses freely available online to grant a special exemption to historians, allowing them to embargo access to their work for up to six years.

The ostensible reason for this embargo request is to defend the ability of junior faculty to get their theses published in book form by a scholarly press – something they claim online access precludes. Here is their explanation:

By endorsing a policy that allows embargos, the AHA seeks to balance two central though at times competing ideals in our profession–on the one hand, the full and timely dissemination of new historical knowledge; and, on the other, the unfettered ability of young historians to revise their dissertations and obtain a publishing contract from a press.  We believe that the policy recommended here honors both of these ideals by withholding the dissertation from online public access, but only for a clearly stated, limited amount of time, and by encouraging other, more traditional forms of availability that would insure a hard copy of the dissertation remains accessible to scholars and all other interested parties.

They are basically arguing that, because of the tenure practices of universities, the history literature should remain imprisoned in print form – and that scholars without access to print copies should be denied timely access to this material - unless you think six years is timely.

What really galls me about this is that the AHA takes the way that academia works as a given. Yes, IF university presses refuse to publish books based on theses available online, and IF universities require such books for tenure, then young historians whose theses are made available online without an embargo are at a disadvantage. I’ve heard this from lots of young humanities scholars – and while I would dispute the extent to which it’s true, people really feel this way.

But shouldn’t the response to this sad situation by the leading organization representing academic historians – many of whom are in leadership positions at universities across the country – be to, you know, actually lead? Instead of a reactionary call for embargoes, they SHOULD have said something like this:

The way scholars in our field are evaluated is broken – so broken, in fact, that a young scholar in our field feels immense pressure to hide their work from public view for years so that they can cater to antiquated policies from our presses and our universities. The inability of our field to take full advantage of the internet as a means of dissemination should be a wakeup call for all of us in the field – and the AHA is committed to using our pull, and that of our members, to reform our presses and alter the rules for tenure at our institutions as rapidly as possible.

Shame on the AHA for being yet another scholarly society to let down the scholars they represent.


New Preprint: Uniform scaling of temperature dependent and species specific changes in timing of Drosophila development

We posted a new preprint from the lab on arXiv and would love your comments.

This work was born of our efforts to look at evolution of transcription factor binding in early embryos across Drosophila. When we started doing experiments comparing the three most commonly studied species, the model D. melanogaster, D. pseudoobscura and D. virilis, we quickly ran into an issue: even though these species look superficially fairly similar, and develop in roughly the same way, they don’t really like to live at the same temperature, and even when they are grown in common conditions, they develop at different rates. So, for example, in order to collect an identical sample of stages from D. melanogaster and the slower-developing D. virilis, you have to collect for different amounts of time – and we had no real idea of how this would affect the measurements we are making. And if you want to compare the tropical D. melanogaster to the cold-preferring D. pseudoobscura, you can either choose to collect at temperatures that neither prefers (21C) or grow them under different conditions, again with no clear understanding of how these differences affect our measurements.

So, a few years ago, a new postdoc in the lab (Steven Kuntz) decided to look at this question in more detail. He first developed methods to take time-lapse movies of developing embryos at carefully controlled temperatures, and then proceeded to characterize the development of 11 Drosophila species (all with fully-sequenced genomes) from different climates at eight temperatures ranging from 17.5C to 35C. He then developed a combination of manual and automated ways to identify 34 key developmental landmarks in each movie.

As was already well known, D. melanogaster development accelerates at higher temperatures, taking around 2,000 minutes at 17.5C but just over 1,000 minutes at 32.5C.

Timing of D. melanogaster development at different temperatures

We observed similar overall trends for the other species, with the other tropical species (D. simulans, D. sechellia, D. erecta, D. yakuba, D. ananassae and D. willistoni) showing similar patterns to D. melanogaster, while the temperate species (D. virilis and D. mojavensis) and alpine species (D. pseudoobscura and D. persimilis) were consistently slower even when grown at identical temperatures. The tropical species all started to show effects of high temperature (lower viability and slower development) at 32.5C, while the alpine species showed even greater effects at the cooler 30C.

Effects of temperature on development time for 11 Drosophila species

 

There’s a lot more in the paper about both of these issues, but the thing that I find really amazing is that, despite all of this variation in developmental timing both between species and at different temperatures, the relative timing of the 34 events we measured was virtually identical in all species and conditions. Indeed we find no statistically significant differences in the relative timing of any event between the initial cellularization of the blastoderm and hatching.
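The normalization behind this comparison is simple, as the sketch below shows: express each landmark as a fraction of total development time and compare those proportions across conditions. The numbers here are made up for illustration; the real analysis uses 34 landmarks across 11 species and eight temperatures.

```python
# Minimal sketch of proportional developmental timing: divide the absolute time
# of each landmark by the total time to hatching, then compare the resulting
# proportions across temperatures (or species). Numbers are invented.
import numpy as np

# Absolute times (minutes) of a few landmarks at two temperatures for one species.
landmarks_17C = np.array([180.0, 600.0, 1300.0, 2000.0])  # last value = hatching
landmarks_32C = np.array([95.0, 310.0, 660.0, 1020.0])

relative_17C = landmarks_17C / landmarks_17C[-1]
relative_32C = landmarks_32C / landmarks_32C[-1]

print(relative_17C)  # roughly [0.09 0.30 0.65 1.00]
print(relative_32C)  # nearly the same proportions despite ~2x faster development
```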

Proportional developmental time between species and at different temperatures

 

I find this almost perfect conservation of the relative timing of development across these diverse species and conditions stunning – and very much counter to what I expected – which was that different stages, which involve very different molecular and cellular processes, would be differentially affected by temperature, and that either selection or drift would have led to variation in relative timing between species. While there are lots of possible explanations for this phenomenon, the most straightforward is that developmental timing is controlled by some kind of master clock that scales with first-order kinetics with temperature, and which is the major target for interspecies differences in developmental timing. If true, this would be quite remarkable.
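To make the kinetics idea concrete, here is a sketch of the sort of fit it implies: if developmental rate follows an Arrhenius relationship, the log of the rate should be roughly linear in inverse absolute temperature. The data points are invented to loosely mirror the D. melanogaster numbers quoted above; this is an illustration, not the analysis from the preprint.

```python
# Sketch of an Arrhenius fit for a clock that scales with first-order kinetics:
# ln(1/development_time) should be roughly linear in 1/T (temperature in kelvin).
# Data points are made up to loosely mirror the numbers quoted above.
import numpy as np

temps_c = np.array([17.5, 22.5, 27.5, 32.5])
dev_time_min = np.array([2000.0, 1550.0, 1250.0, 1050.0])  # illustrative values

inv_T = 1.0 / (temps_c + 273.15)      # 1/K
ln_rate = np.log(1.0 / dev_time_min)  # log developmental rate

slope, intercept = np.polyfit(inv_T, ln_rate, 1)
activation_energy = -slope * 8.314    # J/mol, from slope = -Ea/R
print(f"Apparent activation energy ~ {activation_energy / 1000:.0f} kJ/mol")
```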

If you’ve gotten this far, you’re obviously reasonably interested in the topic. As I’ve written before, we are now posting all of our lab’s papers on arXiv prior to submitting them to a journal, and we invite your comments and criticism, with the hope that this kind of open peer review will not only make this paper better, but will serve as a model for the way we all should be communicating our work with our colleagues and interacting with them to discuss our work after it is published.

We’re going to try out PubPeer for comments on this paper. Please use this link to comment.


A CHORUS of boos: publishers offer their “solution” to public access

As expected, a coalition of subscription based journal publishers has responded to the White House’s mandate that federal agencies develop systems to make the research they fund available to the public by offering to implement the system themselves.

This system, which they call CHORUS (for Clearinghouse for the Open Research of the United States) would set up a site where people could search for federally funded articles, which they could then retrieve from the original publisher’s website. There is no official proposal, just a circulating set of principles along with a post at the publisher blog The Scholarly Kitchen and a few news stories (1,2), so I’ll have to wait to comment on details. But I’ve seen enough to know that this would be a terrible, terrible idea – one I hope government agencies don’t buy in to.

The Association of American Publishers, who are behind this proposal, have been, and continue to be, the most vocal opponent of public access policies. They have been trying for years to roll back the NIH’s Public Access Policy and to defeat any and all efforts to launch new public access policies at the federal and state levels. And CHORUS does not reflect a change of heart on their part – just last month they filed a lengthy (and incredibly deceptive) brief opposing a bill in the California Assembly that would provide public access to state-funded research.

Putting the AAP in charge of implementing public access policies is thus the logical equivalent of passing a bill mandating background checks for firearms purchasing and putting the NRA in charge of developing and operating the database. They would have no interest in making the system any more than minimally functional. Indeed, given that the AAP clearly thinks that public access policies are bad for their businesses, they would have a strong incentive to make their implementation of a public access policy as difficult to use and as functionless as possible in order to drive down usage and make the policies appear to be a failure.

You can already see this effect at work – the CHORUS document makes no mention of enabling, let alone encouraging, text mining of publicly funded research papers, even though the White House clearly stated that these new policies must enable text mining as well as access to published papers. Subscription publishers have an awful track record in enabling reuse of their content, and nobody should be under any illusions that CHORUS will be any different.

The main argument the CHORUS publishers are making to funding agencies is that allowing them to implement a solution will save the agencies money, since they would not have to develop and maintain a system of their own, and would not have to pay to convert author manuscripts into a common, distributable format. But this is true only if you look at costs in the narrowest possible sense.

First, there is no need for any agency to develop their own system. The federal government already has PubMed Central – a highly functional, widely used and popular system. This system already does everything CHORUS is supposed to do, and offers seamless full-text searching (something not mentioned in the CHORUS text), as well as integration with numerous other databases at the National Library of Medicine. It would not be costless to expand PMC to handle papers from other agencies, and there would be some small costs associated with handling each submitted paper. However, these costs would be trivial compared to the costs of funding the research in question, and would produce tremendous value for the public. What’s more, most of these costs would be eliminated if publishers agreed to deposit their final published version of the paper directly to PMC – something most have steadfastly refused to do.

But even if we stipulate that running their own public access systems would cost agencies some money, the idea that CHORUS is free is risible. There is a reason most subscription publishers have opposed public access policies – they are worried that, as more and more articles become freely available, their negotiating position with libraries will be weakened and they will lose subscription revenues as a consequence. Since a large fraction of these subscription revenues (on the order of 10%, or around $1 billion/year) come from the federal government through overhead payments to libraries, the federal government stands to save far, far, far more money in lower subscription expenditures than even the most gilded public access system could ever cost to develop and operate.

CHORUS is clearly an effort on the part of publishers to minimize the savings that will ultimately accrue to the federal government, other funders and universities from public access policies. If CHORUS is adopted, publishers will without a doubt try to fold the costs of creating and maintaining the system into their subscription/site license charges – they routinely ask libraries to pay for all of their “value added” services. Thus not only would potential savings never materialize, the government would end up paying the costs of CHORUS indirectly.

Publishers desperately want the federal agencies covered by the White House public access policy to view CHORUS as something new and different – the long awaited “constructive” response from publishers to public access mandates. But there is nothing new here. Publishers proposed this “link out” model when PMC was launched and when the NIH Public Access policy came into effect, and it was rejected both times. Publishers hate PMC not because it is expensive, or even because it leads to a (small) drop in their ad revenue. They hate it because it works, is popular and makes most people who use it realize that we don’t really need publishers to do all the things they insist only they can do.

CHORUS is little more than window dressing on the status quo – a proposal that would not only undermine the laudable goals of the White House policy, but would invariably cost the government money. Let’s all hope this CHORUS is silenced.

 

 
