Door-to-door subscription scams: the dark side of The New York Times

An article appeared on the front page of the Sunday New York Times purporting to expose a “parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them”.

The story describes the experience of some unnamed scientists who accepted an email invitation to a conference, which then charged them for participating, and of some other scientists who, based on an email solicitation, submitted papers to a journal they had never heard of and were later charged hefty fees for doing so.

Somehow, in the mind of author Gina Kolata, this is all PLoS’s fault; she quotes someone who calls this phenomenon the “dark side of open access”.

Here is her logic:

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

There’s so much that is wrong with this I don’t know where to start.

First, this IS a real phenomenon. I get several emails every day from some dubious conference inviting me to speak or some sketchy journal asking me to be on their editorial board or to submit an article. However, these solicitations are so obviously not legitimate that I can’t believe anyone falls for them. To suggest this is some kind of dangerous trend based on a few anecdotes is ridiculous.

And yes, a lot of these suspect journals charge authors for publishing their works, just like open access journals like PLoS do. But suggesting, as the article does, that scam conferences/journals exist because of the rise of open access publishing is ridiculous. It’s the logical equivalent of blaming newspapers like the NYT for people who go door-to-door selling fake magazine subscriptions.

Long before the Internet, publishers discovered that launching new journals was like printing money – something Elsevier specialized in for decades, launching hundreds of new journals with hastily assembled editorial boards and then turning around and demanding that libraries subscribe to these journals as part of their “Big Deal” bundles of journals. These journals succeeded because there are always researchers looking for a place to put their papers, and many of these new journals greased the wheels by having fairly lax standards for publication.

The same is true for conferences. For as long as I can remember I’ve been receiving solicitations to attend and/or speak at conferences organized by for-profit firms like Cambridge Health Tech that seem to cobble together sets of speakers from whomever they could get to accept – taking advantage of scientists’ desire to put “invited speaker” on their CVs – and then charging scientists, often from industry where travel budgets are bigger, to attend. I am sure some of these meetings are useful to some people (I’ve never been to one; some people tell me they’re basically junkets with little scientific merit, while others say they are very useful) – but the idea that profiteering on people’s desire for prestige in science is something that came onto the scene with open access publishing is patently absurd.

The real explanation for the things described in the article is that it’s insanely easy to create conferences and journals and to send out blasts of emails to thousands of scientists hoping a few will take the bait. It’s science’s version of the Nigerian banking scams – something far more deserving of laughter than hand-wringing on the front page of the NYT.

But if Gina Kolata and the NYT are really concerned about scams in science publishing, they should look into the $10 BILLION of largely public money that subscription publishers take in every year in return for giving the scientific community access to the 90% of papers that are not published in open access journals – papers that scientists gave to the journals for free! This ongoing insanity not only fleeces huge piles of cash from government and university coffers, it denies the vast majority of the planet’s population access to the latest discoveries of our scientists. And if the price we pay for ending this insanity is a few gullible scientists falling for open access spam, it’s worth it a million times over.

Posted in open access | Comments closed

Toxoplasma, Cat Piss and Mouse Brains: my lab’s first paper on microbial manipulation of animal behavior

All animals live in a microbe-rich environment, with immense numbers of bacteria, archaea, fungi and other eukaryotic microbes living in, on and around them. For some of these microbes, the association is transitory and unimportant, but many make animals their permanent home, or interact with them in ways that are vital for their survival. Many members of an animal’s “microbiome” are affected by, and often become dependent on, aspects of the animal’s behavior. And, as microbes will do, some – and we believe many – of these microbes have evolved specific ways to manipulate the behavior of their animal neighbors to their advantage.

My lab has begun to study several such systems, seeking to discover the molecular mechanisms that underlie these fascinating microbial adaptations – none of the several dozen cases in which microbial manipulation of animal behavior has been documented is understood in molecular detail.

One of these systems involves the eukaryotic parasite Toxoplasma gondii, which reproduces clonally in most (if not all) warm-blooded animals but – for unknown reasons – reproduces sexually only in the digestive system of cats. Thus, in order to complete the Toxo lifecycle, an infected animal has to be eaten by a cat. This creates a conflict of interest between Toxo, who wants its host to be eaten by a cat, and the host, who would rather NOT be eaten by a cat. Indeed, this “I don’t want to be eaten by a cat” effect is so strong that many animals have evolved an innate fear of all things cat – especially their smells.

For example, if you take a laboratory mouse and put him (for a variety of reasons we usually do these experiments with males) in a box with a bowl of water, he will largely ignore it. Swap out the water and put in something the mouse has no reason to fear – like rabbit urine – and he still more or less ignores it. Swap that out and put in cat urine, and it’s a whole different ball game – the mouse spends most of its time on the other side of the cage.
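For what it’s worth, the avoidance in assays like this is typically quantified as the fraction of trial time the animal spends near the odor source. Here is a minimal sketch in Python with entirely hypothetical numbers – the function name, the two-zone setup and the 600-second trial are my own illustration, not taken from any particular protocol:

```python
def aversion_index(time_near_odor_s, total_time_s):
    """Fraction of the trial spent in the zone near the odor source.

    In a two-zone arena, values near 0.5 indicate indifference;
    values well below 0.5 indicate avoidance.
    """
    if total_time_s <= 0:
        raise ValueError("total trial time must be positive")
    return time_near_odor_s / total_time_s

# Hypothetical measurements from a 600-second trial:
water = aversion_index(290, 600)   # ~0.48: mouse ignores the bowl
rabbit = aversion_index(270, 600)  # ~0.45: still roughly indifferent
cat = aversion_index(60, 600)      # 0.10: strong avoidance of cat urine
```

The interesting comparison is not any single number but the drop from the water/rabbit baseline to the cat-urine condition – and, in the infected animals discussed below, the disappearance of that drop.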

Amazingly, it seems that rodents infected with Toxo lose this innate fear of cats – possibly as a result of some property Toxo has evolved to increase the likelihood that it will end up in a cat’s tummy. Several papers have come out on the topic in recent years (from Robert Sapolsky’s lab at Stanford as well as others), but the molecular mechanism is unknown.

A graduate student – Wendy Ingram – became obsessed with this phenomenon and is pursuing it as a joint project between my lab and that of Ellen Robey (an immunologist who studies the host response to Toxo infection and the ways the parasite evades immune surveillance). Wendy has begun a bunch of experiments to examine this phenomenon – she is interested, in particular, in the role the immune system might play in mediating this response. Her first wave of experiments is done, and we have posted a preprint of a paper describing them on the arXiv.

Wendy first showed that the behavioral effect is robust and is general across Toxo (previous experiments had used only one of the three major North American variants of Toxo – Wendy showed the same effect in the other two subtypes). More interestingly, Wendy found that the effect was strong and persisted for months in an attenuated Toxo strain that – unlike the other strains we and others have examined – is not detectable in the brains of infected animals after a few weeks. This would seem to refute – or at least make less likely – models in which the behavioral effect is the result of direct physical action of parasites on specific parts of the brain. It’s just a start in trying to dissect a complicated phenomenon, but Wendy has a whole slew of follow-up experiments under way or in planning that should shed more light on what aspects of the innate fear response are being overridden and what, if any, role the immune system is playing.

As always, we welcome your thoughts and comments on the paper, released here as part of our commitment to make preprints of all of our lab’s papers available as soon as (if not before) we’re ready to submit them to a journal, and to make them available here for open peer review.

 

Posted in EisenLab preprints, microbial manipulation of animal behavior, My lab | Comments closed

The Past, Present and Future of Scholarly Publishing

I gave a talk last night at the Commonwealth Club in San Francisco about science publishing and PLoS. There will be an audio link soon, but, for the first time in my life, I actually gave the talk (largely) from prepared remarks, so I thought I’d post it here.

An audio recording of the talk with Q&A is available here.

——

On January 6, 2011, 24 year old hacker and activist Aaron Swartz was arrested by police at the Massachusetts Institute of Technology for downloading several million articles from an online archive of research journals called JSTOR.

After Swartz committed suicide earlier this year in the face of legal troubles arising from this incident, questions were raised about why MIT, whose access to JSTOR he exploited, chose to pursue charges, and what motivated the US Department of Justice to demand jail time for his transgression.

But the question that should have been asked is why downloading scholarly research articles was a crime in the first place. Why, twenty years after the birth of the modern Internet, is it a felony to download works that academics chose to share with the world?

The Internet, after all, was invented so that scientists could communicate their research results with each other. But while you can now get immediate, free access to 675 million videos of cats (I checked this number today), the scholarly literature – one of the greatest public works projects of all time – remains locked behind expensive pay walls.

Every year universities, governments and other organizations spend in excess of $10 billion to buy back access to papers their researchers gave to journals for free, while most teachers, students, health care providers and members of the public are left out in the cold.

Even worse, the stranglehold existing journals have on academic publishing has stifled efforts to improve the ways scholars communicate with each other and the public. In an era when anyone can share anything with the entire world at the click of a button, the fact that it takes a typical paper nine months to be published should be a scandal. These delays matter – they slow down progress and in many cases literally cost lives.

Tonight, I will describe how we got to this ridiculous place. How twenty years of avarice from publishers, conservatism from researchers, fecklessness from universities and funders, and a basic lack of common sense from everyone has made the research community and public miss the manifest opportunities created by the Internet to transform how scholars communicate their ideas and discoveries.

I will also talk about what some of us have been doing to liberate the scholarly literature – where we have succeeded and where there is more work to be done. And finally, with these efforts gaining traction, I will describe where we are going next.

While I talk, I want you to keep in mind that this is about more than just academic publications. This is about the future of the Internet and what we are willing to do, as individuals and societies, to ensure that information that should be free IS free. If we can’t figure out how to make scientific and scholarly works – most of which were funded by taxpayers and published by authors with no expectation of being paid – freely available, we will struggle to do it in cases where the conditions for free access are less ripe.

One last bit of introduction. I am a scientist, and so, for the rest of this talk, I am going to focus on the scientific literature. But everything I will say holds equally true for other areas of scholarship.

OK.

Most people date the birth of the modern scientific journal to the middle of the 17th century, when the Royal Society in England took advantage of the growing printing industry to begin publishing proceedings of their meetings for the benefit of members unable to attend, as well as for posterity.

But scholarly journals as we know them were really a product of the 19th century, when growing activity and public interest in science led to the creation of most of the big titles we know about today: Science, Nature, The New England Journal of Medicine, The Journal of the American Medical Association and The Lancet published their first editions in the 1800’s.

They had noble missions. For example, the preface to the first edition of Science in July 1880 stated that its goal was to  “afford scientific workers in the United States the opportunity of promptly recording the fruits of their researches, and facilities for communication between one another and the world”.

Like their predecessor, these journals were enabled by the technologies of the industrial revolution – steam-powered rotary printing presses and efficient rail-based mail service. But they were also severely limited by them. Printing and shipping articles around the country and the world was expensive, and because of this, two key features of modern journals were established.

First, journals limited what they printed, choosing for publication only those works deemed to be of the greatest interest to their target audience. And second, they sold subscriptions – sending copies only to those who had paid. While intrinsically restricting, this business arrangement made sense. Every printed copy of a journal incurred a cost to the publisher, and charging readers meant revenues scaled with costs.

As science grew, so too did science publishing, with increasingly specific journals emerging to cater to new disciplines. By 1990 there were around 5,000 scientific journals in circulation, all of them printed and shipped to subscribers. And the costs were skyrocketing. If you were lucky enough to be at a major research university, you could find most of these journals in the library. But most scientists had to make do with a small subset – whatever their library could afford. And the public was all but completely shut out.

Then along came the Internet.

Scientific journals, serving a computer-savvy audience with access to fast Internet connections through universities, were amongst the first commercial ventures to take advantage of this new technology. Within a few years – from 1995 to 1998 – virtually all major publishers put versions of their printed journals online.

But in doing so they made a crucial and fateful choice. Rather than adapting their business model to the new medium, they stuck with the same subscription-based system that they used for their print journals. And why not – so long as scientists were still giving them papers, and universities were buying them back, it was a great business. An even better one given that they no longer had to pay for printing and shipping.

But with this major shift in the means of dissemination, what was once a common sense way for publishers to provide a valuable service while dealing with the limitations of available technology became an irrational impediment to achieving this very goal.

To understand just how crazy this system is, you need to understand a bit more about how scientific journals work and what the life cycle of a scientific idea looks like.

Take your typical scientist at my home institution – the University of California Berkeley. She draws a salary from the state of California, and works in a building funded by the state. When she has a new idea, she goes out and raises money to buy equipment and supplies and to pay the salaries of the students and staff who will actually do the work. In all likelihood this money will come from the US government – through agencies like the NIH or NSF. And if not from them, from a public-minded non-profit or foundation like the Howard Hughes Medical Institute that funds my lab. This scientist and her students then spend a great deal of time – usually years – pursuing the idea, until they finally have a result they want to share with their peers.

So they sit down and write a paper describing why they were interested in the question, what they did, how they did it, what they found, and what they think it means.

And then they hopefully submit it to one of the 10,000 journals currently in operation – choosing based on scope and importance. With few exceptions, these journals work the same way. The paper is assigned to an editor – sometimes a salaried professional, but usually a practicing scientist volunteering their time. They read the paper and decide who in the field is in the best position to evaluate the authors’ methods, data and conclusions. They send the paper to these scientists – who again are volunteering their time as a service to the community – who read it and render their opinion on the paper’s technical merits and suitability to the journal in question. The editor looks at all these reviews and decides whether to accept, modify or reject the work. If the paper is accepted, the journal takes the manuscript, converts it into a publishable form, and posts it on the web. If the paper is not accepted, the scientists either go back and do some more work and rewrite the paper, or they send it to another journal, triggering a complete reprise of the entire process.

I want you to note just how little the journal actually does here.

They didn’t come up with the idea. They didn’t provide the grant. They didn’t do the research. They didn’t write the paper. They didn’t review it. All they did was provide the infrastructure for peer review, oversee the process, and prepare the paper for publication. This is a tangible, albeit minor, contribution, that pales in comparison to the labors of the scientists involved and the support from the funders and sponsors of the research.

And yet, for this modest at best role in producing the finished work, publishers are rewarded with ownership – in the form of copyright – of, and complete control over, the finished, published work, which they turn around and lease back to the same institutions and agencies that sponsored the research in the first place. Thus not only has the scientific community provided all the meaningful intellectual effort and labor to the endeavor, it is also fully funding the process.

Universities are, in essence, giving an incredibly valuable product  – the end result of an investment of more than a hundred billion dollars of public funds every year – to publishers for free, and then they are paying them an additional ten billion dollars a year to lock these papers away where almost nobody can access them.

It would be funny if it weren’t so tragically insane.

To appreciate just how bizarre this arrangement is, I like the following metaphor. Imagine you are an obstetrician setting up a new practice. Your colleagues all make their money by charging parents a fee for each baby they deliver. It’s a good living. But you have a better idea. In exchange for YOUR services you will demand that parents give every baby you deliver over to you for adoption, in return for which you agree to lease these babies back to their parents provided they pay your annual subscription fee.

Of course no sane parent would agree to these terms. But the scientific community has.

And the consequences are severe.

Even though the entire scientific and medical literature is, in principle, available at the click of a mouse to anyone with an Internet connection – very few people have access to the entirety of this information.

This is most obviously a problem for people facing important medical decisions who have no access to the most up-to-date research on their conditions – research their tax dollars paid for. In a world where patients are increasingly involved in health care decisions, and where all sorts of sketchy medical information is available online, it is criminal that they do not have access to high quality research on whatever ails them and potential ways to treat it.

Astonishingly, many physicians and health care providers also lack access to basic medical research. Journal subscriptions in medicine are very expensive, and most doctors have access to only a handful of journals in their specialty.

But this lack of access is not just important in the doctor’s office. Scores of talented scientists across the world are blind to the latest advances that could affect their research. And in this country students and teachers at high schools and small colleges are denied access to the latest work in the fields they are studying – driving them to learn from textbooks or Wikipedia rather than the primary research literature. Technology startups often cannot afford access to the basic research they are trying to translate into useful products.

And interested members of the public – like many of you – find it difficult to engage with scientific research. Is it any wonder that such a large fraction of the population rejects basic scientific findings when the scientific community thumbs its collective nose at them by making it impossible for them to read about what we’re doing with all of their money? Many in the publishing industry dismiss the idea that the public even wants to read scientific papers, pointing to their often highly technical language. But a major reason these papers are so inscrutable is that their authors conceive of their audience very narrowly – basically scholars in their field. And if you have no expectation that the public will read your work, you do not write it to be accessible to the public.

But even if you have no interest in ever reading a scientific paper, you should care deeply about this issue. Because in addition to pay walls, the balkanization of the scientific literature into hundreds of publisher fiefdoms stops researchers from developing new ways to organize, extract information from and improve the navigability and utility of the scientific literature. It is astonishing, for example, that to this day there is no dedicated search engine that allows you to search the full-text of every published scientific paper. This makes researchers less effective and limits the value we all get from the billions of dollars we invest in science every year.

And the greatest tragedy of all is that this is completely unnecessary.

Back in the 1990’s several people began promoting a simple alternative model. The idea was to treat science publishing like a service, with publishers getting paid a fee for the value they provide; but once this fee is paid, the finished product would effectively enter the public domain rather than the publisher’s private one.

One of the people pushing this new model – now known as “open access” – was my postdoctoral advisor at Stanford, Pat Brown, who enlisted me in his crusade. After failing to convince existing publishers to adopt this model – they generally met this idea with laughter if not outright hostility –  the two of us, along with former NIH Director Harold Varmus, launched a non-profit publisher – which we dubbed the Public Library of Science or PLOS – determined to prove that this model would work.

After all, universities were already forking over billions of dollars to support publishers. We were offering them a better deal – access for everyone at a lower price. But, while logic and value were on our side, and we got statements of support from within and outside the scientific community, when push came to shove, only a small group of pioneers joined us. And the reason was that publishers had one very powerful card up their sleeve.

Although scientists do not get paid when the papers they submit to research journals get published, they nonetheless receive something of very high value. Academia is an industry of prestige, and the currency in which prestige is traded is journal titles. In most scientists’ minds, a publication in an elite journal like Nature or Science is as good as gold – a ticket to a job, grants and tenure. And the allure of these publications is so high that most scientists continue to choose journals based entirely on their prestige, even while they acknowledge that their business practices are bad for science and the world.

Realizing that our biggest obstacle was overcoming the prestige of established subscription based journals, PLOS launched with two journals that adopted the same elitist editorial policies of Science, Nature and their ilk – PLoS Biology for basic life sciences and PLoS Medicine for the clinical world. We hired professional editors from others in the industry, built fancy editorial boards and had a suite of Nobel Prize winners singing our praises.

But prestige is a difficult thing to engineer. Colleagues, friends and even family members would stipulate all the flaws in the current system and praise what we were doing, but, when they had a high profile paper, would turn around and send it to the same old subscription journals. It was a very frustrating experience.

I’d like to say that I understood why they made these decisions. But I didn’t. I thought – and still think – they were just being cowardly. And when I suggested they were being chickens by sending papers to Science or Nature they would complain that they couldn’t do otherwise because their jobs – or their trainees’ jobs – were at stake.

I didn’t think they were right. But the truth is that I didn’t have a lot of evidence to show them. At the same time we were starting PLOS, I was starting my own lab in Berkeley. Senior colleagues, knowing about my extracurricular activities, took me aside and warned that I would never get grants or tenure if I didn’t publish my work in the old guard high profile journals, and that I would ruin the careers of my trainees if I put my principles over practical realities.

I didn’t want to believe them. I wanted to believe if I did good work people would notice. I wanted to believe that success in science did not require capitulating to stupid, destructive traditions. I also knew I’d look like a total hypocrite if I failed to live up to my own exhortations.

So I made a commitment that every paper from my lab would go to journals that made them freely available from day one. And, over 13 years, I have stuck completely to my pledge. And you know what? The sky didn’t fall. I got grants. Then I got a tenure track job at Berkeley (I had started out at the National Lab up the hill). Then I got tenure. And then I was named an investigator with the Howard Hughes Medical Institute – a coveted award that now funds most of my research. And the people in my lab have not suffered either. My graduate students have received fellowships and gone on to land plum postdoctoral positions – except for the one who went to Facebook and is now a millionaire – and my postdoctoral fellows have all gotten faculty positions at good schools.

But despite this, most of my colleagues still fall back on “I need to publish in Journal Blah in order to get” whatever goal they are seeking at the time.

Fortunately, publishing decisions are not entirely in the hands of individual investigators. In 2008, under pressure from Congress to provide taxpayers access to work they fund, the National Institutes of Health – which funds about $30 billion of research every year – implemented a public access policy requiring that grantees make their work available through the National Library of Medicine.

This was an important landmark in the history of the access movement, as, for the first time, a major funding agency was making it a condition of receiving a grant that authors make their works available to the public. And the policy has been successful – 80% of NIH funded works published in 2011 are now freely available online – there’s nothing like the threat of losing funding to get people to do the right thing.

Unfortunately, under heavy lobbying pressure from publishers, the NIH policy allows for up to a year’s delay between publication and the provision of free access. While better than nothing, delayed access to the literature no more provides the public with access to the latest advances in biomedical research than handing out year-old copies of the New York Times keeps everyone up to date on the latest world events.

And, again under pressure from Congress, earlier this year the Obama administration weighed in on the matter, directing other federal agencies that fund large amounts of research to develop their own public access policies. The White House said all the right things about the importance of public access – and got a lot of positive press. But unfortunately, if predictably, their actions did not match their words. The new White House policy all but established the one-year delay used by the NIH as the law of the land – explicitly citing the need to sustain subscription-based publishing businesses as its excuse. Another huge missed opportunity in an area that has had tons of them.

But at least the White House did something. The other major players in this arena – the universities that employ the vast majority of academic scientists, and whose policies shape the course of their careers – have been completely silent. As with funding agencies, universities could hasten the transition to full and immediate open access by making it a condition of employment. Few people would turn down a job because it came with such a requirement.

But, while their own libraries sound the alarm about rising subscription costs and diminishing access, university administrators across the country have done next to nothing to promote changes in scientific publishing that would not only save them money, but make the research done on their campuses more efficient and effective. This is an astonishing abdication of their public mission and responsibility as stewards of scholarship.

However, despite these failings from scientists, funders and universities, the facts on the ground are changing rapidly. In 2007, PLOS launched a new journal – PLOS ONE – that not only provided open access to all of its content, but also dispensed with the notion – central to journal publishing since the 17th century – that journals should select only papers of the highest level of interest to their readers.

Rejecting papers that are technically sound is a relic of the age of printed journals, whose costs scaled with the number of papers they published and whose table of contents served as the primary way people found articles of interest.

But we are no longer limited by the number of articles we can publish, and people primarily find papers of interest by searching, not browsing. So PLOS ONE asks its reviewers only to assess whether the paper is a legitimate work of science. If it is, it is published. The process is relatively simple – no need to ping-pong from one journal to another in order to find the highest impact home.

This idea evidently appeals to the scientific community, because PLOS ONE has grown rapidly. It will publish in excess of 25,000 articles this year, and though only five years old, it is now the biggest biomedical research journal in the world. And it publishes great science – PLOS ONE articles are routinely talked about by both science journalists and the popular press.

And PLOS ONE has not just been a success as a journal, but also as a business, turning a profit that has not only put PLOS on solid financial footing, but also attracted the attention of commercial and non-profit publishers worldwide. In the past year several PLOS ONE clones have been launched, and there is broad consensus that this sector will grow and ultimately dominate scientific publishing.

But the battle is by no means won. Open access collectively represents only around 10% of biomedical publishing, has less penetration in other sciences, and is almost non-existent in the humanities. And most scientists still send their best papers to “high impact” subscription-based journals.

But as frustratingly slow as progress has been, I believe we are close to a tipping point with most members of the scientific community believing that open access is the future, and a growing and diverse set of publishers engaged in open access businesses.

But being able to access papers is just the beginning. We can now finally start to actually take advantage of computers and the Internet to not just make scientific publishing open, but to make it better.

If the 17th century founders of the Proceedings of the Royal Society were to read a contemporary scientific journal, they would find it disturbingly familiar. Even though we can now read papers on a portable computer while flying 35,000 feet over the Pacific Ocean, the only thing that distinguishes a contemporary paper from a 17th century one is the occasional color photograph.

The multilayered, hyperlinked structure of the Web was made for scientific communication, and yet papers today are largely distributed and read as static PDFs – another relic of the days of printed papers. We are working with the community to enable the “paper of the future”, which embeds not only things like movies, but also access to raw data and the tools used to analyze them.

There is also no need for papers to be static works fixed in a single form at their time of publication. Good data and good ideas in science are constantly evolving, and scientific papers should evolve over time as new data, analyses, and ideas emerge – whether they support or refute the original assertions.

But the biggest target of our efforts is peer review. Peer review is the closest thing science has to a religious doctrine. Scientists believe that peer review is essential to maintaining the integrity of the scientific literature, that it is the only way to filter through millions of papers to identify those one should read, and that we need peer reviewed journals to evaluate the contribution of individual scientists for hiring, funding and promotion.

Attempts to upend, reform or even tinker with peer review are regarded as apostasies. But the truth is that peer review as practiced in the 21st century poisons science. It is conservative, cumbersome, capricious and intrusive. It encourages group think, slows down the communication of new ideas and discoveries, and has ceded undue power to a handful of journals who stand as gatekeepers to success in the field.

Each round of reviews takes a month or more, and it is rare for papers to be accepted without demands for additional experiments, analyses and rewrites, which take months or sometimes years to accomplish.

And this time matters. The scientific enterprise is all about building on the results of others – but this can’t be done if the results of others are languishing in peer review. There can be little doubt that this delay slows down scientific progress and often costs lives.

This might be worth it if these delays made the ultimate product better. But that is not the case. While I am sure that some egregious papers are prevented from being published by peer review, the reality is that, with 10,000 or so journals out there, most papers ultimately get published, and the peer-reviewed literature is filled with all manner of crappy papers. Even the supposedly more rigorous standards of the elite journals fail to prevent flawed papers from appearing in their pages.

So, while it is a nice idea to imagine peer review as defender of scientific integrity – it isn’t. Flaws in a paper are far more often uncovered after the paper is published than in peer review. And yet, because we have a system that places so much emphasis on where a paper is published, we have no effective way to annotate previously published papers that turn out to be wrong.

And as for classification, does anyone really think that assigning every paper to one of 10,000 journals, organized in a loose and chaotic hierarchy of topics and importance, is the best way to help people browse the literature? This is a pure relic of a bygone era – an artifact of the historical accident that Gutenberg invented the printing press before Al Gore invented the Internet.

So what would be better? The outlines of an ideal system are simple to spell out. There should be no journal hierarchy, only broad journals like PLOS ONE. When papers are submitted to these journals, they should be immediately made available for free online – clearly marked to indicate that they have not yet been reviewed, but there to be used by people in the field capable of deciding on their own if the work is sound and important.

The journal would then organize a different type of peer review, in which experts in the field were asked not only whether the paper is technically sound – as we currently do at PLOS ONE – but also which kinds of scientists would find the paper interesting, and how important it should be to them. This assessment would then be attached to the paper – there for everyone to see and use as they saw fit, whether to find papers, assess the contributions of the authors, or whatever.
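To make this concrete, an assessment of this kind could be as simple as a small machine-readable record attached to each paper. The sketch below is purely hypothetical – the field names, rating scale, and filtering logic are illustrations of the idea, not any actual PLOS system:

```python
# Hypothetical sketch of a structured review assessment attached to a paper.
# All field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    technically_sound: bool
    audiences: list                                  # fields that should see the paper
    importance: dict = field(default_factory=dict)   # audience -> 1-5 rating
    reviewer: str = "anonymous"

# Two hypothetical reviews of a single paper
reviews = [
    Assessment(True, ["gene regulation"], {"gene regulation": 4}),
    Assessment(True, ["evo-devo", "genomics"], {"evo-devo": 5, "genomics": 3}),
]

# A reader could then filter the literature by audience and importance,
# rather than by journal title:
relevant = [a for a in reviews
            if "genomics" in a.audiences and a.importance.get("genomics", 0) >= 3]
```

The point of the design is that the assessment travels with the paper as searchable data, so anyone can build their own filters on top of it.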

This simple process would capture all of the value in the current peer review system while shedding most of its flaws. It would get papers out fast to people most able to build on them, but would provide everyone else with a way to know which papers are relevant to them and a guide to their quality and import.

By replacing the current journal hierarchy with a structured classification of research areas and levels of interest, this new system would undermine the generally poisonous “winner take all” attitude associated with publication in Science, Nature and their ilk. And by devaluing assessment made at the time of publication, this new system would facilitate the development of a robust system of post publication peer review in which individuals or groups could submit their own assessments of papers at any point after they were published. Papers could be updated to respond to comments or to new information, and we would finally make the published scientific literature as dynamic as science itself. And it would all be there for anyone, anywhere to not just access, but participate in.

There is nothing technically challenging about building such a system, and it makes so much sense that it can’t help but happen. But, of course, we’ve been there before. Science is oddly conservative, and there is enough money and power at stake to ensure that people will try to stop this from happening. So if you care about making the scientific literature open and accessible, I urge you to do whatever you can to make it happen. If you’re a scientist, get with the program – there are so many open access options around today, you no longer have any excuse. And try to stop looking at journal titles when you evaluate people and their work. It’s a poisonous process that has to stop.

If you’re not a scientist, but are interested in this cause, you can do all the normal things – write your members of Congress and the like. But I also encourage you to find scientists whose work you find interesting, but cannot access, and send them an email. Or better yet, give them a call. Let them know you want to – but cannot – read their work. And remind them that, in all likelihood, you paid for it.

If we all do this, then maybe the next time someone like Aaron Swartz comes along and tries to access every scientific paper ever written, instead of finding the FBI, they’ll find a giant green button that says “Download Now”.

Posted in open access, PLoS, science | Comments closed

The Immortal Consenting of Henrietta Lacks

Rebecca Skloot has an essay in today’s New York Times discussing the recent publication of the genome sequence of a widely used human cell line. Skloot, as most of you already know, wrote a book about the history of this cell line – known as HeLa, for Henrietta Lacks, the woman from whom the cells were obtained.

In her book, Skloot describes how the cells were taken from Lacks, who was dying of aggressive cervical cancer, without her knowledge or consent, and how the family was kept in the dark about the cells for decades, even as researchers showed up to take samples from Lacks’ descendants. Skloot has done a wonderful job of not only gaining the Lacks family’s support for her book, but of engaging them with the legacy of Henrietta’s unwitting contribution to science and medicine.

So it makes sense that Skloot would take umbrage at the release of the complete sequence of HeLa cells, published without the consent or knowledge of the Lacks family. I can understand how this happened – HeLa cells are so ubiquitous in the lab, it’s easy to forget that they come from a real person (although it’s hard to believe the authors of the paper hadn’t read, or at least heard of, Skloot’s book). But it’s really not acceptable, something the authors now realize and are trying to correct.

Unfortunately, Skloot’s NYT essay on this topic was muddled – conflating two distinct issues – one very general, the other specific to HeLa cells – that have to be dealt with separately.

The first issue is one of consent from Henrietta Lacks to sequence and publish the genome of cells derived from her body. As Skloot made very clear in her book, no such consent was obtained. And, since Lacks died a long time ago, it cannot be obtained. Lots of people, including Skloot, point out that consent was neither required nor generally obtained in the 1950s when Lacks was sick. And knowing that Lacks was a poor African-American woman, it’s hard not to see more sinister overtones in her treatment.

To me, there really is no moral question here. We should not be using HeLa cells because no consent was obtained to take them. And I am very uncomfortable with the general idea that heirs/descendants should be allowed to retroactively consent for a dead relative. Nothing that can happen now or in the future can make up for the lack of real consent. But whether they should be used or not, these cells are being used all over the planet. Given that this is unlikely to change, there’s really no choice but to de facto give the Lacks family a kind of proxy consenting power to act on Henrietta’s behalf.

However Skloot’s piece glides from the issue of how to retroactively get Henrietta’s permission to experiment with and publish about her cells to the seemingly related issue of whether publication of the HeLa cell genome is an invasion of the privacy of Lacks’ living relatives. Skloot repeatedly raises the issue of all the things we can learn about an individual and their relatives by sequencing their DNA, and asks whether family members should have some kind of veto power over the publishing of a relative’s genome.

But this is very different from the question of how to obtain consent from an individual who is no longer alive. To see why, let’s stipulate that Henrietta Lacks had consented to all these studies – had, in sound mind, given permission for the doctors to take her cells, establish cultures, send them around the world to be used for any purpose, and to freely publish the results of any studies on these cells. Would you still require the authors of the paper to obtain consent from Lacks’ family?

Skloot clearly thinks the answer is yes – positing that publishing any individual’s genome sequence is intrinsically an invasion of the privacy of their relatives – whether or not the sequenced individual consented to the process. Hence this quote:

“That is private family information,” said Jeri Lacks-Whye, Lacks’s granddaughter. “It shouldn’t have been published without our consent.”

This has nothing to do with the history of Henrietta Lacks and HeLa cells. It is an active assertion about familial privacy rights that would – if you accept it – be just as true if the paper in question had described the sequencing of anyone else’s genome. Why weren’t the same issues raised when the genome belonged not to Henrietta Lacks, but to Jim Watson or Craig Venter?

I find the way Skloot’s NYT piece moves back and forth between the historical transgressions against Henrietta Lacks and the contemporary threat to her relatives’ privacy incredibly misleading. I doubt this was intentional – rather I think it reflects muddled thinking on her part about these issues. But either way, by juxtaposing the entirely justifiable empowering of the Lacks family to grant individual consent on Henrietta’s behalf with the desire of the same family to protect its genetic privacy, Skloot is implying that these are one and the same – that we should give ANY family the right to veto the publication of a relative’s genome.

But this is a logical fallacy. We probably all agree that the Lacks family should have been consulted about the publication of the HeLa genome because Henrietta herself never gave such permission. And some of you (not me) may think that a family’s right to genetic privacy trumps the right of an individual to publish their genome. But the former does not, in any way, imply the latter, and I think Skloot did the conversation around these issues a huge disservice by conflating them in such a prominent way.

Posted in bioethics, publishing, race, science | Comments closed

Another paper ready for open review: comparative ChIP-seq and RNA-seq in Drosophila embryos

As I wrote about for our last paper, I hate the way scientific publishing works today, especially the insane delays (average is about 9 months) between when a lab is ready to share its work and when the work is actually available. So, from now on we are going to post all of our papers online when we feel they’re ready to share – before they go to a journal. We’ll then solicit comments from our colleagues and use them to improve the work prior to formal publication. Physicists and mathematicians have been doing this for decades, as have an increasing number of biologists. It’s time for this to become standard practice.

Ground rules: I will not filter comments except to remove obvious spam. You are welcome to post comments under your name or under a pseudonym – I will not reveal anyone’s identity – but I urge you to use your real name as I think we should have fully open peer review in science. The original paper and comments will remain available here as a record of the review process.

Paris M et al. (2013). Gene expression in early Drosophila embryos is highly conserved despite extensive divergence of transcription factor binding. Full manuscript. Text only. Figures only.

The paper is now available at arXiv. Please use the arXiv version for formal citations.

This paper is the result of several years of work from Mathilde Paris, a very talented postdoctoral fellow in my lab. Mathilde was interested in looking at the evolution of transcription factor binding in highly diverged Drosophila species and the effect of changes in transcription factor binding on gene expression. So she carried out a series of chromatin immunoprecipitation experiments using antibodies raised against four D. melanogaster proteins involved in early anterior-posterior (head -> tail) patterning. She carried out ChIP-seq experiments in D. melanogaster as well as D. pseudoobscura (diverged ~30mya) and D. virilis (diverged ~40mya). There were a lot of technical challenges in getting these experiments to work to our satisfaction (described in the methods section of the paper), but eventually Mathilde had a dataset in which we had sufficient confidence to analyze in detail.

The most striking observation about the ChIP data is just how different the binding patterns of these factors are in these different species, which, for all intents and purposes, undergo identical early developmental processes. We can identify two clear drivers of this divergence: the gain and loss of binding sites for the factors themselves (for background on binding site turnover see this 2008 paper from our lab), and the gain and loss of binding sites for the early embryonic master regulator Zelda (see this 2011 paper from our lab for more information about Zelda). However, these two effects did not completely explain the observed divergence, which may also be influenced by environmental factors (the species do not all develop at the same temperatures or rates) and by developmental, biochemical and experimental noise.

In contrast to the divergence of transcription factor binding, gene expression in stage-matched embryos is highly conserved. And one of the central issues discussed in the paper is why there is this discordance between transcription factor binding and gene expression divergence.

As always, we await your comments, and will respond as quickly as we can.

Posted in EisenLab, open access | Comments closed

No celebrations here: why the White House public access policy is bad for open access

I am taking a lot of flak from my friends in the open access community about my sour response to the White House’s statement on public access to papers arising from federally-funded scientific research.

While virtually everyone in the open access movement is calling for “celebration” of this “landmark” event, I see a huge missed opportunity that will ultimately be viewed as a major setback for open access. Since I seem to be the only person with this point of view, I feel I should explain why.

The statement was nominally triggered by a petition posted on the White House’s “We the People” page last May calling for greater access to the results of federally funded research, pointing to the successful NIH public access policy as a model for other agencies.

Under this new White House directive, all federal agencies with R&D budgets in excess of $100,000,000 will have to develop their own public access policies that will “ensure the public can read, download, and analyze in digital form” published works arising from federally-funded research within 12 months of publication.

There is no doubt this is a good thing. Once the new policies are implemented, everyone will have access to the full range of outputs of federally funded research. That is better than what is available today. So why aren’t I dancing in the streets?

When the NIH announced its public access policy in 2008, this truly was a landmark event. The biggest funder of non-classified scientific research in the world (the NIH research budget is around $30b/year) was acting to ensure public access to the entire body of its funded works. The policy was imperfect – it allowed a 12-month embargo, and had no provisions for reuse of the works. But this was big news – the instantiation of a new right: the right of the public to access the results of taxpayer-funded research.

And the NIH policy has been very successful. The research community has accepted the mandate with nary a hitch – over 80% of NIH funded works end up in PubMed Central, the NIH’s open archive of scientific journal articles – and the database is heavily accessed by both researchers and the public.

It should have been a complete no-brainer for other federal agencies to follow the NIH’s pioneering actions. But sadly, none did. And given the remarkable progress in open access that has happened in the intervening five years, for the White House to merely extend the NIH policy to other agencies is a lame, retrograde action.

And it’s even worse than that. When the NIH policy was announced, people like me who believe that publicly funded works should be immediately freely available looked at the 12 month embargo period as a kind of opening bid – a concession to publishers that was necessary to get the policy off the ground, but which would ultimately disappear.

But now the White House has taken the 12-month embargo period and reified it. Year-long delays are no longer an experiment by one agency. They are, in effect, the law of the land.

And why, after so clearly articulating the importance of public access in the beginning of their policy announcement, did the White House ultimately sell out the public? Here is what they say:

The Administration also recognizes that publishers provide valuable services, including the coordination of peer review, that are essential for ensuring the high quality and integrity of many scholarly publications. It is critical that these services continue to be made available.

The administration fell hook, line and sinker for the ridiculous argument put forth by publishers that the only way for researchers and the public to get the services they provide is to give them monopoly control over the articles for a year – the year when they are of greatest potential use.

Think about how absurd this is. Publishers, whose role should be to disseminate information as widely as possible, are now the only reason why the public will continue to not have access to research results their tax dollars paid for.

The White House chose this path even though there is now ample evidence that this concession is unnecessary. PLoS, BioMed Central and many other open access publishers have proven that publishers can create healthy businesses that provide all the services people value without ever restricting access to the papers they publish.

That the White House chose to ignore the rise of open access publishing and allow 12-month embargoes to persist shows that they care more about industries with well-paid lobbyists than they do about the public good. And if you have any doubt that the publishers got what they wanted out of this policy, you only have to read the response of the Association of American Publishers – an industry group that has long opposed any moves towards public access and has backed repeated efforts to repeal the NIH policy:

The Association of American Publishers supports the Policy on Access to Research Outputs, released today by the White House Office of Science and Technology Policy (OSTP), which outlines a reasonable, balanced resolution of issues around public access to research funded by federal agencies.

Clearly the publishers got what they wanted out of the White House. And do you really think it’s going to stop there? They have established their ability to corrupt policy making, and will continue to exploit it. I predict that as these policies are implemented in different agencies, they will be heavily tilted towards what the publishers want. There will be no central archives – just links out to publishers’ websites. And there will be pressure to increase – not decrease – embargo periods. The publishers are already laying the groundwork for this in their statement:

The key to the success of the policy, however, depends on how the agencies use their flexibility to avoid negative impacts to the successful system of scholarly communication that advances science, technology and innovation.

It’s sad. Had the White House actually looked at the landscape of scientific publishing with an eye towards maximizing public access, they would have realized that embargoes are completely unnecessary. They could easily have come out with a policy that said:

From this point onward, the federal government will operate with a simple principle. Whenever the taxpayers of the United States sponsor scientific research, the results of this research will be immediately available to everyone.

Instead, once again, our government let us down, allowing a dying, useless industry to dictate policy that serves to line their pockets at the expense of the public good. And so I ask my friends in the open access movement, and everyone who cares about ensuring that the scientific research is as accessible and useful as it can be, is this really something you want to be celebrating?

Posted in open access, politics, science | Comments closed

Please review our new paper: Sequencing mRNA from cryo-sliced Drosophila embryos to determine genome-wide spatial patterns of gene expression

It’s no secret to people who read this blog that I hate the way scientific publishing works today. Most of my efforts in this domain have focused on removing barriers to the access and reuse of published papers. But there are other things that are broken with the way scientists communicate with each other, and chief amongst them is pre-publication peer review. I’ve written about this before, and won’t rehash the arguments here, save to say that I think we should publish first, and then review. But one could argue that I haven’t really practiced what I preach, as all of my lab’s papers have gone through peer review before they were published.

No more. From now on we are going to post all of our papers online when we feel they’re ready to share – before they go to a journal. We’ll then solicit comments from our colleagues and use them to improve the work prior to formal publication. Physicists and mathematicians have been doing this for decades, as have an increasing number of biologists. It’s time for this to become standard practice.

Some ground rules. I will not filter comments except to remove obvious spam. You are welcome to post comments under your name or under a pseudonym – I will not reveal anyone’s identity – but I urge you to use your real name as I think we should have fully open peer review in science.

OK. Now for the paper, which is posted on arXiv and can be linked to and cited there. We also have a copy here, in case you’re having trouble with the figures on arXiv.

Peter A. Combs and Michael B. Eisen (2013). Sequencing mRNA from cryo-sliced Drosophila embryos to determine genome-wide spatial patterns of gene expression. 

Several years ago a postdoc in my lab, Susan Lott (now at UC Davis), developed methods to sequence the RNAs from single Drosophila embryos. She was interested in looking at expression differences between males and females in early embryogenesis, and published a beautiful paper on that topic.

Although we were initially worried that we wouldn’t be able to get enough RNA from single embryos to get reliable sequencing results, it turned out we got more than enough. Each embryo yielded around 100ng of total RNA, and we would end up loading only ~10% of the sample onto the sequencer. So it occurred to us that maybe we could work with material from pieces of individual embryos and thereby get spatial expression information on a genomic scale in a single quick experiment – an alternative to highly informative, but slow, imaging-based methods.

I recruited a new biophysics student, Peter Combs, to work on slicing embryos with a microtome along the anterior-posterior axis and sequencing each of the sections to identify genes with patterned expression along the A-P axis. In typical PI fashion, I figured this would take a few weeks, but it ended up taking over a year to get right.

The major challenge was that, while a tenth of an embryo contains more than enough RNA to analyze by mRNA-seq, it turned out to be very difficult to shepherd that RNA successfully from a single cryosection to the sequencer. Peter was routinely failing to recover RNA and make libraries from these samples using methods that worked great for whole embryos. While there are various protocols out there claiming to analyze RNA from single cells, we were reluctant to use these amplification-based strategies.

The typical way people deal with loss of small quantities of nucleic acids during experimental manipulation is to add carrier RNA or DNA – something like tRNA or salmon sperm DNA. We didn’t want to do that, since we would just end up with tons of useless sequencing reads. So we came up with a different strategy – adding embryos from distantly related Drosophila species to each slice at an early stage in the process. This brought the total amount of RNA in each sample well above the threshold where our purification and library preparation worked robustly, and we could easily separate the D. melanogaster RNA we were interested in for this experiment from that of the “carrier” embryo. And we could avoid wasting sequencing reads by turning the carrier RNAs into an experiment of their own – in this case looking at expression variation between species.
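The separation step works because the two species’ genomes are diverged enough that most reads map cleanly to only one of them. A toy sketch of the idea (not the actual pipeline from the paper – a real analysis would use an aligner like Bowtie or BWA, and the genome strings and reads below are invented; exact substring matching stands in for alignment):

```python
# Toy illustration of partitioning reads between a sample species and a
# "carrier" species by comparing each read against both genomes.

def classify_read(read, mel_genome, carrier_genome):
    """Assign a read to 'mel', 'carrier', or 'ambiguous'."""
    in_mel = read in mel_genome
    in_carrier = read in carrier_genome
    if in_mel and not in_carrier:
        return "mel"
    if in_carrier and not in_mel:
        return "carrier"
    return "ambiguous"   # matches both or neither; discard or handle separately

def partition_reads(reads, mel_genome, carrier_genome):
    bins = {"mel": [], "carrier": [], "ambiguous": []}
    for read in reads:
        bins[classify_read(read, mel_genome, carrier_genome)].append(read)
    return bins

# Invented toy genomes and reads
mel_genome = "ACGTACGTTTGACCA"        # stand-in for D. melanogaster
carrier_genome = "TTTTGGGGCCCCAAAA"   # stand-in for the carrier species
bins = partition_reads(["ACGTAC", "GGGGCC", "CC"], mel_genome, carrier_genome)
```

The more diverged the carrier species, the smaller the ambiguous bin, which is presumably why distantly related Drosophila species were chosen.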

With this trick, the method now works great, and the paper is really just a description of the method and a demonstration that accurate expression patterns can be recovered from individual cryosectioned embryos. The resolution here is not that great – we used 6 slices of ~60um each per embryo. But we’ve started to make smaller sections, and a back-of-the-envelope calculation suggests we can, with available sample handling and sequencing techniques, make up to 100 slices per embryo. This would be more than enough to see stripes and other subtle patterns missed in the current dataset.

Our immediate goals are to do a developmental time course, compare patterns in male and female embryos, look at other species, and examine embryos from strains carrying various patterning defects. For those of you going to the fly meeting in DC in April, Peter’s talk will, I hope, include some of this new data.

Anyway, we would love comments on either the method or the manuscript.


Posted in EisenLab, gene regulation, My lab, open access, science | Comments closed

For patents, against open access: The sad state of university leadership

Quick. Name a leader of a major research university who has taken a courageous stand on any important issue in the last decade. I know they’re out there. They must be. But I can’t think of one.

Instead, I’m left dumbfounded reading this amicus brief filed in a case – Bowman v. Monsanto – about to be heard by the US Supreme Court.

The case, which pits Monsanto against a farmer who planted soybeans containing the company’s “Roundup Ready” technology without paying license fees, boils down to a question of how much control patent holders retain over their invention after it has been sold.

I am very interested in the issues in this case – I strongly support the development and use of genetically modified crops, but also believe that our patent laws are completely out of whack. So a line in the NYT article on the case noting that universities had filed a brief on behalf of Monsanto caught my eye – all the more so because my own University of California had signed on.

The basic argument put forth by the universities is that ruling in favor of the farmer would “greatly diminish, and add uncertainty to, the value of patents covering artificial, progenetive technologies” and would “devalue the extensive benefits achieved by the Bayh-Dole Act”.

Why are most of the most prominent state universities in the US arguing in front of the Supreme Court in favor of stronger patent laws? Why do they have any interest in who wins the case? The answer is that universities have become major producers and wielders of intellectual property – profiting, in many cases extensively, from patents taken out on inventions made by their faculty.

I have made no secret of my utter disdain for this process. We would all be better off if there were no patents on inventions produced at state universities and/or by publicly funded scientists. Universities don’t support strengthening patent laws because they believe it’s the right thing to do in some abstract sense, they support strengthening patent laws because it makes them money. And thus university administrators – when faced with a choice between the public good and their balance sheet – choose the money.

Meanwhile, as their lawyers were off siding with major corporations against a small-time farmer, universities have chosen to be completely silent on another major issue pitting corporate greed against the public good: providing free access to papers describing the results of publicly funded research.

A bill was introduced in Congress that would require scientists receiving money from the federal government to make copies of their published work available to the public. While many people from universities across the country have spoken up in favor of this bill and its predecessors, the University of California has never voiced its support for this action, and virtually all other universities have been equally silent.

In failing to support this legislation, universities are not just being passive bystanders. They are major players in this issue, and their silence is widely interpreted as ambivalence or outright opposition, and has helped to ensure that previous versions of this bill never made it out of committee.

So we have major public universities in America that see fit to use their resources to defend stronger patent laws, but choose to let legislation that would provide the public free access to knowledge die in committee. There is only one word to describe this: pathetic.

Posted in GMO, intellectual property, open access, politics | Comments closed

The Association of American Publishers are a bunch of complete and total fu*kheads

It didn’t take long following the introduction of the Fair Access to Science and Technology Research Act of 2013 (FASTR) for Dr. Evil The Association of American Publishers to respond.

As if trying to outdo themselves, this latest anti-open access screed contains more misleading statements and outright lies than their previous efforts to undermine public access legislation.

Here is the text with my comments in red.

AAP STATEMENT ON FASTR ACT

Thursday, 14 February 2013 | Andi Sporkin

“Different Name, Same Boondoggle”

A boondoggle is, according to Wikipedia, a “project that is considered a useless waste of both time and money, yet is often continued due to extraneous policy motivations.” If there is any aspect of science today that should be considered a boondoggle, it is the existence of subscription-based publishers, who steal receive billions of dollars in public money, much of which they pocket as profit, while failing to provide access to the material they publish to the taxpayers who funded the work and to many scientists, students and teachers worldwide.

Washington, DC; February 14, 2013 — Calling it “different name, same boondoggle,” the Association of American Publishers said today that the Fair Access to Science and Technology Research (FASTR) Act is unnecessary and a waste of federal resources.

The bill revives the majority of the terms set out in the Federal Research Public Access Act (FRPAA), which was introduced without further action in each of the last three Congresses. It would require federal agencies to undertake extensive, open-ended work already being performed successfully by the private sector.

This is a bald-faced lie. First, the purpose of the bill is explicitly to provide access to federally funded research to all Americans. This is something that publishers are, inarguably, NOT doing today. If the private sector were actually providing this service, then the bill would be superfluous.

It would add significant, unspecified, ongoing costs to those agencies’ budgets in the midst of ongoing federal deficit reduction efforts.

Again, this is completely laughable. The publishing industry is, by far, the biggest waste of money in science spending today. If we eliminated it entirely, we could save billions of dollars a year, while providing unlimited free access to the results of research. If publishers want to save taxpayers money, they should go out of business.

Finally, it would undermine publishers’ efforts to provide access to high-quality peer-review research publications in a sustainable way, while ignoring progress made by agencies collaborating with publishers to improve funding transparency.

The science publishing industry – with skyrocketing costs and ever decreasing services – is the textbook example of an unsustainable business. Ask any library across the country to tell you about the efforts of publishers to create a sustainable publishing model, and they will laugh heartily at this claim. And I have no idea what they’re talking about as far as funding transparency goes. 

“This bill would waste so much taxpayers’ money at a time of budgetary crisis, squander federal employees’ time with busywork and require the creation and maintenance of otherwise-unneeded technology,” said Allan Adler, General Counsel and Vice President, Government Affairs, AAP, “all the while ignoring the fact that its demands are already being performed successfully by the private sector.”

Sorry, Allan Adler, but you are completely full of shit. The publishing industry has had nearly two decades to respond to the opportunity created by the internet and make the scientific literature freely available to the public. They have failed. The waste of taxpayer money is continuing to funnel billions of dollars to these ungrateful and almost completely useless businesses. And, I almost fell out of my chair laughing when I read the thing about “busywork”. Anybody who has spent time interacting with scientific journals will know why.

AAP also noted:

The bill ignores crucial distinctions among federal agencies and scientific disciplines and would attempt to shoehorn every group into a one-size-fits-all mandate on publication methods and embargo periods.

It is not a one-size-fits-all mandate. It’s a simple statement. The taxpayers funded it, they get to read it. If the AAP has a better model for how to accomplish this, we’re all ears. But if they don’t, they should just shut up.

FASTR disregards what is being accomplished through public-private partnerships and agency collaborations such as the CrossRef “FundRef” pilot to standardize funding source information for scholarly publications.

Huh? What does this do to provide access to anyone? NOTHING. 

The bill would require agencies to undertake extensive new duties and reporting requirements while also requiring them to invest in new taxpayer-funded technology resources and systems. FASTR would demand that federal agencies’ staffs develop and implement processes to collect materials, create and permanently maintain redundant digital repositories — resources that are currently in place — and fulfill new government requirements for studies and analyses.

Adler added, “Such systems and protocols are already in place, functioning effectively. Researchers should not be required to duplicate what’s available to them and taxpayers shouldn’t be stuck footing the bill for it too.”

Again, this is a total misdirect. None of the systems publishers have in place accomplish the clear objective of this legislation – to provide the American public with access to the research they fund. If the AAP wants to devote its resources to doing this – great. But to pretend that they are doing it while criticizing government efforts to accomplish what the publishers have failed to do is disgraceful.

There are many publishers who are members of the AAP who, I suspect, do not agree with their repulsive stance on FASTR. It is time for these groups to speak up and repudiate the AAP’s stance.

Posted in open access, politics | Comments closed

Let’s make 2013 the year of legislative action on open access

Yesterday a bipartisan group of legislators – Rep. Doyle (D-PA), Rep. Lofgren (D-CA), Rep. Yoder (R-KS), Sen. Wyden (D-OR) and Sen. Cornyn (R-TX) – introduced legislation that would require federal agencies that fund scientific and medical research to make the works they fund available to the public. This bill – known as the Fair Access to Science and Technology Research Act of 2013, or FASTR – is a better version of legislation introduced in previous Congresses.

FASTR shortens the acceptable delay from 12 months to 6 months (still 6 months longer than it should be, but headed in the right direction), and, very importantly, adds a requirement that the works be available for text mining and other forms of reuse. It’s not perfect, but it’s very good, and passage of this bill would be a significant milestone in the push for public access to the results of federally funded research.

Previous versions of this bill have gone nowhere, but now is the time. Supporters of open access in the US should contact their representatives in Washington and urge them to sign on as cosponsors of this bill and push for it to reach the House and Senate floor. And every month we should renew this pressure – I hereby declare the first Friday of every month #FASTRFriday (which we will celebrate today for February). Let’s keep the pressure on Congress and see this one through.

Public access legislation is also in the works in Illinois, New York and California, and I will post updates when these bills are introduced.

Posted in Uncategorized | Comments closed