Billion Dollar Scam: Why you should play the lottery instead of going to H&R Block

H&R Block is running an aggressive campaign under the rubric “Get Your Billion Back” trying to convince taxpayers to come and have their taxes done by one of their “Tax Professionals”. Their pitch is that “this is how much money is left on the table when people do their own taxes”.

1 Billion Dollars

On their site and in TV ads they provide all sorts of information meant to wow you about just how much money $1 billion is.

It’s $500 on every seat in every professional football stadium in America!!!

It’s 869,565,217 Bags of Chips!!!

It’s a stack of money that reaches the Van Allen Belts.

It sure sounds like a lot of money. But you know what it really is? It’s bullshit.

The $1 billion they are referring to is “left on the table” by the 56,000,000 Americans who do their own tax returns. The math is simple. That’s an average of less than $20 back per return. Since it costs an average of $198 to have H&R Block do your taxes [1], that is a completely miserable return – a payout of roughly nine cents for every dollar spent. Or put another way, if everybody who did their own taxes went to H&R Block to have their taxes done, they would spend over $11 billion. That’s 9,641,739,130 bags of chips!!
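For anyone who wants to check the arithmetic, here is a back-of-the-envelope version in Python. All of the inputs are the rounded numbers quoted above (not official H&R Block figures), so treat the outputs as approximations:

```python
# Back-of-the-envelope check of the figures above (all inputs are the
# rounded numbers quoted in the post, not official H&R Block data).
filers = 56_000_000                   # Americans who do their own tax returns
left_on_table = 1_000_000_000         # the "billion" in the ad campaign
avg_fee = 198                         # average cost of an H&R Block return [1]

per_return = left_on_table / filers           # ~$18 "left on the table" per filer
total_fees = filers * avg_fee                 # ~$11.1 billion if everyone used H&R Block
return_per_dollar = per_return / avg_fee      # ~$0.09 back per dollar spent

chip_bag_price = 1_000_000_000 / 869_565_217  # implied price of a bag of chips, ~$1.15
print(per_return, total_fees, return_per_dollar, total_fees / chip_bag_price)
```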

This is just an insane financial proposition. I don’t know what fraction of people get more than $198 back, but it has to be pretty small. H&R Block says that 1 in 5 people get more money back than they would have if they’d done their taxes themselves. One fifth of 56 million filers sharing $1 billion works out to just under $90 each, so call it around $100 back. Even if you’re one of these “lucky” people, you still net negative on average. And the upside is capped on the high end by the amount you actually paid in taxes.

Now let’s compare that to everybody’s favorite “bad deal” – the lottery. A typical lottery in the US pays back around 60 cents for every dollar collected (this number varies depending on where you play and which game you play – some states are as low as 50 cents on the dollar, some as high as 70 [2]). But as bad a deal as this is, you are still doing more than six times better than if you go to H&R Block! Plus, there’s actually the prospect – albeit a small one – of a really big payoff.
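For the comparison itself, here is the same calculation in code, using the roughly nine-cents-per-dollar figure from above and the ~60-cent lottery payout cited in [2] (both are rough averages, not precise figures):

```python
# Rough comparison of the two "payout rates" (both figures are the
# approximate averages quoted above).
hr_block_return = 17.86 / 198   # ~$0.09 back per dollar spent on tax prep
lottery_return = 0.60           # ~$0.60 back per dollar spent on lottery tickets
print(lottery_return / hr_block_return)  # ~6.6 -- the lottery "returns" over six times more
```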

If all of those 56,000,000 tax filers used their $198 – or let’s make it $150 so they can buy tax prep software to do it at home – and bought Powerball tickets throughout the year (there are around 100 draws, so they’d buy 1 or 2 tickets per draw), that would be around 4.2 billion tickets. With odds of hitting the big jackpot at around 1 in 175 million, that means that 24 of these people would win the jackpot every year – at an average of around $140 million per jackpot (at which point they would actually need to see a tax professional). And another 800 people would win $1,000,000 prizes.
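Here is a sketch of that Powerball scenario. The $2 ticket price and the roughly 1-in-5-million odds of the $1,000,000 second prize are my assumptions (they are approximately the Powerball figures of the era); the 1-in-175-million jackpot odds and the $150-per-year budget come from the paragraph above:

```python
# Sketch of the Powerball scenario above. Ticket price and second-prize odds
# are assumptions; jackpot odds and the $150/year budget are from the post.
filers = 56_000_000
yearly_budget = 150             # dollars per filer spent on tickets instead of tax prep
ticket_price = 2                # assumed price per Powerball ticket
jackpot_odds = 175_000_000      # ~1 in 175 million per ticket
million_odds = 5_000_000        # ~1 in 5 million per ticket for the $1,000,000 prize (assumed)

tickets = filers * yearly_budget / ticket_price   # ~4.2 billion tickets per year
print(tickets / jackpot_odds)   # ~24 jackpot winners per year
print(tickets / million_odds)   # ~800-850 winners of the $1,000,000 prize
```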

None of this makes playing the lottery a good investment, of course – but it’s a hell of a lot better than falling for H&R Block’s scam, which cynically exploits most Americans’ innumeracy to take billions of dollars off your table.


Nathanael Johnson lets the anti-GMO movement off the hook

For the last six months, Nathanael Johnson has been writing about GMOs for the lefty environmental magazine Grist. The goal of his ultimately 26-part series was to try to bring some journalistic sanity to a topic that has gotten nasty in recent years. As Grist editor Scott Rosenberg is quoted on Dan Charles’ blog:

GMOs “were a unique problem for us,” says Rosenberg. On the one hand, most of Grist’s readers and supporters despise GMOs, seeing them as a tool of corporate agribusiness and chemical-dependent farming.

On the other hand, says Rosenberg, he’d been struck by the passion of people who defended this technology, especially scientists. It convinced him that the issue deserved a fresh look.

I’ve enjoyed reading the series. Johnson has investigated a wide range of issues related to GMOs with a generally empirical eye – trying to find data to help answer questions, while avoiding the polemicism that dominates discussions of the topic. Although I don’t think everything he has written is right, the series is a very useful starting point for people trying to wrap their heads around what can be a complex topic. He has clearly tried to delve deeply into every topic, and to not let dogma or propaganda from either side affect his conclusions.

Unfortunately, if the series has had an effect on what I presume is its target audience – the anti-GMO readers of Grist – it hasn’t shown up in online debates about GMOs. When I and others have pointed to Johnson’s series in response to outrageous statements from anti-GMO campaigners, he is dismissed as either a naive fool or just another Monsanto tool.

So I was surprised to read his concluding piece in the series, “What I learned from six months of GMO research: None of it matters”.

It’s a little awkward to admit this, after devoting so much time to this project, but I think Beth was right. The most astonishing thing about the vicious public brawl over GMOs is that the stakes are so low.

His basic point is that a lot of hot air and political energy is spent trying to decide between two alternative futures that aren’t all that different.

In the GMO-free future, farming still looks pretty much the same. Without insect-resistant crops, farmers spray more broad-spectrum insecticides, which do some collateral damage to surrounding food webs. Without herbicide-resistant crops, farmers spray less glyphosate, which slows the spread of glyphosate-resistant weeds and perhaps leads to healthier soil biota. Farmers also till their fields more often, which kills soil biota, and releases a lot more greenhouse gases. The banning of GMOs hasn’t led to a transformation of agriculture because GM seed was never a linchpin supporting the conventional food system: Farmers could always do fine without it. Eaters no longer worry about the small potential threat of GMO health hazards, but they are subject to new risks: GMOs were neither the first, nor have they been the last, agricultural innovation, and each of these technologies comes with its own potential hazards. Plant scientists will have increased their use of mutagenesis and epigenetic manipulation, perhaps. We no longer have biotech patents, but we still have traditional seed-breeding patents. Life goes on.

In the other alternate future, where the pro-GMO side wins, we see less insecticide, more herbicide, and less tillage. In this world, with regulations lifted, a surge of small business and garage-biotechnologists got to work on creative solutions for the problems of agriculture. Perhaps these tinkerers would come up with some fresh ideas to usher out the era of petroleum-dependent food. But the odds are low, I think, that any of their inventions would prove transformative. Genetic engineering is just one tool in the tinkerer’s belt. Newer tools are already available, and scientists continue to make breakthroughs with traditional breeding. So in this future, a few more genetically engineered plants and animals get their chance to compete. Some make the world a little better, while others cause unexpected problems. But the science has moved beyond basic genetic engineering, and most of the risks and benefits of progress are coming from other technologies. Life goes on.

In many ways he’s right. GMOs on the market today – and most of the ones planned – are about making agriculture more efficient and profitable for farmers and seed providers. This is not a trivial thing, but would global agriculture collapse without these GMOs? Of course not.

But Johnson makes several key assumptions in arguing that the stakes are low.

First, he says that “the odds are low, I think, that any of their inventions [GMOs] would prove transformative”. The obvious response is “How do you know?”. We rarely see transformative technologies coming. And remember that we are still in the very early days of genetic engineering of crops and animals. I suspect that you could go back and look at the early days of almost any new technology and convincingly downplay its transformative potential. That is not to say that genetic modification will definitely transform agriculture in a good way. Most new technologies ultimately fail to deliver. But the proper stance to take is to say that we just don’t know. What we do know is that there are many pressing and complex problems facing the future of agriculture. And, given that there is no compelling reason not to allow GM techniques to proceed, why take this tool out of the hands of scientists?

Second, Johnson says that “newer tools” are coming along that will render GMOs as we view them today somewhat less important. It’s not clear what these tools are – but I’ll assume that they are genome editing and things like marker-assisted breeding – both tools that allow for highly efficient creation or selection of traits without crossing the dreaded “species barrier”. But given the vitriolic opposition to GMOs that exists today, does Johnson think these new technologies are going to get a free pass? After all, these tools are being wielded by the companies (Monsanto, Syngenta, etc…) who anti-GMO campaigners see as the root of all evil. Does anyone really think that the future of these technologies is not linked to how the debate over today’s GMOs gets resolved?

And this, to me, is the big issue. Yes, as Johnson argues, the fate of the world does not rest on whether or not farmers can grow and sell glyphosate resistant soybeans. And it is also probably true that the world will neither be destroyed nor saved by transferring traits from one species to another. But that is not the right question to be asking.

Johnson tries to frame this question as a question about the role of technology:

People care about GMOs because they symbolize corporate control of the food system, or unsustainable agriculture, or the basic unhealthiness of our modern diet. On the other side, people care about GMOs because they symbolize the victory of human ingenuity over hunger and suffering, or the triumph of market forces, or the wonder of science. These larger stories are so compelling that they often obscure the ground truth.

But that isn’t it either. What is infuriating about the anti-GMO movement to me – and I suspect most other scientists – is not that people are disputing the wonder of science. And it’s not that people are somehow rejecting technology – because they’re not (the same people who hate GMOs are happy to tweet about it from their iPhones while using satellite wifi on a 787). Or that they’re attacking corporations, industrial agriculture or the free market economy. No. That’s not it.

What is most disturbing about the GMO debate – and why it matters – is that the anti-GMO movement at almost every turn rejects empiricism as a means of understanding the world and making decisions about it. The reason GMO opponents have largely rejected Johnson and his series is not solely because they disagree with his conclusion that GMOs are not an existential threat – it is because they reject his methods. They do not appear to believe that the kind of questions that Johnson asks – “Does insect resistant corn reduce the amount of insecticide used on farms?” – can even be asked. They already know the answer, and are completely unmoved by evidence.

The anti-GMO movement is an anti-empirical movement. It relies on the rejection of evidence about the risks and benefits of extant GMOs. And it relies on the rejection of an understanding of molecular biology. And its triumph would be a disaster not just because we would miss out on future innovations in agriculture – but because the rejection of GMOs would all but banish the last vestige of empiricism from political life. The world faces so many challenges now, and we can only solve them if we believe that the world can be understood by studying it, that we can think up and generate possible solutions to the challenges we face, and that we can make rational decisions about which ones to use or not to use. The anti-GMO movement rejects each piece of this – it rejects decades of research aimed at understanding molecular biology, it rejects technology as a way to solve problems and, more than anything, it rejects our ability to make rational assessments of risk and value.

So when Johnson – who has spent considerable time and energy defending the role of empiricism in the GMO debate – throws up his hands at the end and says “Meh – none of this really matters” – he is letting opponents of GMOs off the hook. He is giving them permission to continue demanding that voters and politicians reject reason and evidence and ban a technology based on ill-founded fears and bad evidence – to continue thinking that they are saving the planet while, in reality, they are bringing us closer to its destruction.


Accepting nominations for the “Pressies” recognizing the most overhyped science press releases of 2013

Scientists get all sorts of prizes this time of year. Some win a Lasker. Others a Nobel or a Breakthrough Prize. The really lucky get a commemorative mug from PNAS.

But the most important members of the scientific community get no recognition. I’m not talking about the graduate students and postdocs who actually do the work. No. I’m talking about the creative geniuses at university press offices who toil every week to turn the soon-to-be-published papers of their researchers – no matter how pedestrian or replicative – into heartbreaking works of staggering science.

To show our appreciation for everything they do, we have decided to create a new prize just for them – which will henceforth be known as “The Pressies” – and are now accepting nominations in the following categories:

  • Most Overhyped Science Story of the Year
  • Most Egregious Failure to Cite Earlier Work
  • Most Creative Use of the Term “Junk DNA” to Overhype a New Paper about non-coding DNA
  • Lifetime Achievement Award

Please leave your nominations in the comments. Include a link to the press release and a few sentences describing why you think it deserves a 2013 Pressy. Finalists will be announced in two weeks followed by one week of open voting. Awardees will be announced in January.

We haven’t decided what the winners will get, but our press office assures us that this year’s recipients will get the most important prize in the history of prizes – the first time anyone has ever received a prize like this. Henceforth the field of prizes will never be the same.


Beall’s Litter

Jeffrey Beall, a librarian at the University of Colorado Denver, has come to some fame in science publication circles for highlighting the growing number of “predatory” open access publishers and curating a list of them. His work has provided a useful service to people seeking to navigate the sometimes confusing array of new journals – many legitimate, many scammers – that have popped up in the last few years.

Unfortunately, as he has gained some degree of notoriety, it turns out he isn’t just trying to identify bad open access publishers – he is actively trying to discredit open access publishing in general. There were signs of this before, but any lingering doubt that Beall is a credible contributor to the discourse on science publishing was erased with an article he published last week. The piece is so ill-informed and angry that I can’t really describe it. So I’m just going to reproduce his article here (it was, ironically, published in an open access journal with a Creative Commons license allowing me to do so), along with my comments.


The Open-Access Movement is Not Really about Open Access

Jeffrey Beall

Auraria Library, University of Colorado Denver, Denver, Colorado, USA, jeffrey.beall@ucdenver.edu, http://scholarlyoa.com

Abstract

While the open-access (OA) movement purports to be about making scholarly content open-access, its true motives are much different. The OA movement is an anti-corporatist movement that wants to deny the freedom of the press to companies it disagrees with.

It is rather amusing to hear open access described as “anti-corporatist” seeing as the primary push for open access has come from corporations such as PLOS and BioMed Central, a for-profit company recently purchased by one of the world’s largest publishing houses.

The movement is also actively imposing onerous mandates on researchers, mandates that restrict individual freedom. To boost the open-access movement, its leaders sacrifice the academic futures of young scholars and those from developing countries, pressuring them to publish in lower-quality open-access journals. The open-access movement has fostered the creation of numerous predatory publishers and standalone journals, increasing the amount of research misconduct in scholarly publications and the amount of pseudo-science that is published as if it were authentic science.

Introduction

If you ask most open-access (OA) advocates about scholarly publishing, they will tell you that we are in a crisis situation. Greedy publishers have ruined scholarly communication, they’ll claim, placing work they obtained for free behind expensive paywalls, locking up research that the world needs to progress.

Yes. We will say that. Because it is completely, and unambiguously true.

The OA zealots will explain how publishers exploit scholars, profiting from the research, manuscripts, and peer review that they provide for free to the publishers, who then turn around and sell this research back to academic libraries in the form of journal subscriptions.

Again. Completely true.

They will also tell you that Elsevier, the worst of the worst among publishers, actually created bogus journals to help promote a large pharmaceutical company’s products. Imagine the horror. Because of this, we can never trust a subscription publisher again. Ever.

Elsevier did do this. But this has never been part of the argument for open access. 

Moreover, the advent of the Internet means that we really don’t need publishers anymore anyway. We can self-publish our work or do it cooperatively. Libraries can be the new publishers. All we have to do is upload our research to the Internet and our research will be published, and the big publishers will wither up and die freeing up academic library budgets and creating a just and perfect system of scholarly publishing.

Yup. That’s pretty much it. Of course it’s not that simple. Nobody thinks this new system will just happen organically. I and many others have proposed systems to fund publishing and manage peer review without subscription-based journals. 

The story those promoting OA tell is simple and convincing. Unfortunately, the story is incomplete, negligent, and full of bunk. I’m an academic crime fighter (Bohannon 2013b). I am here to set the record straight.

Phew. I’m glad someone’s on the case. 

The logic behind the open-access movement is so obvious, simple, and convincing that no one could disagree with it, except that OA advocates don’t tell the whole story. Open access will grant free access to research to everyone, including research-starved people in the Global South who have never read a scholarly article before. How could anyone oppose that? It will also allow everyone who has ever had the frustration of hitting a paywall when seeking a research article to access virtually everything for free, or so they claim.

What the Open-Access Movement is Really About

The open-access movement is really about anti-corporatism. OA advocates want to make collective everything and eliminate private business, except for small businesses owned by the disadvantaged.

I don’t even know what to say about this. Forget about the self-delusion that leads Beall to think he can intuit what my and other OA advocates’ intentions have been. It’s just a factually ludicrous statement. The OA movement was born, and continues to be driven, by corporations – most of them for-profit corporations – who are seeking to build businesses that better serve their customers. Does Beall think Google is anti-corporatist and anti-profit because they are trying to drive small newspapers out of business?

They don’t like the idea of profit, even though many have a large portfolio of mutual funds in their retirement accounts that invest in for-profit companies.

So not only are we anti-corporatist, we’re bad investors too? 

Salaries of academics in the United States have increased dramatically in the past two decades, especially among top professors and university administrators. OA advocates don’t have a problem with this, and from their high-salaried comfortable positions they demand that for-profit, scholarly journal publishers not be involved in scholarly publishing and devise ways (such as green open-access) to defeat and eliminate them.

No. I and other open access proponents see a publishing system that is expensive, slow and ineffective and that needlessly denies access to countless people in the US and elsewhere who would benefit directly and indirectly from access to the scholarly literature. Yes, we oppose publishers who employ the outdated subscription model. But not because they are corporations. It’s because what they are doing is bad for science and bad for the public. Disagree with that assessment if you will, but please spare me the anti-corporatist garbage. 

The open-access movement is a negative movement rather than a positive one. It is more a movement against something than it is a movement for something. Some will respond that the movement is not against anything; it is just for open access. But a close analysis of the discourse of the OA advocates reveals that the real goal of the open access movement is to kill off the for-profit publishers and make scholarly publishing a cooperative and socialistic enterprise. It’s a negative movement.

From day 1, open access has been about a very specific alternative to the existing subscription model. Yes, by definition every effort to replace one business model with another will always have a negative aspect to it. But to deny that there is a positive aspect to OA is silly. What is PLOS? What is BMC? 

This kind of movement, a movement to replace a free market with an artificial and highly regulated one, rarely succeeds.

The current publishing system is a free market? How can Beall, a librarian for 23 years, say this with a straight face? There is no free market. Today scientists are all but compelled – by both real and imagined expectations of hiring, funding and promotion committees – to publish their work in a small number of elite journals. These journals then effectively have a monopoly on providing access to content that scientists need to do their work. And they use that monopoly power to sell this content back to universities and other research institutions at massively inflated prices. There is little choice on the part of researchers not to participate in the system, and little choice on the part of institutions to opt out of subscriptions. This is not a free market that anyone who actually understands or cares about free markets would recognize.

In fact, the gold open-access model actually incentivizes corruption, which speed the path to failure. The traditional publishing model, where publishers lived or died on subscriptions, encouraged quality and innovation. Publishers always had to keep their subscribers happy or they would cancel.

Really? Quality and innovation? Twenty years since the birth of the modern internet, scholarly journals basically publish electronic versions of their old print journals that are nearly identical in format, layout and content to their pre-internet editions. And this stasis actually represents progress compared to what has happened with article submission. It used to be easy to submit a paper to a journal. You printed it out and put it in the mail. Now it takes hours to go through web portals that are more complicated – and less efficient – than healthcare.gov.

Indeed, scholarly publishing is one of the least innovative industries on the planet. And why? Precisely because publishers have absolutely no incentive to innovate – there is no free market in subscriptions. Indeed, the structure of the industry actively discourages innovation because the people who make the important decisions about where to publish their articles – researchers – are not the people who pay the bills for journals. I have watched over a decade of efforts on the part of the University of California libraries to cut costs by canceling subscriptions, and not once has publisher innovation ever come up in those discussions. Why? Because authors don’t give a hoot about innovation – they care about getting their work in the most high-profile journal, and that’s it.

Similarly, a movement that tries to force out an existing technology and replace it with a purportedly better one also never succeeds. Take the Semantic Web for example. It has many zealous advocates, and they have been promoting it for many years. Some refer to the Semantic Web as Web 3.0. However, despite intense promotion, it has never taken off. In fact, it is moribund. The advocates who promoted it spent a lot of time and blog space cheerleading for it, and they spent a lot of time trashing technologies and standards it was supposed to replace. In fact, that was what they did the most, badmouthing existing technologies and those who supported and used them. One example was a library standard called the MARC format. This standard was ridiculed so much it’s a wonder it still even exists, yet is still being used successfully by libraries worldwide, and the semantic web is dying a slow death. Open access publishing is the “Semantic Web” of scholarly communication.

What a load of nonsense. Yes. The semantic web failed. But if movements to replace existing technology with better ones never succeeded I would be chiseling this blog post out on cave walls. 

The open access movement and scholarly open-access publishing itself are about increasing managerialism (Grayson 2013). Wherever there is managerialism, there is an increased use of onerous management tactics, including mandatory record keeping, rationing of resources, difficult approval processes for things that ought to be freely allowed, and endless committee meetings, practices that generally lead to cronyism.

Had to look managerialism up, and I still don’t understand what he’s talking about. It seems like, again, Beall is operating under the patently false notion that scholarly research and scholarly publishing are some kind of ideal free market. In reality we already operate under very strict controls tied to our funding (he should see the paperwork tied to NIH grants), strict rationing of resources and difficult approval processes for things that ought to be freely allowed (e.g. reading papers), as well as endless committee meetings. But I fail to see what this has to do with publishing. And does Beall really think the current journal system is free of cronyism??? Wake up man. Scholarly journals are amongst the clubbiest institutions on the planet.

The traditional publishing model had the advantage of there being no monetary transactions between scholarly authors and their publishers. Money, a source of corruption, was absent from the author-publisher relationship (except in the rare case of reasonable page charges levied on authors publishing with non-profit learned societies) in the traditional publishing model.

If you think that systems in which one group of people makes the key decisions about what to buy and another group pays the bills are the perfect way to structure an economic system, I suggest you study military purchasing systems where generals decide what they want to buy and Congress just writes a check. That works out really well. Or maybe I should let my kids decide what kind of things we should buy at the grocery or toy stores without a budget. THIS is what the economics of scholarly publishing are like today. The system is utterly and completely corrupt in that authors make a transaction with a journal in which they get something valuable – a citation – knowing that someone else is going to pay the bills. What on Earth do you call a system in which a small group of people receive something of great value that they make taxpayers pay for besides corrupt?

And, the “rare case of reasonable page charges levied on authors publishing with non-profit learned societies” is just ignorant. Page charges for publishing in subscription based journals are neither rare nor reasonable. Indeed the page charges levied by many journals – especially top tier and society journals – exceed the costs of publishing in open access journals. 

Managerialism is the friend of those who want to restrict freedom and advancement. It is a tool for creating malevolent bureaucracies and academic cronyism. Managerialism is the logical and malevolent extension of office politics, and it will hurt scholarly communication. Many universities subsidize or pay completely for their faculty members’ article processing charges when they submit to gold (author pays) open-access journals. The management of the funds used to pay these charges will further corrupt higher education. The powerful will have first priority for the money; the weak may remain unfunded. Popular ideas will receive funding; new and unpopular ideas, regardless of their merit, will remain unfunded. By adding a financial component to the front end of the scholarly publishing process, the open-access movement will ultimately corrupt scholarly publishing and hurt the communication and sharing of novel knowledge.

Again, what world is Beall living in where unpopular ideas are showered with funding and have journals lining up to publish them? The system we have today, in which journals compete based on their “impact factor”, all but ensures that unpopular ideas are relegated to the most obscure corners of the publishing world. One of the long-term advantages of reforming scholarly publishing is that it will – by removing the monopolistic control publishers have today – make publishing less expensive and more accessible. Do we need to be careful that we don’t create a new system where only the powerful can publish their work? Yes. But to argue that the current system isn’t already plagued by this problem is ludicrous.


The open-access movement was born of political correctness, the dogma that unites and drives higher education.

I have been called many things in my life. But “politically correct” is not one of them. 

The open-access advocates have cleverly used and exploited political correctness in the academy to work towards achieving their goals and towards manipulating their colleagues into becoming open-access advocates. One of the ways they’ve achieved this is through the enactment of open-access mandates. The strategy involves making very simple arguments to faculty senates at various universities in favour of open- access and then asking the faculties to establish the mandates. These mandates usually require that faculty use either the gold or green models of open-access publishing. OA advocates use specious arguments to lobby for mandates, focusing only on the supposed economic benefits of open access and ignoring the value additions provided by professional publishers. The arguments imply that publishers are not really needed; all researchers need to do is upload their work, an action that constitutes publishing, and that this act results in a product that is somehow similar to the products that professional publishers produce.

This is just a complete mischaracterization of open access mandates and the discussions around them. Indeed virtually all open access mandates enacted to date have been explicitly structured – much to my chagrin – so as not to threaten subscription based publishers. Virtually all of them contain embargo periods, typically of a year, before works are made freely available. Most contain opt out provisions for scholars who want to publish in journals that are incompatible with the policy. And none contain any kind of enforcement mechanism or penalties. 

Nothing could be further from the truth, and the existence of the predatory publishers, the focus of my research, is evidence of this. It’s likely that hundreds or even thousands of honest researchers have fallen prey to the predatory publishers, those open-access publishers that exploit the gold open-access model just for their own profit, pretending to be legitimate publishing operations but actually accepting any and all submissions just for the money. Institutional mandates feed into and help sustain predatory publishers.

These journals are terrible and need to be eliminated. And Beall’s efforts to catalog them are an important part of this. But, while there are many such journals, they constitute a small fraction of published papers. And by focusing exclusively on scammy OA publishers, Beall ignores the far bigger problem of the many subscription journals (usually run by big for-profit publishers) that also publish more or less anything submitted to them in the name of driving up their volumes and justifying increased subscription fees. If you are going to blame unscrupulous OA publishers on institutional mandates, then you have to also blame the broader “publish or perish” culture for bottom-feeding subscription journals.

Thus there are conscientious scholars, trying to follow the freedom-denying mandates imposed on them by their faculty representatives, who get tricked into submitting their good work to bogus journals.

OR, you have conscientious scholars who believe that publishing in open access journals is the right thing and have been tricked into submitting to bogus journals. 

Again, I think these journals suck. I agree with Beall that we need to expose and eliminate them. But this can very easily be done without discarding open access publishing. 

There are numerous open-access advocates who promote scholarly open-access publishing without warning of the numerous scam publishers that operate all around the world. I find this promotion negligent. Anyone touting the benefits of open-access and encouraging its adoption ought also to warn of the numerous and increasing scams that exist in the scholarly publishing industry.

I agree with this. This is why PLOS and many other legitimate OA publishers formed the Open Access Scholarly Publishers Association to establish a code of conduct for OA publishers, and to create effective procedures to certify that publishers adhere to these standards. 

I believe many OA advocates ignore the known problems with scholarly open-access publishing because they don’t want to frighten people away from it. This is the moral equivalent of selling someone a used car with the knowledge the engine block is cracked, without informing the buyer.

That’s a ridiculous metaphor. It’s not like selling a used car with a hidden defect. It’s more like encouraging people to invest their money without warning them about Nigerian banking scams. But I agree that we should all make people aware that there are problematic publishers and how people can recognize them. 

Most descriptions and explanations of open-access publishing are idealistic and unrealistic. They tout the benefits but ignore the weaknesses. Many honest scholars have been seriously victimized by predatory publishers, and as a community we must help others, especially emerging researchers, avoid becoming victims. Pushing open access without warning of the possible scams is not helpful. In fact, it can be downright damaging to a scholar’s career. For example, once a researcher unwittingly submits a paper to a predatory publisher, it is usually quickly published. Sometimes this fast publishing is the researcher’s first clue that something is amiss. But by then it’s too late, as once a paper is published in a predatory journal, no legitimate journal will be interested in publishing it. When this happens to early career researchers, it can have long-term negative effects on their careers.

Again, this is throwing the baby out with the bathwater. Yes, this is a problem, but it’s a small, and easily fixable one. Saying we should discard OA publishing because of these bad actors is like saying we should abandon Obamacare because some insurers have tried to exploit it in dishonest ways. 

I have observed that the advocates promoting open access do not want to hear any criticisms of the movement of the open-access publishing models, and they quickly attack anyone who questions the open-access or highlights its weaknesses. Open-access advocates are polemics; they have an “us versus them” mentality and see traditional publishers as the bad guys.

I have always answered questions about PLOS and OA publishing honestly, and have spoken out repeatedly about what I see are its weaknesses and where it has not achieved its potential. However, I am also quick to point out the far greater weaknesses in the current system, and the often erroneous statements made against OA publishing. 


In April 2008 [sic – it was 2013], an article about predatory publishers appeared in the New York Times (Kolata 2013). The article described predatory publishers and predatory conferences. Immediately upon publication of the article, OA advocates sprang into action, questioning the article and its reporting. Numerous blog posts appeared, many attempting to cast doubt on the article. One perhaps slightly paranoid blog post was entitled “Did Commercial Journals Use the NYT to Smear Open Access?” (Bollier 2013). The fact is the predatory publishers do cast a negative light on all of scholarly open-access publishing.

I do not agree with this at all. These publishers cast a negative light on those publishers. Most researchers know who the legitimate OA publishers are, and I have seen no evidence that the existence of these scam publishers has hurt PLOS’s reputation at all. In fact, it seems like it has had the opposite effect, with researchers gaining an appreciation for the degree of rigor PLOS puts into its review system. 

I notice that Beall isn’t arguing that the existence of scam conferences casts a negative light on all scholarly conferences. Why is this? They use the same business model, and it’s sometimes hard to tell which ones are good and which are bad. Is it perhaps because the logical connection he’s trying to draw between bad OA journals and all OA journals doesn’t hold?

The gold open-access model in particular is flawed; there are only a few publishers that employ the model ethically, and many of these are cutting corners and lowering their standards because they don’t have to fear losing subscribers.

It would be helpful if he were specific about who he thinks is being unethical and who is cutting corners. 


On October 4, 2013, Science magazine published an article by John Bohannon (2013b) that related what the author learned from a sting operation he conducted on open-access publishers. The sting operation, which used my list of predatory publishers and the Directory of Open Access Journals as sources of journals, found that many journals accepted papers without even doing a peer review, and many did a peer review and accepted the unscientific article Bohannon baited them with anyway.

Here again, the open-access advocates came out swinging, breaking into their “us versus them” stance, and attacking Bohannon, sometimes personally, for not including subscription journals in his study. Subscription journals were not part of his research question, however, but that didn’t stop the many strident critics of Bohannon’s work, who acted almost instinctively according to their Manichaean view of traditional and open-access publishing. He didn’t need to gather data about traditional publishers; that wasn’t what he was studying. If you are counting cars, you don’t need to count airplanes as a control. Also, OA advocates often brag about the continually-increasing number of open-access outlets, predicting that traditional publishers will soon be eclipsed. So if the traditional publishers are nearly extinct, why bother to study them?

The attack on Bohannon was carried out with a near religious fervour. OA advocates will do anything to protect the image of open-access. They don’t care that the number of predatory publishers is growing at a near-relativistic speed; all they care about is that public perception of scholarly open access be kept positive. Bohannon was interviewed by The Scholarly Kitchen contributor Phil Davis on November 12, 2013. Summarizing the reaction of the open-access advocate community to his sting, Bohannon said, “I learned that I have been too naive and idealistic about scientists. I assumed that the results [of my study] would speak for themselves. There would be disagreements about how best to interpret them, and what to do about them, but it would be a civil discussion and then a concerted, rational, community effort to address the problems that the results reveal. But that is far from what happened. Instead, it was 100% political and many scientists that I respected turned out to be the most cynical political operators of all” (Bohannon 2013a).

Interpreting the reaction to Bohannon’s sting article publisher Kent Anderson, the president of the Society for Scholarly Publishing and former chief editor of the blog The Scholarly Kitchen commented, “… don’t expect rational, calm, reasoned assessments from the likes of Eisen, Solomon, or others [open access advocates]. They’ve demonstrated they are ideologues that are quite willing to attack anyone who they view as falling outside their particular view of OA orthodoxy. How they are able to continue to deny what is actually happening is beyond me” (Anderson 2013).

I won’t speak for others, but since Beall calls me out by name, I would like to point out that on my blog and in a forum sponsored by Science, I accepted the results of Bohannon’s story and said repeatedly that these journals are a problem. However, Beall and Bohannon’s efforts to paint his article as an innocent exploration of a problem in publishing are absurd. I won’t rehash the whole debate here. But go back and look at the press release and the things Bohannon and others wrote after the article appeared – they were clearly spinning the article in order to get it wider attention. And, of course, OA advocates responded in kind.

When he served as the chief editor of The Scholarly Kitchen blog, Anderson was a frequent target of criticism from open-access zealots. I think this analysis from him sums up the attitude and actions of open access advocates quite well: “The attacks we’ve received when we’ve talked about OA have been surprisingly vitriolic and immature, even when we’ve said some things that were intended to point out issues the OA community might want to think about, in a helpful way. Some people really have a hair-trigger about anything short of complete OA cheerleading” (Anderson 2012).

Anyone who follows Anderson and The Scholarly Kitchen knows that he is on a years-long crusade to discredit open access publishing. I don’t know anyone who takes him seriously anymore. Yes, his posts inspire heated responses. That’s because he is a classic internet troll whose posts – with a selective use of facts that would make Fox News proud, and consistent questioning of the wisdom and intentions of open access proponents – are crafted to piss people off. And like most trolls, he succeeds in eliciting the kind of antagonistic comments on which he seems to get off. It’s too bad, because amidst the anti open-access rhetoric, Anderson can be coherent, sometimes makes good points, and has an interesting perspective on publishing.

One of the arguments that OA advocates use is that a lot of research is publically funded; therefore, the public deserves access to the research for free. This argument is true more in Europe more so than in the United States because collectivism is more institutionalized there. However, there are a lot of things that are publically funded that are not free, both in Europe and North America. Public transportation is one example. If OA advocates stuck to their principles, they would also be demanding that all publically owned buses and trains are free to all users. Their argument also completely ignores all the ways that publishers add value to information. This is done by selecting the best research for publication, managing the peer review process, managing ethics, maintaining servers, digital preservation, and the like. There are plenty of government-funded things that are not free, especially things to which the private sector adds value.

Beall is being willfully disingenuous here. His main critique about open access publishing is that the direct exchange of money between scientists and publishers corrupts the process. But then he accuses open access advocates of wanting publishing to be free. What does he think that OA publishing fees are for?

From the very beginning I and most other OA advocates have explicitly pointed out that publishing has costs, and that those costs need to be covered by the research community. The goal of OA publishing is not to deny the costs, but rather to pay for them in a different way. Science funders can pay a fee for access (as is currently done), they can pay a fee to publish (as PLOS and other OA publishers do), or they could just subsidize the whole thing with no transaction cost (as eLife does – this is the model I ultimately favor).

For what it’s worth, I do think buses and trains should be free for all users. This would clearly accomplish an important public good – reducing the use of cars – whose economic and non-economic benefits would far, far outweigh the costs (see [1][2][3][4]).

It is particularly ironic that Beall – a Librarian – rails so much against government subsidies, since his entire profession is based on the idea that governments should completely subsidize the costs of access to information. Does he think you should pay a fee every time you check out a book? Or ask a librarian a question? Maybe he does – but it’s awfully convenient that he ignores this example, since Beall would almost certainly be out of a job if the state of Colorado applied his logic to their library system. 

Building on this idea, I do find that the open-access movement is a Euro-dominant one, a neo-colonial attempt to cast scholarly communication policy according to the aspirations of a cliquish minority of European collectivists. Early funding for the open-access movement, specifically the Budapest Open Access Initiative, came from George Soros, known for his extreme left-wing views and the financing of their enactment as laws (Poynder 2002).

Is there some corollary of Godwin’s Law in which anyone with a progressive agenda is labeled a Communist in order to discredit them?  

It may be convenient for Beall to discredit the OA movement by labeling its advocates as European pinkos. But it’s an ahistorical argument. While pushes for OA came from Europe, in the sciences at least the roots are clearly in the US – starting with arXiv, then eBiomed, PubMed Central, PLOS, the NIH mandate, etc…. I in no way want to diminish the important contributions to OA from the rest of the world, but to label this a European movement is ridiculous. And, having been present at the beginning, I can assure you that collectivist arguments were never the basis for the push for OA – it was always first and foremost about making research work better.

And while George Soros did provide some early funding for BOAI, the biggest financial boost to OA in its early years came from the Gordon and Betty Moore Foundation. You will all know Gordon Moore as the noted socialist and anti-corporatist who founded Intel.

Another inconsistency in the open-access movement is that the zealots have been attacking scholarly journal publishers but generally ignoring scholarly monograph publishers, even though they operate using basically the same model, selling proprietary content to libraries. This is evidence that the open-access movement isn’t really about making content open-access; it’s really about shutting down journal publishers. Were it a truly principled movement, it would apply its principals consistently.

The reason that journals have been the main target of OA, is that OA has – until very recently – been almost entirely about the sciences, and there is essentially no history of publishing monographs in the sciences. And, once again, if Beall – who lives off the teat of public subsidy – applied his principles consistently, he would resign his position and set up an entirely fee-for-service library. 

Some tenured open-access advocates are pressuring young scholars away from submitting their work to traditional journals, sacrificing them to the open-access movement. They are pressured to publish in OA journals despite their being able to publish in more esteemed traditional journals, which would better support their tenure cases. This pressuring helps the OA movement because it gets an increased amount of good research published in open- access journals, but it hurts the individuals because it weakens their tenure dossiers. In the open-access movement, the needs of the many outweigh the needs of the few.

OA advocates are also pressuring scientists in developing countries to publish in OA journals, and this could hurt their careers. According to Contreras (2012, 60), “scientists in the developing world wish to publish in prestigious venues, with the greatest likely readership. Artificially forcing them to publish in oa journals of lesser impact could be resented and resisted, as it would be in the industrialized world”. So, OA advocates also want to sacrifice the careers of developing-world scholars so that they can achieve their collectivist goals.

Beall seems to assume that scholars are incapable of making their own decisions. There is a huge difference between trying to convince people to do something and pressuring them to do so. Only someone completely disconnected from the academic community would think that OA advocates are some kind of dominant power able to force people to do our bidding. In fact it is exactly the opposite. The dominant pressure in the system is for people to publish in the highest impact – usually subscription – journals they can. There is almost no effective pressure pushing people to OA journals.

The gold OA model is merely shifting profits from one set of publishers to another, shifting the source of money from library subscriptions to those funding article processing charges, such as the provost’s office, a researcher’s grant itself, or even the library. That is to say, the open-access movement is dealing with the serials crisis by lowering or eliminating the subscription charges that libraries have to pay. But the money to support scholarly publishing has to come from somewhere. For those researchers lucky enough to have grants, they can pay the article processing charges out of grant money, but this means less money that they can spend on actual research. New funding sources are needed for university researchers who don’t have grants. Thus, universities will have to initiate new funds to pay for the article processing charges their faculty incur when they publish in gold open-access journals. The proper distribution of these funds will require new committees and more university bureaucracy. Of course, journals charging APCs will charge more depending on the journal’s status. That is to say, journals with higher impact factors will impose higher prices. The act of instituting financial transactions between scholarly authors and scholarly publishers is corrupting scholarly communication. This was one of the great benefits of the traditional scholarly publishing system – it had no monetary component in the relationship between publishers and their authors. Adding the monetary component has created the problem of predatory publishers and the problem of financing author fees.

I actually mostly agree with Beall here. The APC model has serious problems for researchers without grant funding or from poor institutions, and it’s unreasonable to, in the long run, subsidize the publishing costs for these authors by essentially taxing the fees paid by other authors. It would indeed be a nightmare to have committees set up to decide who will get institutional fees, if that’s the model we ultimately use. I also think the APC model keeps prices artificially high (although far lower than the per article costs paid today). 

There is, of course, tons of money available to support publishing, as the research community spends roughly $10 billion per year on publishing. If we could magically redirect these costs to support OA publishing we’d be set. But we can’t. There has to be a mechanism by which research funders (mostly granting agencies and universities) pay into the system in rough proportion to their usage of it. APCs accomplish this, but I think direct subsidy of publishers by funding agencies makes more sense (although this too has its problems).

But let’s remember that the current system has massive incentive problems as well – there is no incentive for the people who actually make the important decisions – authors deciding where to send their papers – to factor in the economic value provided by the publisher, since the costs are borne by libraries who are usually completely disconnected from the publishing decision. And because of this, publishers have driven up their prices to the maximum level they can squeeze out of institutions, which are often in the untenable position of having to choose between paying escalating costs and providing needed access to the literature to their researchers.


Financing article processing charges will be most problematic in middle-income countries. Most non-predatory OA publishers grant fee waivers to scholars from lower-income countries (as long as they don’t submit too many articles), but these waivers are generally not applied to many middle-income countries. Researchers in these countries are caught in a dilemma – they aren’t eligible for publisher-granted APC waivers, but their funding agencies lack the funds to subsidize the publication of their works, so they are left to fend for themselves when it comes to paying article processing charges.

This is also true. But again, remember that these countries are also horribly screwed by the current system – as they neither qualify for free access to journals, nor can they afford to subscribe to them. 

In the end, the best way to address this is to lower the costs of publishing as much as possible. It is remarkable how little technology has driven down the costs of scholarly publishing – most of which involve tasks that could easily be handled with good software (formatting manuscripts, organizing peer review, etc…) but which are now done manually. You are already seeing journals whose costs are much, much lower (e.g. PeerJ) and I think you will see more of a trend in this direction as publishers actually start to respond to price pressure – something that has been completely absent from the subscription publishing world. 

And now we are seeing the emergence of mega gold-open-access publishers. I’ve documented that Hindawi’s profit margin is higher than Elsevier’s and achieves this by lowering standards (Beall 2013a). Hindawi has eliminated the position of editor-in-chief from most of the firm’s over 550 journals. The company exploits Egypt’s high unemployment rate by paying minimal salaries, employing college-educated staff desperate for jobs. It’s an example of the scholarly publishing industry moving offshore. Moreover, because the journals lack editors, they have become desultory collections of loosely-related articles on a broad topic. The editorless journals lack coherence and vitality and function more like sterile repositories than scholarly publications. Open-access is killing the community function of scholarly journals, in which they served as fora for the exchange of both formal and informal communication among colleagues in a particular field or sub-field. Open access journals lack soul and are disconnected.

This is bunk. There are maybe a handful of subscription journals that have any kind of real identity. Most are just collections of papers that have found an appropriate level in the jockeying for impact. The society journals that Beall speaks of so nostalgically are under threat – but their enemy is not open access, it’s the impact factor. They have also been undermined by the transformation of many societies from actual collections of peers into organizations that are primarily journal publishers.

I also find it curious that Beall is so concerned about the plight of researchers in the developing world in some areas, but seems to want to deny them the right to start their own publishers. Hindawi is still trying to find its feet as a publisher, but I have come across several extremely good articles in Hindawi journals, and I think that, rather than denying such publishers the right to exist, we should work to encourage their development into respected members of the publishing community.

Open access advocates think they know better than everyone else and want to impose their policies on others. Thus, the open access movement has the serious side-effect of taking away other’s freedom from them. We observe this tendency in institutional mandates. Harnad (2013) goes so far as to propose a table of mandate strength, with the most restrictive pegged at level 12, with the designation “immediate deposit + performance evaluation (no waiver option)”. 

A social movement that needs mandates to work is doomed to fail. A social movement that uses mandates is abusive and tantamount to academic slavery. Researchers need more freedom in their decisions not less. How can we expect and demand academic freedom from our universities when we impose oppressive mandates upon ourselves?

Once again, Beall manifests a poor understanding of how academia works. The current system is completely oppressive. While there is the illusion of choice, in reality researchers are under intense pressure to publish in a narrow set of journals that effectively represent the choice between Coke and Pepsi. Also, researchers at major universities who receive funding from governments or foundations already operate under all sorts of mandates – most notably the requirement that they publish their work in the first place. Why is it okay to demand that people publish, but not okay to demand that people have access to the published work?

Gold Open Access is Failing

In 2006, James S. E. Opolot, Ph.D., a professor at Texas Southern University in Houston, published an article entitled “The Challenges of Environmental Crimes and Terrorism in Africa: Evidence from Eastern, Southern, and West African Countries” (Opolot 2006). The article was published in The International Journal of African Studies, one of the journals in the portfolio of the open-access (and predatory) publisher called Euro-Journals. One might assume that Euro-Journals would be based in Europe, but predatory publishers often disguise their true locations and use the names of Western countries to make themselves appear legitimate. Euro-Journals is based in Mauritius.

The open-access version of Professor Opolot’s paper has disappeared from the Internet. Plagued by takedown requests due to the high incidence of plagiarism among its articles, Euro-Journals decided to switch the distribution model for some of its journals to the subscription model, and it removed all of their content from the open Internet. The publisher simply stopped publishing the balance of its journals, and it removed all of their content from the Internet as well. A blog post I wrote in March 2013 (Beall 2013b) showed that the publisher had 29 journals in its portfolio. Among these, 10 became toll-access journals, and nineteen disappeared from the Internet. Dr. Opolot’s paper was published in one of the journals whose content was removed, apparently permanently, from the Internet. I expect this process to repeat itself many times over in the coming years with other open-access publishers.

This is the worst form of cherry-picking. Open access publishing is “failing” because one open access publisher that published an insignificant number of papers went out of business? There are huge numbers of papers being published in open access journals (PLOS, BMC, and many others) that take archiving seriously. Indeed, legitimate open access journals have the advantage of having all of their contents permanently archived by the National Library of Medicine – far more stable than any journal publisher.

The open-access movement has been a blessing to anyone who has unscientific ideas and wants to get these ideas into print. Because the predatory publishers care very little about peer review and see it merely as a charade that must be performed, they don’t really care when pseudo-science gets published in their journals, as long as they get paid for it. In my blog, I’ve given examples of pseudo-science being published as if it were true science. Here are three examples:

    • The Theory of Metarelativity: Beyond Albert Einstein’s Relativity (Jaoude 2013)
    • Prevalence of Autism is Positively Associated with the Incidence of Type 1 Diabetes, but Negatively Associated with the Incidence of Type 2 Diabetes, Implication for the Etiology of the Autism Epidemic (Classen 2013)
    • Combating Climate Change with Neutrinos (Wet 2013).

Beall missed perhaps the most egregious example of drivel being published in open access journals:

The Open-Access Movement is Not Really about Open Access (Beall 2013)

But seriously. Yes, there is crap published in open access journals. But like Bohannon before him, Beall has no perspective. There is a long history of bunk science being published in subscription based journals – including the highly prestigious ones. There are, and always have been, journals at the margins of respectability that will publish anything. To blame this on open access by picking a few examples is ridiculous. 

The last of these, “Combating Climate Change with Neutrinos”, was summarily retracted (without any notice) by the publisher after I drew attention to it in a blog post (Beall 2013c). I saved a copy of the article’s PDF and have made that document available on the blog post. There are many unscientific ideas that people can get published in scholarly journals thanks to predatory open-access publishing. Authors of these works find that their ideas fail peer review in legitimate journals, so they seek out predatory publishers that are more than happy to accommodate their publishing needs. Some of these ideas include issues relating to sea-level rise (or the lack of it), Sasquatch, anthropogenic global warming (or the lack of it), the aetiology of autism, and the nature of dark matter and dark energy.

Often promoted as one of the benefits of open-access is the fact that everyone, even the lay public, will have access to all the scientific literature. But in the context of pseudo-science being published bearing the imprimatur of science, this becomes a serious problem. People who are not experts in a given field generally lack both the ability to understand the most complex research in the field and the ability to distinguish between authentic and bogus research in the discipline. As more bogus research continues to be published open-access, it will be accessed more by the public, and many will accept it as valid research. This bogus research will poison discourse in many scientific fields and will create a public that is misinformed on many scientific issues.

The public accepting peer-reviewed research as fact without skepticism is indeed a problem. But let’s ask ourselves: what was the most egregious example of this in the last decade? It has to be Andrew Wakefield’s papers on the purported link between vaccines and autism. Where were they published? The Lancet, Gastroenterology and the American Journal of Gastroenterology. All subscription journals. Is this bad? Yes. Is this a problem with open access? Of course not.

Megajournals are becoming like digital repositories. These journals, many of them now editorless, are losing the cohesion, soul, and community-binding roles that scholarly journals once had. My website has its main list of publishers, but in early 2012 I was compelled to create a second list, a list of what I refer to as predatory standalone journals. These are predatory journals that cover the entire breadth of human knowledge, much broader than just science. Predatory publishers discovered the megajournal model by copying “successes” like PLOS ONE. As of late November 2013, I have 285 megajournals in my standalone journal list. They have titles like Journal of International Academic Research for Multidisciplinary [sic], International Journal of Sciences, and Current Discovery. The broad titles reflect the marketing strategy of accepting as many papers as possible, in order to maximize income. How many megajournals does the world need? Most of these journals exist only for the authors, those who need academic credit. Many of their articles will never be read, and many are plagiarized from earlier articles. The articles then become the source of future plagiarism. Collectively, they lower the quality of science and science communication. They clutter Google and Bing search results with academic rubbish.

We don’t need 285 megajournals. I agree. But we also don’t need 10,000 subscription journals. I’d argue we don’t need journals at all. But Beall’s math is misleading. There may be 285 megajournals (I’ll take him at his word), but the vast majority of papers published in megajournals appear in a very small number of them (with PLOS ONE at the front). Saying that megajournals are bad because there are a lot of (largely failed) efforts to copy the success of one is like saying that search engines are bad because there are hundreds of useless and little-used ones trying to copy the success of Google.

The future of the Creative Commons Attribution License (CC BY) may be in doubt. Numerous companies are emerging that aggregate content from CC BY-licensed works, publish them in new formats, and sell them at a profit. Frequently, when scholars find out that their work has been published for profit without their knowledge, their first reaction is often anger, even though they freely assigned the free license to their work. They feel betrayed. The CC-BY license has been promoted by European open-access advocates; the North Americans’ view of open-access is more restrictive. Many here prefer to promote the CC BY NC (non-commercial) license. For many in North America, the concept of open-access itself means “ocular” open-access – that is, OA means that you can access content but can’t do much else with it, other than read it. The Europeans are more collectivist and appropriative; for them scholarly publishing is another opportunity for taking. They do not respect the freedom of the press when the free press doesn’t adopt their collectivist values.

This is a complete red herring. I’ve heard people raise this as a potential problem, but I’ve heard very, very few complaints about it actually happening. And even when I have, it’s always been possible to explain why PLOS and other OA publishers prefer the CC-BY license. In contrast, I hear all the time from publishers that they want to use CC-BY-NC, not to protect authors against misuse, but to protect their own revenues. Thus it is absurd to attribute any reluctance to use CC-BY to authors.

We mustn’t forget the strengths of the traditional or subscription model of scholarly journal publishing. When space was an issue, journals could only publish the very best of the articles they received, and any lapse in quality over time led to subscription cancellations. The result was that the traditional journals presented the cream of the crop of current research. With open-access journals, the opposite is often true.

Indeed, when many libraries began to engage in journal cancellations in response to higher subscription prices (subscription prices increased mainly due to a great increase in the amount of scholarship being published), the subscription publishers came up with a solution that has greatly benefitted libraries: bundling and differential pricing. This innovation has greatly benefitted scholars by making a great amount of research affordable to academic libraries. On top of this, many publishers grant additional discounts to library consortia licensing journal subscriptions in bulk. According to Odlyzko (2013, 3) “the median of the number of serials received by ARL [Association of Research Libraries] members almost quadrupled during the period under investigation, going from 21,187 in the 1989-1990 academic year to 80,292 in the 2009-2010 one. Practically the entire increase took place during the last half a dozen years, without any big changes in funding patterns, and appears to be due primarily to ‘Big Deals’”. This finding shows the power of the market; when subscribers cut subscriptions, publishers take beneficial action for consumers.

Beall has to be the only person on the planet – outside of the Elsevier board room – who thinks “Big Deals” are a good idea. Virtually everyone I know in the library world – including many who are not fans of open access – thinks that “Big Deals” are a very bad idea, and university systems across the world have been abandoning them.

OA journals don’t have any space restrictions. They can publish as many articles per issue as they want, so the incentive for them is to publish more. We hear less about acceptance rates than we did in the past because of this.

Why does Beall think subscription journals have any limit on the number of articles they can publish? Since almost nobody accesses these journals in print (aside from Science, Nature, Cell and a few others), they don’t. The only reason they limit what they publish is to create an artificial scarcity. And precisely because of the “Big Deals” Beall seems to love, subscription publishers have exactly the same incentive. Big Deals have created an economy in which subscription publishers are directly rewarded with higher subscription revenues when they publish more papers.

There is one, and only one, reason for the massive increase in the number of subscription journals over the past few decades. It’s not because the community has been clamoring for them. It’s because publishers know that the easiest way to increase their revenues is to launch new titles and publish as many papers in them as possible. That is why Big Deal publishers like Elsevier specialize in launching new journals that provide no new value to the community (most overlap existing journals in scope and selectivity), but provide a huge benefit to Elsevier.

Traditional journals didn’t have the built-in conflict of interest that gold open-access journals have. For gold OA, the more papers a journal accepts, the more money it makes.

As I pointed out above, there is a direct correlation between the number of articles subscription-based publishers accept and their revenues. Thus subscription-based publishers have as much of a conflict of interest as OA publishers – it’s just hidden from view because the money is laundered through libraries.

Money is corrupting scholarly publishing. Scholars never should have allowed a system that requires monetary transactions between authors and publishers. Libraries took responsibility for this financial role in the past, and they performed it well. Now the realm of scholarly communication is being removed from libraries, and a crisis has settled in. Money flows from authors to publishers rather than from libraries to publishers. We’ve disintermediated libraries and now find that scholarly system isn’t working very well.

Most libraries have done great work providing scholars with access to the literature they need to perform their jobs. But it’s a bit ridiculous to say that the system has thrived on their watch. For decades the cost of scholarly publishing has increased at a rate that far exceeds inflation, and it has done so precisely because scholars have not been involved in the financial transaction. A system in which scholars decide where to publish but have zero incentive to make choices based on cost leads to out-of-control spending increases. Of course libraries aren’t responsible for this – they have been left in charge of paying the bills without any effective way to keep costs down. Indeed, librarians were the first to begin writing about this problem – as long ago as the 1980s – warning that the increases in costs were unsustainable. But if we’re actually going to tackle the ever-escalating costs of publishing, it will be by giving authors an incentive to make publishing choices based on cost – something that open access does, but subscription-based publishing does not.

Conclusion

The open-access movement isn’t really about open access. Instead, it is about collectivizing production and denying the freedom of the press from those who prefer the subscription model of scholarly publishing. It is an anti-corporatist, oppressive and negative movement, one that uses young researchers and researchers from developing countries as pawns to artificially force the make-believe gold and green open-access models to work. The movement relies on unnatural mandates that take free choice away from individual researchers, mandates set and enforced by an onerous cadre of Soros-funded European autocrats.

Ooh. That’s scary. Soros-funded European autocrats.

The open-access movement is a failed social movement and a false messiah, but its promoters refuse to admit this. The emergence of numerous predatory publishers – a product of the open-access movement – has poisoned scholarly communication, fostering research misconduct and the publishing of pseudo-science, but OA advocates refuse to recognize the growing problem. By instituting a policy of exchanging funds between researchers and publishers, the movement has fostered corruption on a grand scale. Instead of arguing for open-access, we must determine and settle on the best model for the distribution of scholarly research, and it’s clear that neither green nor gold open-access is that model.

Open access IS a social movement. Not only will I not deny that, I am proud of it. It’s a social movement based on the principle that scholarly research is a social good, and that those of us lucky enough to be involved in it should not let our vanity and narrow self-interest prevent our fields from operating in the most efficient way possible, or from giving back to society in every way we can.

But open access is also a business model. And it’s a very successful one that is growing in popularity. Predatory open access publishers are a problem – but they’re a minor one that can easily be dealt with by establishing and enforcing standards for good journal practices.


It’s too bad Beall turns out to be so stridently anti-open-access. He deserves credit for almost single-handedly raising awareness of predatory publishers trying to take advantage of the rise of open access – a problem nobody else was noticing, let alone trying to do something about. He could have been a constructive force in helping to develop ways to counter this trend – as it is, we’ll have to work it out on our own.

Posted in open access | Tagged , , , | Comments closed

The impact of Randy Schekman abandoning Science and Nature and Cell

Recipients of this year’s Nobel Prizes converge this week on Stockholm to receive their medals, dine with the King and Queen, and be treated like the scientific royalty they have become. For most this time is, understandably, about them and their work. So, bravo to my Berkeley colleague Randy Schekman – one of this year’s recipients of the prize in Physiology/Medicine – for using the spotlight to cast a critical eye at the system that brought him to this exalted level.

In a column in The Guardian, Randy writes:

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives […] We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.

He goes on to make his case for why these high-impact subscription journals are so toxic, and finishes with a pledge:

Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

I gave up publishing in Science, Nature, Cell and all other subscription-based journals when I started as a junior faculty member at Berkeley in 2000, and I have devoted immense amounts of time and energy over the ensuing 13 years to convincing other scientists to do the same. I co-founded a publisher – PLOS – whose raison d’etre was to provide authors with an alternative to the big-name subscription publishers Randy so rightly takes to task.

Yet despite great success – my career has flourished without publications in the “big three”, and PLOS is now a major player in the publishing world – it is a measure of just how far we have to go – of just how powerful the incentives to publish in “high impact” journals are – that Randy’s announcement is big news.

I hope that Randy will serve as an inspiration – an example for others to follow. But, sadly, I suspect he will not. Lots of people have already dismissed his shift as the easy action of someone who had already “got his”. And of course they’re right. Even before his Nobel Prize, Randy was a science superstar whose papers would have been read even if he had done nothing more than tape printed copies to the bulletin board outside his office. His students and postdocs don’t need a Science, Nature or Cell paper to get taken seriously – they only need a good letter from their now Nobel Laureate advisor.

I know that most people will dismiss Randy’s example, because they have done it to me. Even though I gave up subscription journals at the beginning of my independent career – before I had students, grants or tenure – most people I talk to say “Good for you. But you were trained in high-profile labs, you had Science and Nature papers as a postdoc, and you were already well known. You could get away with it. I can’t.” It’s all true. I understand why – especially in this horrible funding climate – people are unwilling to shun a game that they may despise, but which almost everybody tells them they have to play to survive. And since everything that is true about me is 100 times more true about Randy, his followers are likely to come primarily from the far upper tier of scientists.

This is sad. Because we need to listen to him. Indeed, we need to take him one step further. While I admire everything eLife is doing to make the process of peer review saner, they still reject a lot of good papers that don’t meet the reviewers’ and editors’ standards of significance. As I’ve written elsewhere [1][2], we need to dispense entirely with journals and with the idea that a few reviewers – no matter how wise – can decide how significant a work is at the time it is published. But whether you support Randy’s vision of sane pre-publication peer review, or my vision of a journal-free world built around post-publication review, we have the same problem – we need more than a handful of Nobel Prize winners and true believers to abandon the current system. So what’s it going to take?

Fifteen years ago, when I first became involved in reforming science publishing, the big problem was that there were no alternatives. Now there are plenty – there’s eLife, PLOS, BMC and many others attacking various pathologies in science publishing. But still SNC maintain their allure. And they will continue to do so until people no longer believe they are the ticket to success. It’s a nasty, self-fulfilling prophecy. Most biomedical scientists send their best work to SNC, and so there’s a correlation between who gets jobs/grants/tenure and publishing in SNC, and so the next generation thinks they have to publish in SNC to get jobs/grants/tenure, and on and on and on.

We could all just choose to stop. Start sending your best work to eLife instead. Or just do what we should all do and send ALL of our work to PLOS ONE, BMC and other journals that don’t consider significance in the publishing decision. We SHOULD do that. But, listening to people out there, I don’t think most scientists are ready to.

I think a better place to work is on hiring, grants and tenure. If we all commit to NEVER looking at the journal in which a paper appeared when we’re evaluating someone, if we speak up when anyone else does, and if we really endeavor to judge people solely by the contents of their manuscripts, word will slowly get out, and people will stop thinking it’s worth it to go through the slog of review at SNC. They’ll stop spending months doing pointless experiments that will make their work “sexier” to editors and reviewers.

And maybe we’ll start seeing Nobel Prize winners whose work was never published in Science, Nature or Cell – and nobody will even notice.

Posted in open access | Tagged , , | Comments closed

FDA vs. 23andMe: How do we want genetic testing to be regulated?

Yesterday the US Food and Drug Administration sent a letter to the human genetic testing company 23andMe giving them 15 days to respond to a series of concerns about their products and the way they are marketed or risk regulatory intervention. This action has set off a lot of commentary/debate about the current and future value of personal genomics, whether these tests should be available direct to consumers or require the participation of a doctor, and what role the government should play in regulating them.

I am a member of the Scientific Advisory Board for 23andme, but I am writing here in my individual capacity as a geneticist who wants to see human genetic data used widely but wisely (although I obviously have an interest in the success of 23andme as a company – so I cannot claim to be unbiased).

I see a wide range of opinions from my friends on this matter – ranging from “F**k the FDA – who are they to tell me what I can and can not learn about my DNA” to “Personalized genomics is snake oil and it’s great that the FDA is stepping in to regulate it”. I fall somewhere in the middle – I think there is great promise in personalized genetics, but at the moment it is largely unrealized. Looking at your own DNA is really interesting, but it only rarely provides actionable new information. I don’t think the FDA should restrict consumer access to their genotype or DNA sequence, but I do think the government has an important role to play in ensuring that consumers get accurate information and that the data are not oversold in the name of selling products.

As people try to decide what kinds of tests and information should be available and how the government should regulate them, I think it’s useful to ask a series of questions.

1) Should a person be able to have their DNA sequenced and get the data?

Putting aside any questions about how useful this information is right now and how it is marketed, do you think companies should be able to offer a service where consumers send in a spit or blood sample and a few hundred dollars and get their genome sequenced in return? (23andme currently provides SNP genotyping, not whole genome sequencing, but we’re very close to the point where human genome sequencing is cheap and reliable enough to make this possible.)

I think the answer is obviously yes. I can’t see any good argument for why we should prevent people who want to from obtaining their own DNA sequence.

Which leads to:

2) Should a person who has had their genome sequenced be able to access scientific literature relevant to their genome? 

Again, putting aside questions about the accuracy or utility of this information, there is a lot of published scientific literature that is potentially relevant to people with a particular genotype (including genome-wide association studies as well as a lot of classical human genetic literature and other functional studies). Assuming someone has their own genome sequence, it would be hard to argue they shouldn’t have access to information that would allow them to understand what their genome means.

Which leads to:

3) Is there a role for third parties in helping people interpret their genome sequence? 

The problem with the previous question is that it would be next to impossible for someone to actually interpret their genome simply by perusing the scientific literature (and I’m not even going to get started on the fact that much of this literature is behind paywalls). Even trained human geneticists wouldn’t do that. They’d go to some website – OMIM, DECIPHER, etc… – and use various automated tools to extract what is known about their genotype.

But few people have the technical savvy to be able to analyze their own genome in this way. So, assuming there is interest, there is a great niche for third parties to step in and provide services to help people interpret their own DNA. Is this a bad thing? Again, I don’t see how it is – assuming that these third parties provide accurate information (more on this below).

Should this third party be a doctor, as some (mostly doctors) are arguing? There are certainly doctors out there who have a great grasp of human genetics. But there aren’t a lot of them. And even the doctors who do know the world of human genetics inside and out aren’t in a position to help people navigate every nook and cranny of their genome. This is a job for software, not for people.
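
To make this concrete, here is a minimal sketch of the kind of lookup such software performs: matching a person’s genotypes against a curated table of published genotype-phenotype associations. This is purely illustrative – the file name, rsIDs, genotypes and annotations below are hypothetical placeholders, and real services obviously rely on far larger databases and far more careful curation.

```python
# Minimal sketch of what third-party genome-interpretation software might do.
# Assumes (hypothetically) a personal genotype file of "rsID<TAB>genotype" lines
# and a small, hand-curated table of published associations.
# All rsIDs, genotypes and annotations here are made up for illustration.

annotations = {
    # (rsID, genotype) -> short summary of a published association (hypothetical)
    ("rs0000001", "AG"): "reported association with trait X (GWAS, odds ratio ~1.2)",
    ("rs0000002", "TT"): "reported association with trait Y (candidate-gene study)",
}

def load_genotypes(path):
    """Read a tab-delimited genotype file into a dict of rsID -> genotype."""
    genotypes = {}
    with open(path) as handle:
        for line in handle:
            if line.startswith("#") or not line.strip():
                continue  # skip comments and blank lines
            rsid, genotype = line.split()[:2]
            genotypes[rsid] = genotype
    return genotypes

def interpret(genotypes):
    """Return the subset of a person's genotypes that match the annotation table."""
    hits = []
    for rsid, genotype in genotypes.items():
        note = annotations.get((rsid, genotype))
        if note:
            hits.append((rsid, genotype, note))
    return hits

if __name__ == "__main__":
    my_genotypes = load_genotypes("my_genome.txt")  # hypothetical file name
    for rsid, genotype, note in interpret(my_genotypes):
        print(f"{rsid} ({genotype}): {note}")
```

The hard part, of course, is not the lookup itself but the quality of the annotation table and how its contents are communicated – which is exactly where regulation matters (more on this below).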

If you accept points 1, 2 and 3 above – which to me seem inarguable – then you accept the right of companies like 23andme to exist. You might not think they provide a valuable service. You might not think they do a good job at providing these services. But you can’t argue – as many are now doing – that direct-to-consumer genetic testing companies should be shut down.

Should direct-to-consumer genetic testing companies be regulated? 

I think this is also a no-brainer. Obviously they should be regulated – and fairly tightly so, in my opinion. Few consumers have the capacity to judge on their own whether the genetic testing services provided by a company are accurate and whether interpretive information provided by third parties is valid. It is vital that the FDA protect consumers in two ways: 1) by validating the tests and the companies that provide them, and 2) by monitoring the genetic information that is provided to consumers – especially if it is being used to market tests or other products. The former seems relatively easy – validating genotyping and sequencing is well-trodden turf. The latter is a bit more complicated.

If genetics were simple and our understanding of it were complete, companies could provide accurate reports that say “based on your genotype, your age and personal history, you have a 7.42% chance of developing ovarian cancer in the next 10 years”. However, we are far, far, far away from this. We have an incomplete catalog of human genetic variation; known genetic variation can explain only a small fraction of the heritable component of most phenotypes of interest; we have a poor understanding of how different genetic variants interact to affect disease risk or other phenotypes; and we have essentially no capacity to incorporate environmental effects into predictive models. In many cases current, incomplete data may point to someone having an elevated risk of some disease when they really have a lower-than-average risk. And, to top it all off, there are very few cases where knowing your risk status or other phenotype points to genotype-specific actions (with the BRCA status referred to in the FDA letter being a notable exception).
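
For what it’s worth, the arithmetic behind a number like that is not the hard part. Here is a minimal sketch, with made-up numbers, of the naive calculation such a report implies: take a baseline risk, convert it to odds, scale it by the published odds ratio for each risk variant (assuming, unrealistically, that their effects are independent), and convert back to a probability. Everything that makes the real number fuzzy lives in the assumptions, not in the code.

```python
# A minimal sketch of the naive arithmetic behind a report like "you have an
# X% chance of developing disease D". Assumes independent variants with
# published odds ratios and a known baseline risk.
# All numbers below are illustrative placeholders, not real effect sizes.

def absolute_risk(baseline_risk, odds_ratios):
    """Combine a baseline risk with per-variant odds ratios by multiplying odds."""
    odds = baseline_risk / (1.0 - baseline_risk)   # convert probability to odds
    for odds_ratio in odds_ratios:                 # assume (unrealistically) independent effects
        odds *= odds_ratio
    return odds / (1.0 + odds)                     # convert back to a probability

# Hypothetical example: 2% baseline 10-year risk and three variants with
# modest made-up odds ratios.
risk = absolute_risk(0.02, [1.3, 1.1, 0.9])
print(f"estimated 10-year risk: {risk:.2%}")
```

Running this with the placeholder values gives an “estimate” of about 2.6% – a number whose apparent precision wildly overstates what we actually know.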

The data are, at this point in time, very, very messy. I don’t think anyone disagrees with that. The question is what to do about it. On the one side you have people who argue that the data are so messy, of so little practical value, and so prone to misinterpretation by a population poorly trained in modern genetics that we should not allow the information to be disseminated. I am not in this camp. But I do think we have to figure out a way for companies that provide this kind of information to be effectively regulated. The challenge is to come up with a regulatory framework that recognizes the fact that this information is – at least for now – intrinsically fuzzy.

The FDA wants to classify genetic tests like those offered by 23andme as medical devices, and to apply the appropriately strict criteria used for medical devices to genetic tests. But the problem with this is that contemporary genetic tests will almost certainly fail to meet these criteria, and I don’t see who benefits from that scenario. Genetic tests are simply not – at least not yet – medical devices in any meaningful sense of the word. They are far closer to family history than to an accurate diagnostic. The FDA and companies like 23andme need to come up with standards for accurately and honestly describing the current state of knowledge about genotype-phenotype linkages and their application to individual genotypes. They need to establish what generic statements can and cannot be used to market genetic tests, so that people don’t purchase them with unrealistic expectations about the kinds of information they will provide. Let’s hope this flareup between the FDA and 23andme is the spark that finally makes this happen.

 

Posted in genetics | Tagged , | Comments closed

PubMed Commons: Post publication peer review goes mainstream

I have written a lot about how I think the biggest problem in science communication today is the disproportionate value we place on where papers are published when assessing the validity and import of a work of science, and the contribution of its authors. And I have argued that the best way to change this is to develop a robust system of post-publication peer review (PPPR), in which works are assessed continuously after they are published, so that flaws can be identified and corrected and so that the most credit is reserved for works that withstand the test of time.

There have been LOTS of efforts to get post-publication peer review off the ground – usually in the form of comments on a journal’s website – but these have, with few exceptions, failed to generate sustained use. There are lots of possible reasons for this – from poor implementation, to lack of interest on the part of potential discussants. However, I’ve always felt the biggest flaw was that these were on journal websites – that you had to think about where the work was published, and whether they had a commenting system, and whether you had an account, etc…

What we’ve always needed was a central place where you know you can always go to record comments on a paper you are reading, and, conversely, where you can get all of the comments other scientists have on a paper you’re reading or are interested in. There have been a couple of services that have tried to create such a system – cf PubPeer, which lets you comment on any paper in PubMed – but they have been slow to gain traction in the community.

The obvious place to build such a commenting/post-publication review system has always been directly in PubMed – it has everything, and everyone already uses it. This is why I am excited – and cautiously optimistic – about a new project called PubMed Commons that will allow registered users (for now primarily NIH grantees) to post comments on any paper in PubMed, which will then appear alongside the paper when it is retrieved in a search.

Here is how PubMed Commons describes itself:

PubMed Commons is a system that enables researchers to share their opinions about scientific publications. Researchers can comment on any publication indexed by PubMed, and read the comments of others.

PubMed Commons is a forum for open and constructive criticism and discussion of scientific issues. It will thrive with high quality interchange from the scientific community.

The system is still pretty threadbare – it only allows simple commenting, and not, for example, rating of the work – but I’ve used it and it is easy to get in, comment and get out. A lot more info on the project can be found here.

This is a great opportunity for us to make PPPR real. But it’s only going to work if people participate. So, if you’re an NIH grantee, and you want to see science communication improve, make a commitment to comment on a paper you’ve read at least once a week, and let’s make this thing work!!

Posted in open access, public access, publishing | Comments closed

GMOs and pediatric cancer rates #GMOFAQ

There’s a post being highlighted by anti-GMO activists on Twitter that claims that cancer is now the leading cause of death among children in the US, that the rates of pediatric cancer are increasing and that this is because of GMOs. This is another egregious example of the willingness of anti-GMO campaigners to lie to the public in order to scare them and promote their agenda.

A simple look at data exposes the absurdity of their claims:

1) Cancer is not the leading cause of death among children in the United States

The Centers for Disease Control publishes annual statistics on the leading causes of death in the US broken down by age. These data show that malignant neoplasms are a serious problem – killing over 1,000 children under the age of 14 every year – and that cancer is the leading cause of disease-related death in children. But accidents remain the leading cause of death overall, by far.

One other thing to note from this table is that infectious diseases do not appear among the top 5 causes of death in any age group. This was not always the case, and it is almost entirely the result of vaccination, another evil of modern science often highlighted by the same people who oppose GMOs.

2) Childhood cancer rates are not increasing

Another claim cited by the anti-GMO crowd is that childhood cancer rates are increasing at an “alarming rate”. Again, the data say otherwise. Here is a report from the National Cancer Institute looking at rates of childhood cancer from 1988 to 2008 that shows they are virtually unchanged.

[Chart from the National Cancer Institute report: childhood cancer incidence rates, 1988–2008]

3) There is no evidence that GMOs cause childhood cancer

If GMOs caused childhood cancer, you would expect to see some difference in the rate of childhood cancer in the US after the introduction of GMOs into the US food supply in 1995. However, the rate of childhood cancer has remained unchanged from its pre-1995 levels.

Childhood cancer is a horrible, horrible thing. We should do everything in our power to prevent and better treat it so that cancer, like infectious disease, disappears from statistics on childhood mortality. But it doesn’t do anyone any good to misrepresent the statistics in the name of a political agenda. So please anti-GMO campaigners, stop making stuff up, and stop using false statistics to try to scare people.

Posted in GMO | Comments closed

I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals

In 2011, after having read several really bad papers in the journal Science, I decided to explore just how slipshod their peer-review process is. I knew that their business depends on publishing “sexy” papers. So I created a manuscript that claimed something extraordinary – that I’d discovered a species of bacteria that uses arsenic in its DNA instead of phosphorus. But I made the science so egregiously bad that no competent peer reviewer would accept it. The approach was deeply flawed – there were poor or absent controls in every figure. I used ludicrously elaborate experiments where simple ones would have done. And I failed to include a simple, obvious experiment that would have definitively shown that arsenic was really in the bacteria’s DNA. I then submitted the paper to Science, punching up the impact the work would have on our understanding of extraterrestrials and the origins of life on Earth in the cover letter. And what do you know? They accepted it!

My sting exposed the seedy underside of “subscription-based” scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic – in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there that do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it.

OK – this isn’t exactly what happened. I didn’t actually write the paper. Far more frighteningly, it was a real paper that contained all of the flaws described above that was actually accepted, and ultimately published, by Science.

I am dredging the arsenic DNA story up again because today’s Science contains a story by reporter John Bohannon describing a “sting” he conducted into the peer review practices of open access journals. He created a deeply flawed paper about molecules from lichens that inhibit the growth of cancer cells, submitted it to 304 open access journals under assumed names, and recorded what happened. Of the 255 journals that rendered decisions, 157 accepted the paper, most with no discernible sign of having actually carried out peer review. (PLOS ONE rejected the paper, and was one of the few journals to flag its ethical flaws.)

The story is an interesting exploration of the ways peer review is, and isn’t, implemented in today’s biomedical publishing industry. Sadly, but predictably, Science spins this as a problem with open access. Here is their press release:

Spoof Paper Reveals the “Wild West” of Open-Access Publishing

A package of news stories related to this special issue of Science includes a detailed description of a sting operation — orchestrated by contributing news correspondent John Bohannon — that exposes the dark side of open-access publishing. Bohannon explains how he created a spoof scientific report, authored by made-up researchers from institutions that don’t actually exist, and submitted it to 304 peer-reviewed, open-access journals around the world. His hoax paper claimed that a particular molecule slowed the growth of cancer cells, and it was riddled with obvious errors and contradictions. Unfortunately, despite the paper’s flaws, more open-access journals accepted it for publication (157) than rejected it (98). In fact, only 36 of the journals solicited responded with substantive comments that recognized the report’s scientific problems. (And, according to Bohannon, 16 of those journals eventually accepted the spoof paper despite their negative reviews.) The article reveals a “Wild West” landscape that’s emerging in academic publishing, where journals and their editorial staffs aren’t necessarily who or what they claim to be. With his sting operation, Bohannon exposes some of the unscrupulous journals that are clearly not based in the countries they claim, though he also identifies some journals that seem to be doing open-access right.

Although it comes as no surprise to anyone who is bombarded every day by solicitations from new “American” journals of such-and-such seeking papers and offering editorial positions to anyone with an email account, the formal exposure of hucksters out there looking to make a quick buck off of scientists’ desire to get their work published is valuable. It is unacceptable that there are publishers – several owned by big players in the subscription publishing world – who claim that they are carrying out peer review, and charging for it, but are not actually doing it.

But it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than that the study didn’t include the obvious control of submitting the same paper to subscription-based publishers (UPDATE: The author, Bohannon, emailed to say that, while his original intention was to look at all journals, practical constraints limited him to OA journals, and that Science played no role in this decision). We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted it (it has many features in common with the arsenic DNA paper, after all). Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity. When Elsevier and other big commercial publishers pitch their “big deals”, the main thing they push is the number of papers they have in their collections. And one look at many of their journals shows that they too will accept almost anything.

None of this will stop anti-open-access campaigners (hello Scholarly Kitchen) from spinning this as a repudiation of open access for enabling fraud. But the real story is that a fair number of journals that actually carried out peer review still accepted the paper, and the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high impact journals – that are deeply, deeply flawed; the arsenic DNA story is one of many recent examples. As you probably know, there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that the results of a majority of published papers cannot be replicated. This all adds up to show that peer review simply doesn’t work.

And the real problem isn’t that some fly-by-night publishers hoping to make a quick buck aren’t even doing peer review (although that is a problem). While some fringe OA publishers are playing a short con, subscription publishers are seasoned grifters playing a long con. They fleece the research community of billions of dollars every year by convincing them of something manifestly false – that their journals and their “peer review” process are an essential part of science, and that we need them to filter out the good science – and the good scientists – from the bad. Like all good grifters playing the long con, they get us to believe they are doing something good for us – something we need. And while they pocket our billions, with elegant sleight of hand they get us to ignore the fact that crappy papers routinely get into high-profile journals simply because they deal with sexy topics.

But unlike the fly-by-night OA publishers who steal a little bit of money, the subscription publishers’ long con has far more serious consequences. Not only do they traffic in billions rather than thousands of dollars while denying the vast majority of people on Earth access to the findings of publicly funded research, but the impact and glamour they sell to make us willing participants in their grift have serious consequences of their own. Every time a paper is published because it is sexy, and not because it is right, science is distorted. It distorts research. It distorts funding. And it often distorts public policy.

To suggest – as Science (though not Bohannon) are trying to do – that the problem with scientific publishing is that open access enables internet scamming is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.

There are deep problems with science publishing. But the way to fix them is not to curtail open access publishing. It is to fix peer review.

First, and foremost, we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important. Even when people take peer review seriously, it still just represents the views of 2 or 3 people at a fixed point in time. To invest the judgment of these people with so much meaning is nuts. And it’s far worse when the process is distorted – as it so often is – by the desire to publish sexy papers, or to publish more papers, or because the wrong reviewers were selected, or because they were just too busy to do a good job. If we had, instead, a system where the review process was transparent and persisted for the useful life of a work (as I’ve written about previously), none of the flaws exposed in Bohannon’s piece would matter.

Posted in open access, science | Comments closed

NASA paywalls first papers arising from Curiosity rover, I am setting them free

The Mars Curiosity rover has been a huge boon for NASA – tapping into the public’s fascination with space exploration and the search for life on other planets. Its landing was watched live by millions of people, and interest in the photos and videos it is collecting is so great that NASA has had to relocate its servers to deal with the demand.

So what does NASA do to reward this outpouring of public interest (not to mention the $2.5 billion in taxpayer dollars that made it possible)? They publish the first papers to arise from the project behind Science magazine’s paywall:

 

[Screenshot: the Curiosity papers behind Science magazine’s paywall]

There’s really no excuse for this. The people in charge of the rover project clearly know that the public are intensely interested in everything they do and find. So I find it completely unfathomable that they would forgo this opportunity to connect the public directly to their science. Shame on NASA.

This whole situation is even more absurd because US copyright law explicitly says that all works of the federal government – and these papers surely must be included – are not subject to copyright. So, in the interests of helping NASA and Science Magazine comply with US law, I am making copies of these papers freely available here:

Update: Copyright

For those interested in the issue of copyright in works of the US government, please see the following:

Section 105 of the US Copyright Act, which states:

Copyright protection under this title is not available for any work of the United States Government, but the United States Government is not precluded from receiving and holding copyrights transferred to it by assignment, bequest, or otherwise.

House Report 94-1476 which details the reasoning behind this provision:

The effect of section 105 is intended to place all works of the United States Government, published or unpublished, in the public domain. This means that the individual Government official or employee who wrote the work could not secure copyright in it or restrain its dissemination by the Government or anyone else, but it also means that, as far as the copyright law is concerned, the Government could not restrain the employee or official from disseminating the work if he or she chooses to do so. The use of the term “work of the United States Government” does not mean that a work falling within the definition of that term is the property of the U.S. Government.

The only ambiguity in the case of these Curiosity papers is that not all of the authors are US Government employees, and thus the work is, I am told, “co-owned” by the authors. I’m not sure what effect this has on the ability of Science magazine to assert copyright in the work, since, at best, they are doing so at the behest of only a subset of the authors. The law makes it clear that its intent is to direct the US government authors to place the work in the public domain, and that any agreement they enter into to restrict access to the work is invalid. This is why I view the practice of taking works authored (and funded) by the US government and placing them behind paywalls as illegitimate.

Update 2: JPL has now posted the articles on their site 

As of today these articles are now available to download from the JPL website. I assume this was done in response to this post and the attention it received. (They were not there on the 26th when the press releases went out – I looked. And you can see from the PDFs that they weren’t downloaded from the Science website until the 27th.) Let’s hope that in the future that all NASA papers – and indeed the results of all government funded research – are made immediately freely available.

Posted in open access, science | Comments closed