Publishers are routinely stealing content from American citizens

President Obama published an article in the Journal of the American Medical Association today discussing the current state of his health care reform initiatives. Fortunately, the article is not behind a paywall. But JAMA nonetheless asserts their ownership and right to control the article’s use, as they do on all articles they publish, by attaching a copyright notice to the article’s PDF.
Unfortunately for JAMA, they have no right to do this. Section 105 of US copyright law makes clear that works of the US government – and POTUS is a government employee, last time I checked – are not eligible for copyright protection in the US (and JAMA is in the US).

17 U.S. Code § 105 – Subject matter of copyright: United States Government works

Copyright protection under this title is not available for any work of the United States Government, but the United States Government is not precluded from receiving and holding copyrights transferred to it by assignment, bequest, or otherwise.

It is completely inexcusable for journals to assert a right they know they do not have. Doing so undoubtedly leads people to refrain from using such articles in ways that are clearly legal, such as redistributing and reusing the content, as I am doing here.

Special Communication

United States Health Care Reform: Progress to Date and Next Steps

Barack Obama, JD
President of the United States, Washington, DC

ABSTRACT

Importance  The Affordable Care Act is the most important health care legislation enacted in the United States since the creation of Medicare and Medicaid in 1965. The law implemented comprehensive reforms designed to improve the accessibility, affordability, and quality of health care.

Objectives  To review the factors influencing the decision to pursue health reform, summarize evidence on the effects of the law to date, recommend actions that could improve the health care system, and identify general lessons for public policy from the Affordable Care Act.

Evidence  Analysis of publicly available data, data obtained from government agencies, and published research findings. The period examined extends from 1963 to early 2016.

Findings  The Affordable Care Act has made significant progress toward solving long-standing challenges facing the US health care system related to access, affordability, and quality of care. Since the Affordable Care Act became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015, primarily because of the law’s reforms. Research has documented accompanying improvements in access to care (for example, an estimated reduction in the share of nonelderly adults unable to afford care of 5.5 percentage points), financial security (for example, an estimated reduction in debts sent to collection of $600-$1000 per person gaining Medicaid coverage), and health (for example, an estimated reduction in the share of nonelderly adults reporting fair or poor health of 3.4 percentage points). The law has also begun the process of transforming health care payment systems, with an estimated 30% of traditional Medicare payments now flowing through alternative payment models like bundled payments or accountable care organizations. These and related reforms have contributed to a sustained period of slow growth in per-enrollee health care spending and improvements in health care quality. Despite this progress, major opportunities to improve the health care system remain.

Conclusions and Relevance  Policy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs. Although partisanship and special interest opposition remain, experience with the Affordable Care Act demonstrates that positive change is achievable on some of the nation’s most complex challenges.

INTRODUCTION

Health care costs affect the economy, the federal budget, and virtually every American family’s financial well-being. Health insurance enables children to excel at school, adults to work more productively, and Americans of all ages to live longer, healthier lives. When I took office, health care costs had risen rapidly for decades, and tens of millions of Americans were uninsured. Regardless of the political difficulties, I concluded comprehensive reform was necessary.

The result of that effort, the Affordable Care Act (ACA), has made substantial progress in addressing these challenges. Americans can now count on access to health coverage throughout their lives, and the federal government has an array of tools to bring the rise of health care costs under control. However, the work toward a high-quality, affordable, accessible health care system is not over.

In this Special Communication, I assess the progress the ACA has made toward improving the US health care system and discuss how policy makers can build on that progress in the years ahead. I close with reflections on what my administration’s experience with the ACA can teach about the potential for positive change in health policy in particular and public policy generally.

IMPETUS FOR HEALTH REFORM

In my first days in office, I confronted an array of immediate challenges associated with the Great Recession. I also had to deal with one of the nation’s most intractable and long-standing problems, a health care system that fell far short of its potential. In 2008, the United States devoted 16% of the economy to health care, an increase of almost one-quarter since 1998 (when 13% of the economy was spent on health care), yet much of that spending did not translate into better outcomes for patients.1-4 The health care system also fell short on quality of care, too often failing to keep patients safe, waiting to treat patients when they were sick rather than focusing on keeping them healthy, and delivering fragmented, poorly coordinated care.5,6

Moreover, the US system left more than 1 in 7 Americans without health insurance coverage in 2008.7 Despite successful efforts in the 1980s and 1990s to expand coverage for specific populations, like children, the United States had not seen a large, sustained reduction in the uninsured rate since Medicare and Medicaid began (Figure 1).8-10 The United States’ high uninsured rate had negative consequences for uninsured Americans, who experienced greater financial insecurity, barriers to care, and odds of poor health and preventable death; for the health care system, which was burdened with billions of dollars in uncompensated care; and for the US economy, which suffered, for example, because workers were concerned about joining the ranks of the uninsured if they sought additional education or started a business.11-16 Beyond these statistics were the countless, heartbreaking stories of Americans who struggled to access care because of a broken health insurance system. These included people like Natoma Canfield, who had overcome cancer once but had to discontinue her coverage due to rapidly escalating premiums and found herself facing a new cancer diagnosis uninsured.17

Figure 1.
Percentage of Individuals in the United States Without Health Insurance, 1963-2015

Data are derived from the National Health Interview Survey and, for years prior to 1982, supplementary information from other survey sources and administrative records. The methods used to construct a comparable series spanning the entire period build on those in Cohen et al8 and Cohen9 and are described in detail in Council of Economic Advisers 2014.10 For years 1989 and later, data are annual. For prior years, data are generally but not always biannual. ACA indicates Affordable Care Act.


In 2009, during my first month in office, I extended the Children’s Health Insurance Program and soon thereafter signed the American Recovery and Reinvestment Act, which included temporary support to sustain Medicaid coverage as well as investments in health information technology, prevention, and health research to improve the system in the long run. In the summer of 2009, I signed the Tobacco Control Act, which has contributed to a rapid decline in the rate of smoking among teens, from 19.5% in 2009 to 10.8% in 2015, with substantial declines among adults as well.7,18

Beyond these initial actions, I decided to prioritize comprehensive health reform not only because of the gravity of these challenges but also because of the possibility for progress. Massachusetts had recently implemented bipartisan legislation to expand health insurance coverage to all its residents. Leaders in Congress had recognized that expanding coverage, reducing the level and growth of health care costs, and improving quality was an urgent national priority. At the same time, a broad array of health care organizations and professionals, business leaders, consumer groups, and others agreed that the time had come to press ahead with reform.19 Those elements contributed to my decision, along with my deeply held belief that health care is not a privilege for a few, but a right for all. After a long debate with well-documented twists and turns, I signed the ACA on March 23, 2010.

PROGRESS UNDER THE ACA

The years following the ACA’s passage included intense implementation efforts, changes in direction because of actions in Congress and the courts, and new opportunities such as the bipartisan passage of the Medicare Access and CHIP Reauthorization Act (MACRA) in 2015. Rather than detail every development in the intervening years, I provide an overall assessment of how the health care system has changed between the ACA’s passage and today.

The evidence underlying this assessment was obtained from several sources. To assess trends in insurance coverage, this analysis relies on publicly available government and private survey data, as well as previously published analyses of survey and administrative data. To assess trends in health care costs and quality, this analysis relies on publicly available government estimates and projections of health care spending; publicly available government and private survey data; data on hospital readmission rates provided by the Centers for Medicare & Medicaid Services; and previously published analyses of survey, administrative, and clinical data. The dates of the data used in this assessment range from 1963 to early 2016.

The ACA has succeeded in sharply increasing insurance coverage. Since the ACA became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015,7 with most of that decline occurring after the law’s main coverage provisions took effect in 2014 (Figure 1).8-10 The number of uninsured individuals in the United States has declined from 49 million in 2010 to 29 million in 2015. This is by far the largest decline in the uninsured rate since the creation of Medicare and Medicaid 5 decades ago. Recent analyses have concluded these gains are primarily because of the ACA, rather than other factors such as the ongoing economic recovery.20,21 Adjusting for economic and demographic changes and other underlying trends, the Department of Health and Human Services estimated that 20 million more people had health insurance in early 2016 because of the law.22
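
An editorial aside: the headline 43% figure is just the relative change implied by the two rates quoted above. A minimal Python check of the arithmetic, using only values taken from the text:

```python
# Verify the reported decline in the uninsured rate (figures from the text).
rate_2010 = 16.0  # percent uninsured, 2010
rate_2015 = 9.1   # percent uninsured, 2015

relative_decline = (rate_2010 - rate_2015) / rate_2010
print(f"Relative decline: {relative_decline:.0%}")  # prints "Relative decline: 43%"
```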

Each of the law’s major coverage provisions—comprehensive reforms in the health insurance market combined with financial assistance for low- and moderate-income individuals to purchase coverage, generous federal support for states that expand their Medicaid programs to cover more low-income adults, and improvements in existing insurance coverage—has contributed to these gains. States that decided to expand their Medicaid programs saw larger reductions in their uninsured rates from 2013 to 2015, especially when those states had large uninsured populations to start with (Figure 2).23 However, even states that have not adopted Medicaid expansion have seen substantial reductions in their uninsured rates, indicating that the ACA’s other reforms are increasing insurance coverage. The law’s provision allowing young adults to stay on a parent’s plan until age 26 years has also played a contributing role, covering an estimated 2.3 million people after it took effect in late 2010.22

Figure 2.
Decline in Adult Uninsured Rate From 2013 to 2015 vs 2013 Uninsured Rate by State

Data are derived from the Gallup-Healthways Well-Being Index as reported by Witters23 and reflect uninsured rates for individuals 18 years or older. Dashed lines reflect the result of an ordinary least squares regression relating the change in the uninsured rate from 2013 to 2015 to the level of the uninsured rate in 2013, run separately for each group of states. The 29 states in which expanded coverage took effect before the end of 2015 were categorized as Medicaid expansion states, and the remaining 21 states were categorized as Medicaid nonexpansion states.
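
For readers curious about how the dashed lines in Figure 2 are produced, here is a minimal sketch of the per-group ordinary least squares fit the caption describes. The state values below are invented placeholders; the real inputs are the Gallup-Healthways uninsured rates from reference 23.

```python
import numpy as np

# Hypothetical stand-in data: 2013 uninsured rate (%) and the 2013-2015 change
# (percentage points) for a few states in each group. Real values come from
# the Gallup-Healthways Well-Being Index reported by Witters (reference 23).
uninsured_2013 = {"expansion": np.array([14.0, 17.5, 20.1, 11.2]),
                  "nonexpansion": np.array([16.3, 21.0, 18.4, 24.7])}
change_2013_2015 = {"expansion": np.array([-7.1, -8.9, -10.2, -5.0]),
                    "nonexpansion": np.array([-4.2, -6.0, -5.1, -7.3])}

# Fit change = a + b * level separately for each group, as in the caption.
for group in ("expansion", "nonexpansion"):
    X = np.column_stack([np.ones_like(uninsured_2013[group]), uninsured_2013[group]])
    coef, *_ = np.linalg.lstsq(X, change_2013_2015[group], rcond=None)
    print(f"{group}: intercept={coef[0]:.2f}, slope={coef[1]:.2f}")
```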


Early evidence indicates that expanded coverage is improving access to treatment, financial security, and health for the newly insured. Following the coverage expansion through early 2015, nonelderly adults experienced substantial improvements in the share of individuals who have a personal physician (increase of 3.5 percentage points) and easy access to medicine (increase of 2.4 percentage points) and substantial decreases in the share who are unable to afford care (decrease of 5.5 percentage points) and reporting fair or poor health (decrease of 3.4 percentage points) relative to the pre-ACA trend.24 Similarly, research has found that Medicaid expansion improves the financial security of the newly insured (for example, by reducing the amount of debt sent to a collection agency by an estimated $600-$1000 per person gaining Medicaid coverage).26,27 Greater insurance coverage appears to have been achieved without negative effects on the labor market, despite widespread predictions that the law would be a “job killer.” Private-sector employment has increased in every month since the ACA became law, and rigorous comparisons of Medicaid expansion and nonexpansion states show no negative effects on employment in expansion states.28-30

The law has also greatly improved health insurance coverage for people who already had it. Coverage offered on the individual market or to small businesses must now include a core set of health care services, including maternity care and treatment for mental health and substance use disorders, services that were sometimes not covered at all previously.31 Most private insurance plans must now cover recommended preventive services without cost-sharing, an important step in light of evidence demonstrating that many preventive services were underused.5,6 This includes women’s preventive services, guaranteeing an estimated 55.6 million women coverage of services such as contraception and screening and counseling for domestic and interpersonal violence.32 In addition, families now have far better protection against catastrophic costs related to health care. Lifetime limits on coverage are now illegal, and annual limits typically are as well. Instead, most plans must cap enrollees’ annual out-of-pocket spending, a provision that has helped substantially reduce the share of people with employer-provided coverage lacking real protection against catastrophic costs (Figure 3).33 The law is also phasing out the Medicare Part D coverage gap. Since 2010, more than 10 million Medicare beneficiaries have saved more than $20 billion as a result.34

Figure 3.
Percentage of Workers With Employer-Based Single Coverage Without an Annual Limit on Out-of-pocket Spending

Data from the Kaiser Family Foundation/Health Research and Education Trust Employer Health Benefits Survey.33


Before the ACA, the health care system was dominated by “fee-for-service” payment systems, which often penalized health care organizations and health care professionals who found ways to deliver care more efficiently, while failing to reward those who improved the quality of care. The ACA has changed the health care payment system in several important ways. The law modified the rates paid to many of the organizations and professionals that provide Medicare services, and to Medicare Advantage plans, to better align them with the actual costs of providing care. Research on how past changes in Medicare payment rates have affected private payment rates implies that these changes in Medicare payment policy are helping decrease prices in the private sector as well.35,36 The ACA also included numerous policies to detect and prevent health care fraud, including increased scrutiny prior to enrollment in Medicare and Medicaid for health care entities that pose a high risk of fraud, stronger penalties for crimes involving losses in excess of $1 million, and additional funding for antifraud efforts. The ACA has also widely deployed “value-based payment” systems in Medicare that tie fee-for-service payments to the quality and efficiency of the care delivered by health care organizations and health care professionals. In parallel with these efforts, my administration has worked to foster a more competitive market by increasing transparency around the prices charged and the quality of care delivered.

Most importantly over the long run, the ACA is moving the health care system toward “alternative payment models” that hold health care entities accountable for outcomes. These models include bundled payment models that make a single payment for all of the services provided during a clinical episode and population-based models like accountable care organizations (ACOs) that base payment on the results health care organizations and health care professionals achieve for all of their patients’ care. The law created the Center for Medicare and Medicaid Innovation (CMMI) to test alternative payment models and bring them to scale if they are successful, as well as a permanent ACO program in Medicare. Today, an estimated 30% of traditional Medicare payments flow through alternative payment models that broaden the focus of payment beyond individual services or a particular entity, up from essentially none in 2010.37 These models are also spreading rapidly in the private sector, and their spread will likely be accelerated by the physician payment reforms in MACRA.38,39

Trends in health care costs and quality under the ACA have been promising (Figure 4).1,40 From 2010 through 2014, mean annual growth in real per-enrollee Medicare spending has actually been negative, down from a mean of 4.7% per year from 2000 through 2005 and 2.4% per year from 2006 to 2010 (growth from 2005 to 2006 is omitted to avoid including the rapid growth associated with the creation of Medicare Part D).1,40 Similarly, mean real per-enrollee growth in private insurance spending has been 1.1% per year since 2010, compared with a mean of 6.5% from 2000 through 2005 and 3.4% from 2005 to 2010.1,40

Figure 4.
Rate of Change in Real per-Enrollee Spending by Payer

Data are derived from the National Health Expenditure Accounts.1 Inflation adjustments use the Gross Domestic Product Price Index reported in the National Income and Product Accounts.40 The mean growth rate for Medicare spending reported for 2005 through 2010 omits growth from 2005 to 2006 to exclude the effect of the creation of Medicare Part D.
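
To make the caption’s procedure concrete, here is a minimal sketch of deflating a nominal per-enrollee spending series with a price index and averaging the year-over-year growth rates. The numbers below are invented placeholders; the real series are the NHE and NIPA data cited above (references 1 and 40).

```python
import numpy as np

# Hypothetical per-enrollee nominal spending ($) and GDP price index by year,
# standing in for the National Health Expenditure Accounts and the National
# Income and Product Accounts series.
years = np.array([2010, 2011, 2012, 2013, 2014])
nominal_spending = np.array([11000.0, 11200.0, 11350.0, 11500.0, 11700.0])
gdp_price_index = np.array([100.0, 102.1, 104.0, 105.8, 107.7])

# Deflate to real dollars, then compute annual growth rates and their mean.
real_spending = nominal_spending / (gdp_price_index / 100.0)
annual_growth = real_spending[1:] / real_spending[:-1] - 1.0
print(f"Mean real per-enrollee growth, 2010-2014: {annual_growth.mean():.2%}")
```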


As a result, health care spending is likely to be far lower than expected. For example, relative to the projections the Congressional Budget Office (CBO) issued just before I took office, CBO now projects Medicare to spend 20%, or about $160 billion, less in 2019 alone.41,42 The implications for families’ budgets of slower growth in premiums have been equally striking. Had premiums increased since 2010 at the same mean rate as the preceding decade, the mean family premium for employer-based coverage would have been almost $2600 higher in 2015.33 Employees receive much of those savings through lower premium costs, and economists generally agree that those employees will receive the remainder as higher wages in the long run.43 Furthermore, while deductibles have increased in recent years, they have increased no faster than in the years preceding 2010.44 Multiple sources also indicate that the overall share of health care costs that enrollees in employer coverage pay out of pocket has been close to flat since 2010 (Figure 5),45-48 most likely because the continued increase in deductibles has been canceled out by a decline in co-payments.
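
The $2600 claim is a simple compounding counterfactual: grow the 2010 premium at the pre-2010 mean rate for five years and compare with the actual 2015 premium. A minimal sketch of that arithmetic, with round placeholder values standing in for the Kaiser/HRET survey data (reference 33); the growth rate here is an assumption for illustration:

```python
# Illustrative inputs standing in for the Kaiser/HRET survey data (reference 33).
premium_2010 = 13770.0         # mean family premium, employer coverage, 2010 ($)
premium_2015_actual = 17545.0  # actual mean family premium, 2015 ($)
pre_2010_mean_growth = 0.08    # assumed mean annual growth over the prior decade

# Counterfactual: premiums keep growing at the pre-2010 rate for five years.
premium_2015_counterfactual = premium_2010 * (1 + pre_2010_mean_growth) ** 5
savings = premium_2015_counterfactual - premium_2015_actual
print(f"Counterfactual 2015 premium: ${premium_2015_counterfactual:,.0f}")
print(f"Implied savings per family:  ${savings:,.0f}")
```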

Figure 5.
Out-of-pocket Spending as a Percentage of Total Health Care Spending for Individuals Enrolled in Employer-Based Coverage

Data for the series labeled Medical Expenditure Panel Survey (MEPS) were derived from the MEPS Household Component and reflect the ratio of out-of-pocket expenditures to total expenditures for nonelderly individuals reporting full-year employer coverage. Data for the series labeled Health Care Cost Institute (HCCI) were derived from the analysis of the HCCI claims database reported in Herrera et al,45 HCCI 2015,46 and HCCI 2015;47 to capture data revisions, the most recent value reported for each year was used. Data for the series labeled Claxton et al were derived from the analyses of the Truven MarketScan claims database reported by Claxton et al 2016.48


At the same time, the United States has seen important improvements in the quality of care. The rate of hospital-acquired conditions (such as adverse drug events, infections, and pressure ulcers) has declined by 17%, from 145 per 1000 discharges in 2010 to 121 per 1000 discharges in 2014.49 Using prior research on the relationship between hospital-acquired conditions and mortality, the Agency for Healthcare Research and Quality has estimated that this decline in the rate of hospital-acquired conditions has prevented a cumulative 87 000 deaths over 4 years.49 The rate at which Medicare patients are readmitted to the hospital within 30 days after discharge has also decreased sharply, from a mean of 19.1% during 2010 to a mean of 17.8% during 2015 (Figure 6; written communication; March 2016; Office of Enterprise Data and Analytics, Centers for Medicare & Medicaid Services). The Department of Health and Human Services has estimated that lower hospital readmission rates resulted in 565 000 fewer total readmissions from April 2010 through May 2015.50,51

Figure 6.
Medicare 30-Day, All-Condition Hospital Readmission Rate

Data were provided by the Centers for Medicare & Medicaid Services (written communication; March 2016). The plotted series reflects a 12-month moving average of the hospital readmission rates reported for discharges occurring in each month.
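
The plotted series is a standard trailing 12-month moving average. A minimal sketch over an invented monthly series (the real monthly rates came from CMS, as noted above):

```python
import numpy as np

# Hypothetical monthly 30-day readmission rates (%); placeholder values only.
monthly_rates = np.array([19.1, 19.0, 19.2, 18.9, 18.8, 18.7, 18.9, 18.6,
                          18.5, 18.4, 18.3, 18.2, 18.1, 18.0, 17.9, 17.8])

# Trailing 12-month moving average, as plotted in the figure.
window = 12
smoothed = np.convolve(monthly_rates, np.ones(window) / window, mode="valid")
print(np.round(smoothed, 2))  # one smoothed value per complete 12-month window
```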


While the Great Recession and other factors played a role in recent trends, the Council of Economic Advisers has found evidence that the reforms introduced by the ACA helped both slow health care cost growth and drive improvements in the quality of care.44,52 The contribution of the ACA’s reforms is likely to increase in the years ahead as its tools are used more fully and as the models already deployed under the ACA continue to mature.

BUILDING ON PROGRESS TO DATE

I am proud of the policy changes in the ACA and the progress that has been made toward a more affordable, high-quality, and accessible health care system. Despite this progress, too many Americans still strain to pay for their physician visits and prescriptions, cover their deductibles, or pay their monthly insurance bills; struggle to navigate a complex, sometimes bewildering system; and remain uninsured. More work to reform the health care system is necessary, with some suggestions offered below.

First, many of the reforms introduced in recent years are still some years from reaching their maximum effect. With respect to the law’s coverage provisions, the early years’ experience demonstrates that the Health Insurance Marketplace is a viable source of coverage for millions of Americans and will be for decades to come. However, both insurers and policy makers are still learning about the dynamics of an insurance market that includes all people regardless of any preexisting conditions, and further adjustments and recalibrations will likely be needed, as can be seen in some insurers’ proposed Marketplace premiums for 2017. In addition, a critical piece of unfinished business is in Medicaid. As of July 1, 2016, 19 states have yet to expand their Medicaid programs. I hope that all 50 states take this option and expand coverage for their citizens in the coming years, as they did in the years following the creation of Medicaid and CHIP.

With respect to delivery system reform, the reorientation of the US health care payment systems toward quality and accountability has made significant strides forward, but it will take continued hard work to achieve my administration’s goal of having at least half of traditional Medicare payments flowing through alternative payment models by the end of 2018. Tools created by the ACA—including CMMI and the law’s ACO program—and the new tools provided by MACRA will play central roles in this important work. In parallel, I expect continued bipartisan support for identifying the root causes and cures for diseases through the Precision Medicine and BRAIN initiatives and the Cancer Moonshot, which are likely to have profound benefits for the 21st-century US health care system and health outcomes.

Second, while the ACA has greatly improved the affordability of health insurance coverage, surveys indicate that many of the remaining uninsured individuals want coverage but still report being unable to afford it.53,54 Some of these individuals may be unaware of the financial assistance available under current law, whereas others would benefit from congressional action to increase financial assistance to purchase coverage, which would also help middle-class families who have coverage but still struggle with premiums. The steady-state cost of the ACA’s coverage provisions is currently projected to be 28% below CBO’s original projections, due in significant part to lower-than-expected Marketplace premiums, so increased financial assistance could make coverage even more affordable while still keeping federal costs below initial estimates.55,56

Third, more can and should be done to enhance competition in the Marketplaces. For most Americans in most places, the Marketplaces are working. The ACA supports competition and has encouraged the entry of hospital-based plans, Medicaid managed care plans, and other plans into new areas. As a result, the majority of the country has benefited from competition in the Marketplaces, with 88% of enrollees living in counties with at least 3 issuers in 2016, which helps keep costs in these areas low.57,58 However, the remaining 12% of enrollees live in areas with only 1 or 2 issuers. Some parts of the country have struggled with limited insurance market competition for many years, which is one reason that, in the original debate over health reform, Congress considered and I supported including a Medicare-like public plan. Public programs like Medicare often deliver care more cost-effectively by curtailing administrative overhead and securing better prices from providers.59,60 The public plan did not make it into the final legislation. Now, based on experience with the ACA, I think Congress should revisit a public plan to compete alongside private insurers in areas of the country where competition is limited. Adding a public plan in such areas would strengthen the Marketplace approach, giving consumers more affordable options while also creating savings for the federal government.61

Fourth, although the ACA included policies to help address prescription drug costs, like more substantial Medicaid rebates and the creation of a pathway for approval of biosimilar drugs, those costs remain a concern for Americans, employers, and taxpayers alike—particularly in light of the 12% increase in prescription drug spending that occurred in 2014.1 In addition to administrative actions like testing new ways to pay for drugs, legislative action is needed.62 Congress should act on proposals like those included in my fiscal year 2017 budget to increase transparency around manufacturers’ actual production and development costs, to increase the rebates manufacturers are required to pay for drugs prescribed to certain Medicare and Medicaid beneficiaries, and to give the federal government the authority to negotiate prices for certain high-priced drugs.63

There is another important role for Congress: it should avoid moving backward on health reform. While I have always been interested in improving the law—and signed 19 bills that do just that—my administration has spent considerable time in the last several years opposing more than 60 attempts to repeal parts or all of the ACA, time that could have been better spent working to improve our health care system and economy. In some instances, the repeal efforts have been bipartisan, including the effort to roll back the excise tax on high-cost employer-provided plans. Although this provision can be improved, such as through the reforms I proposed in my budget, the tax creates strong incentives for the least-efficient private-sector health plans to engage in delivery system reform efforts, with major benefits for the economy and the budget. It should be preserved.64 In addition, Congress should not advance legislation that undermines the Independent Payment Advisory Board, which will provide a valuable backstop if rapid cost growth returns to Medicare.

LESSONS FOR FUTURE POLICY MAKERS

While historians will draw their own conclusions about the broader implications of the ACA, I have my own. These lessons learned are not just for posterity: I have put them into practice in both health care policy and other areas of public policy throughout my presidency.

The first lesson is that any change is difficult, but it is especially difficult in the face of hyperpartisanship. Republicans reversed course and rejected their own ideas once they appeared in the text of a bill that I supported. For example, they supported a fully funded risk-corridor program and a public plan fallback in the Medicare drug benefit in 2003 but opposed them in the ACA. They supported the individual mandate in Massachusetts in 2006 but opposed it in the ACA. They supported the employer mandate in California in 2007 but opposed it in the ACA—and then opposed the administration’s decision to delay it. Moreover, through inadequate funding, opposition to routine technical corrections, excessive oversight, and relentless litigation, Republicans undermined ACA implementation efforts. We could have covered more ground more quickly with cooperation rather than obstruction. It is not obvious that this strategy has paid political dividends for Republicans, but it has clearly come at a cost for the country, most notably for the estimated 4 million Americans left uninsured because they live in GOP-led states that have yet to expand Medicaid.65

The second lesson is that special interests pose a continued obstacle to change. We worked successfully with some health care organizations and groups, such as major hospital associations, to redirect excessive Medicare payments to federal subsidies for the uninsured. Yet others, like the pharmaceutical industry, oppose any change to drug pricing, no matter how justifiable and modest, because they believe it threatens their profits.66 We need to continue to tackle special interest dollars in politics. But we also need to reinforce the sense of mission in health care that brought us an affordable polio vaccine and widely available penicillin.

The third lesson is the importance of pragmatism in both legislation and implementation. Simpler approaches to addressing our health care problems exist at both ends of the political spectrum: the single-payer model vs government vouchers for all. Yet the nation typically reaches its greatest heights when we find common ground between the public and private good and adjust along the way. That was my approach with the ACA. We engaged with Congress to identify the combination of proven health reform ideas that could pass and have continued to adapt them since. This includes abandoning parts that do not work, like the voluntary long-term care program included in the law. It also means shutting down and restarting a process when it fails. When HealthCare.gov did not work on day 1, we brought in reinforcements, were brutally honest in assessing problems, and worked relentlessly to get it operating. Both the process and the website were successful, and we created a playbook we are applying to technology projects across the government.

While the lessons enumerated above may seem daunting, the ACA experience nevertheless makes me optimistic about this country’s capacity to make meaningful progress on even the biggest public policy challenges. Many moments serve as reminders that a broken status quo is not the nation’s destiny. I often think of a letter I received from Brent Brown of Wisconsin. He did not vote for me and he opposed “ObamaCare,” but Brent changed his mind when he became ill, needed care, and got it thanks to the law.67 Or take Governor John Kasich’s explanation for expanding Medicaid: “For those that live in the shadows of life, those who are the least among us, I will not accept the fact that the most vulnerable in our state should be ignored. We can help them.”68 Or look at the actions of countless health care providers who have made our health system more coordinated, quality-oriented, and patient-centered. I will repeat what I said 4 years ago when the Supreme Court upheld the ACA: I am as confident as ever that looking back 20 years from now, the nation will be better off because of having the courage to pass this law and persevere. As this progress with health care reform in the United States demonstrates, faith in responsibility, belief in opportunity, and ability to unite around common values are what make this nation great.

ARTICLE INFORMATION

Corresponding Author: Barack Obama, JD, The White House, 1600 Pennsylvania Ave NW, Washington, DC 20500 (press@who.eop.gov).

Published Online: July 11, 2016. doi:10.1001/jama.2016.9797.

Conflict of Interest Disclosures: The author has completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. The author’s public financial disclosure report for calendar year 2015 may be viewed at https://www.whitehouse.gov/sites/whitehouse.gov/files/documents/oge_278_cy_2015_obama_051616.pdf.

Additional Contributions: I thank Matthew Fiedler, PhD, and Jeanne Lambrew, PhD, who assisted with planning, writing, and data analysis. I also thank Kristie Canegallo, MA; Katie Hill, BA; Cody Keenan, MPP; Jesse Lee, BA; and Shailagh Murray, MS, who assisted with editing the manuscript. All of the individuals who assisted with the preparation of the manuscript are employed by the Executive Office of the President.

REFERENCES

1. Centers for Medicare & Medicaid Services. National Health Expenditure Data: NHE tables. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical.html. Published December 3, 2015. Accessed June 14, 2016.
2. Anderson GF, Frogner BK. Health spending in OECD countries: obtaining value per dollar. Health Aff (Millwood). 2008;27(6):1718-1727.
3. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending: part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273-287.
4. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending: part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288-298.
5. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
6. Commonwealth Fund. Why not the best? Results from the National Scorecard on US Health System Performance, 2008. http://www.commonwealthfund.org/publications/fund-reports/2008/jul/why-not-the-best–results-from-the-national-scorecard-on-u-s–health-system-performance–2008. Published July 1, 2008. Accessed June 14, 2016.
7. Cohen RA, Martinez ME, Zammitti EP. Early release of selected estimates based on data from the National Health Interview Survey, 2015. National Center for Health Statistics. http://www.cdc.gov/nchs/nhis/releases/released201605.htm. Published May 24, 2016. Accessed June 14, 2016.
8. Cohen RA, Makuc DM, Bernstein AB, Bilheimer LT, Powell-Griner E. Health insurance coverage trends, 1959-2007: estimates from the National Health Interview Survey. National Center for Health Statistics. http://www.cdc.gov/nchs/data/nhsr/nhsr017.pdf. Published July 1, 2009. Accessed June 14, 2016.
9. Cohen RA. Trends in health care coverage and insurance for 1968-2011. National Center for Health Statistics. http://www.cdc.gov/nchs/health_policy/trends_hc_1968_2011.htm. Published November 15, 2012. Accessed June 14, 2016.
10. Council of Economic Advisers. Methodological appendix: methods used to construct a consistent historical time series of health insurance coverage. https://www.whitehouse.gov/sites/default/files/docs/longtermhealthinsuranceseriesmethodologyfinal.pdf. Published December 18, 2014. Accessed June 14, 2016.
11. Baicker K, Taubman SL, Allen HL, et al; Oregon Health Study Group. The Oregon experiment: effects of Medicaid on clinical outcomes. N Engl J Med. 2013;368(18):1713-1722.
12. Sommers BD, Baicker K, Epstein AM. Mortality and access to care among adults after state Medicaid expansions. N Engl J Med. 2012;367(11):1025-1034.
13. Sommers BD, Long SK, Baicker K. Changes in mortality after Massachusetts health care reform: a quasi-experimental study. Ann Intern Med. 2014;160(9):585-593.
14. Hadley J, Holahan J, Coughlin T, Miller D. Covering the uninsured in 2008: current costs, sources of payment, and incremental costs. Health Aff (Millwood). 2008;27(5):w399-w415.
15. Fairlie RW, Kapur K, Gates S. Is employer-based health insurance a barrier to entrepreneurship? J Health Econ. 2011;30(1):146-162.
16. Dillender M. Do more health insurance options lead to higher wages? evidence from states extending dependent coverage. J Health Econ. 2014;36:84-97.
17. Lee J. “I’m here because of Natoma.” The White House. https://www.whitehouse.gov/blog/2010/03/15/im-here-because-natoma-0. Published March 15, 2010. Accessed June 20, 2016.
18. Centers for Disease Control and Prevention. Trends in the prevalence of tobacco use: National YRBS: 1991–2015. http://www.cdc.gov/healthyyouth/data/yrbs/pdf/trends/2015_us_tobacco_trend_yrbs.pdf. Updated June 9, 2016. Accessed June 14, 2016.
19. Oberlander J. Long time coming: why health reform finally passed. Health Aff (Millwood). 2010;29(6):1112-1116.
20. Courtemanche C, Marton J, Ukert B, Yelowitz A, Zapata D. Impacts of the Affordable Care Act on health insurance coverage in Medicaid expansion and non-expansion states [NBER working paper No. 22182]. National Bureau of Economic Research. http://www.nber.org/papers/w22182. Published April 2016. Accessed June 14, 2016.
21. Blumberg LJ, Garrett B, Holahan J. Estimating the counterfactual: how many uninsured adults would there be today without the ACA? Inquiry. 2016;53(3):1-13.
22. Uberoi N, Finegold K, Gee E. Health insurance coverage and the Affordable Care Act, 2010-2016. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://aspe.hhs.gov/sites/default/files/pdf/187551/ACA2010-2016.pdf. Published March 3, 2016. Accessed June 14, 2016.
23. Witters D. Arkansas, Kentucky set pace in reducing uninsured rate. Gallup. http://www.gallup.com/poll/189023/arkansas-kentucky-set-pace-reducing-uninsured-rate.aspx. Published February 4, 2016. Accessed June 14, 2016.
24. Sommers BD, Gunja MZ, Finegold K, Musco T. Changes in self-reported insurance coverage, access to care, and health under the Affordable Care Act. JAMA. 2015;314(4):366-374.
25. Shartzer A, Long SK, Anderson N. Access to care and affordability have improved following Affordable Care Act implementation; problems remain. Health Aff (Millwood). 2016;35(1):161-168.
26. Dussault N, Pinkovskiy M, Zafar B. Is health insurance good for your financial health? Federal Reserve Bank of New York. http://libertystreeteconomics.newyorkfed.org/2016/06/is-health-insurance-good-for-your-financial-health.html. Published June 6, 2016. Accessed June 14, 2016.
27. Hu L, Kaestner R, Mazumder B, Miller S, Wong A. The effect of the Patient Protection and Affordable Care Act Medicaid expansions on financial well-being [NBER working paper No. 22170]. National Bureau of Economic Research. http://www.nber.org/papers/w22170. Published April 2016. Accessed June 14, 2016.
28. Bureau of Labor Statistics. Employment, hours, and earnings from the Current Employment Statistics survey (national): Series ID CES0500000001. http://data.bls.gov/timeseries/CES0500000001. Accessed June 14, 2016.
29. Kaestner R, Garrett B, Gangopadhyaya A, Fleming C. Effects of the ACA Medicaid expansions on health insurance coverage and labor supply [NBER working paper No. 21836]. National Bureau of Economic Research. http://www.nber.org/papers/w21836. Published December 2015. Accessed June 14, 2016.
30. Pinkovskiy M. The Affordable Care Act and the labor market: a first look. Federal Reserve Bank of New York Staff Reports No. 746. https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr746.pdf. Published October 2015. Accessed June 14, 2016.
31. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. Essential health benefits: individual market coverage. https://aspe.hhs.gov/basic-report/essential-health-benefits-individual-market-coverage. Published December 16, 2011. Accessed June 20, 2016.
32. Simmons A, Taylor J, Finegold K, Yabroff R, Gee E, Chappel E. The Affordable Care Act: promoting better health for women. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://aspe.hhs.gov/pdf-report/affordable-care-act-promoting-better-health-women. Published June 14, 2016. Accessed June 18, 2016.
33. Claxton G, Rae M, Long M, et al. Employer health benefits: 2015 annual survey. The Henry J. Kaiser Family Foundation. http://files.kff.org/attachment/report-2015-employer-health-benefits-survey. Published September 22, 2015. Accessed June 14, 2016.
34. Centers for Medicare & Medicaid Services. More than 10 million people with Medicare have saved over $20 billion on prescription drugs since 2010 [news release]. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Press-releases/2016-Press-releases-items/2016-02-08.html. Published February 8, 2016. Accessed June 14, 2016.
35. White C. Contrary to cost-shift theory, lower Medicare hospital payment rates for inpatient care lead to lower private payment rates. Health Aff (Millwood). 2013;32(5):935-943.
36. Clemens J, Gottlieb JD. In the shadow of a giant: Medicare’s influence on private payment systems. http://www.joshuagottlieb.ca/ShadowOfAGiant.pdf. Accessed June 29, 2016.
37. US Department of Health and Human Services. HHS reaches goal of tying 30 percent of Medicare payments to quality ahead of schedule [news release]. http://www.hhs.gov/about/news/2016/03/03/hhs-reaches-goal-tying-30-percent-medicare-payments-quality-ahead-schedule.html. Published March 3, 2016. Accessed June 14, 2016.
38. Muhlestein D. Growth and dispersion of accountable care organizations in 2015. Health Affairs Blog. http://healthaffairs.org/blog/2015/03/31/growth-and-dispersion-of-accountable-care-organizations-in-2015-2/. Published March 31, 2015. Accessed June 14, 2016.
39. Board of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds. 2015 Annual Report of the Board of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds. Washington, DC: Centers for Medicare & Medicaid Services; 2015.
40. Bureau of Economic Analysis. Table 1.1.4. Price indexes for gross domestic product. http://www.bea.gov/iTable/index_nipa.cfm. Accessed June 14, 2016.
41. Congressional Budget Office. The budget and economic outlook: fiscal years 2009 to 2019. https://www.cbo.gov/publication/41753. Published January 7, 2009. Accessed June 14, 2016.
42. Congressional Budget Office. Updated budget projections: 2016 to 2026. https://www.cbo.gov/publication/51384. Published March 24, 2016. Accessed June 14, 2016.
43. Congressional Budget Office. Private Health Insurance Premiums and Federal Policy. Washington, DC: Congressional Budget Office; 2016.
44. Furman J. Next steps for health care reform. White House. https://www.whitehouse.gov/sites/default/files/page/files/20151007_next_steps_health_care_reform.pdf. Published October 7, 2015. Accessed June 14, 2016.
45. Herrera CN, Gaynor M, Newman D, Town RJ, Parente ST. Trends underlying employer-sponsored health insurance growth for Americans younger than age sixty-five. Health Aff (Millwood). 2013;32(10):1715-1722.
46. Health Care Cost Institute. Out-of-pocket spending trends (2013). http://www.healthcostinstitute.org/issue-brief-out-pocket-spending-trends-2013. Published October 2014. Accessed June 14, 2016.
47. Health Care Cost Institute. 2014 health care cost and utilization report. http://www.healthcostinstitute.org/2014-health-care-cost-and-utilization-report. Published October 2015. Accessed June 14, 2016.
48. Claxton G, Levitt L, Long M. Payments for cost sharing increasing rapidly over time. Peterson-Kaiser Health System Tracker. http://www.healthsystemtracker.org/insight/payments-for-cost-sharing-increasing-rapidly-over-time/. Published April 12, 2016. Accessed June 14, 2016.
49. Agency for Healthcare Research and Quality. Saving lives and saving money: hospital-acquired conditions update. http://www.ahrq.gov/professionals/quality-patient-safety/pfp/interimhacrate2014.html. Updated December 2015. Accessed June 14, 2016.
50. Zuckerman R. Reducing avoidable hospital readmissions to create a better, safer health care system. US Department of Health and Human Services. http://www.hhs.gov/blog/2016/02/24/reducing-avoidable-hospital-readmissions.html. Published February 24, 2016. Accessed June 14, 2016.
51. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, observation, and the Hospital Readmissions Reduction Program. N Engl J Med. 2016;374(16):1543-1551.
52. Council of Economic Advisers. 2014 Economic Report of the President. Washington, DC: Council of Economic Advisers; 2014.
53. Shartzer A, Kenney GM, Long SK, Odu Y. A look at remaining uninsured adults as of March 2015. Urban Institute. http://hrms.urban.org/briefs/A-Look-at-Remaining-Uninsured-Adults-as-of-March-2015.html. Published August 18, 2015. Accessed June 14, 2016.
54. Undem P. Understanding the uninsured now. Robert Wood Johnson Foundation. http://www.rwjf.org/en/library/research/2015/06/understanding-the-uninsured-now.html. Published June 2015. Accessed June 14, 2016.
55. Congressional Budget Office. HR 4872, Reconciliation Act of 2010 (Final Health Care Legislation). https://www.cbo.gov/publication/21351. Published March 20, 2010. Accessed June 14, 2016.
56. Congressional Budget Office. Federal subsidies for health insurance coverage for people under age 65: 2016 to 2026. https://www.cbo.gov/publication/51385. Published March 24, 2016. Accessed June 14, 2016.
57. Sheingold S, Nguyen N, Chappel A. Competition and choice in the Health Insurance Marketplaces 2014-2015: impact on premiums. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://aspe.hhs.gov/basic-report/competition-and-choice-health-insurance-marketplaces-2014-2015-impact-premiums. Published August 30, 2015. Accessed June 14, 2016.
58. Avery K, Gardner M, Gee E, Marchetti-Bowick E, McDowell A, Sen A. Health plan choice and premiums in the 2016 Health Insurance Marketplace. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://aspe.hhs.gov/pdf-report/health-plan-choice-and-premiums-2016-health-insurance-marketplace. Published October 30, 2015. Accessed June 14, 2016.
59. Congressional Budget Office. Key issues in analyzing major health insurance proposals. https://www.cbo.gov/publication/41746. Published December 18, 2008. Accessed June 14, 2016.
60. Wallace J, Song Z. Traditional Medicare versus private insurance: how spending, volume, and price change at age sixty-five. Health Aff (Millwood). 2016;35(5):864-872.
61. Congressional Budget Office. Options for reducing the deficit: 2014 to 2023. https://www.cbo.gov/content/options-reducing-deficit-2014-2023. Published November 13, 2013. Accessed June 14, 2016.
62. Centers for Medicare & Medicaid Services. CMS proposes to test new Medicare Part B prescription drug models to improve quality of care and deliver better value for Medicare beneficiaries [news release]. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2016-Fact-sheets-items/2016-03-08.html. Published March 8, 2016. Accessed June 14, 2016.
63. Office of Management and Budget. Budget of the United States Government: Fiscal Year 2017. Washington, DC: Office of Management and Budget; 2016.
64. Furman J, Fiedler M. The Cadillac tax: a crucial tool for delivery-system reform. N Engl J Med. 2016;374(11):1008-1009.
65. Buettgens M, Holahan J, Recht H. Medicaid expansion, health coverage, and spending: an update for the 21 states that have not expanded eligibility. http://kff.org/medicaid/issue-brief/medicaid-expansion-health-coverage-and-spending-an-update-for-the-21-states-that-have-not-expanded-eligibility/. Published April 29, 2015. Accessed June 14, 2016.
66. Karlin-Smith S, Norman B. Pharma unleashes on Part B demo. Politico. http://www.politico.com/tipsheets/prescription-pulse/2016/05/pharma-unleashes-on-part-b-demo-214193. Published May 9, 2016. Accessed June 29, 2016.
67. Garunay M. Brent’s letter to the President: “You saved my life.” The White House. https://www.whitehouse.gov/blog/2016/03/03/brents-letter-president-you-saved-my-life. Published March 3, 2016. Accessed June 14, 2016.
68. Kasich JR. 2013 State of the State Address. Lima: Ohio State Legislature; 2013.

Elsevier is tricking authors into surrendering their rights

A recent post on the GOAL mailing list by Heather Morrison alerted me to the following sneaky aspect of Elsevier’s “open access” publishing practices.

To put it simply, Elsevier have distorted the widely recognized concept of open access, in which authors retain copyright in their work, give others permission to reuse it, and use publishers as a vehicle to distribute that work. In its place they offer “Elsevier access,” in which Elsevier, and not the authors, retains all rights not granted by the license. As a result, despite highlighting the “fact” that authors retain copyright, authors have ceded all decisions about how their work is used and about if and when to pursue legal action for misuse of their work. Crucially, authors who choose a non-commercial license make Elsevier the sole beneficiary of commercial reuse of their “open access” content.

For some historical context, when PLOS and BioMed Central launched open access journals over a decade ago, they adopted Creative Commons licenses, under which authors retain copyright in their work but grant in advance the right for others to republish and reuse that work, subject to restrictions that differ according to the license used. PLOS, BMC, and most true open access publishers use the CC-BY license, whose only condition is that any reuse must be accompanied by proper attribution.

When PLOS, BioMed Central and other true open access publishers began to enjoy financial success, established subscription publishers like Elsevier began to see a business opportunity in open access publishing, and began offering a variety of “open access” options, where authors pay an article-processing charge in order to make their work available under one of several licenses. The license choices at Elsevier include CC-BY, but also CC-BY-NC (which does not allow commercial reuse) and a bespoke Elsevier license that is even more limiting (nobody else can reuse or redistribute these works).

At PLOS, authors do not need to transfer any rights to the publisher, since the agreement of authors to license their work under CC-BY grants PLOS (and anyone else) all the rights it needs to publish the work. However, this is not true with more restrictive licenses like CC-BY-NC, which, by itself, does not give Elsevier the right to publish works. Thus, if either CC-BY-NC or Elsevier’s own license is used, the authors have to grant publishing rights to Elsevier.

However, as Morrison points out, the publishing agreement that Elsevier open access authors sign is far more restrictive. Instead of just granting Elsevier the right to publish their work:

Authors sign an exclusive license agreement, where authors have copyright but license exclusive rights in their article to the publisher**. 

**This includes the right for the publisher to make and authorize commercial use, please see “Rights granted to Elsevier” for more details.

(Text from Elsevier’s page on Copyright).

This is not a subtle distinction. Elsevier and other publishers that offer it routinely push CC-BY-NC to authors on the premise that authors don’t want to allow people to use their work for commercial purposes without permission. Normally that would be the case with a work licensed under CC-BY-NC. But because exclusive rights to publish works licensed with CC-BY-NC are transferred to Elsevier, the company, and not the authors, determines what commercial reuse is permissible. And, of course, it is Elsevier who profits from granting these rights.

It’s bad enough that Elsevier plays on misplaced fears of commercial reuse to convince authors not to grant the right to commercial reuse, which violates the spirit and goals of open access. But to convince people that they should retain the right to veto commercial reuses of their work, and then seize all those rights for themselves, is despicable.



The Imprinter of All Maladies

Any sufficiently convoluted explanation for biological phenomena is indistinguishable from epigenetics.

[Chart: use of the word “epigenetics” over time]

Epigenetics is everywhere. Nary a day goes by without some news story or press release telling us something it explains.

Why does autism run in families?  Epigenetics.
Why do you have trouble losing weight? Epigenetics.
Why are vaccines dangerous? Epigenetics.
Why is cancer so hard to fight? Epigenetics.
Why is a cure for cancer around the corner? Epigenetics.
Why might your parenting choices affect your great-grandchildren? Epigenetics.

Epigenetics is used as shorthand in the popular press for any of a loosely connected set of phenomena purported to result in experience being imprinted in DNA and transmitted across time and generations. Its place in our lexicon has grown as biochemical discoveries have given ideas of extra-genetic inheritance an air of molecular plausibility.

Biologists now invoke epigenetics to explain all manner of observations that lie outside their current ken. The word pops up frequently among non-scientists in discussions of heredity. And all manner of crackpots slap “epigenetics” on their fringy ideas to give them a veneer of credibility. But epigenetics has achieved buzzword status far faster, and to a far greater extent, than current science justifies, earning the disdain of scientists (like me) who study how information is encoded, transferred and read out across cellular and organismal generations.

This simmering conflict came to a head last week around an article in The New Yorker, “Same but Different” by Siddhartha Mukherjee, that juxtaposed a meditation on the differences between his mother and her identical twin with a discussion of the research of Rockefeller University’s David Allis on the biochemistry of DNA and the proteins that encapsulate it in cells, which he and others believe provides a second mechanism for the encoding and transmission of genetic information.

Although Mukherjee hedges throughout his piece, the clear implication of the story is that Allis’s work provides an explanation for differences that arise between genetically identical individuals, and even opens the door to legitimizing the long-discredited ideas of the 19th-century naturalist Jean-Baptiste Lamarck, who thought that organisms could pass beneficial traits acquired during their lifetimes on to their offspring.

The piece earned a sharp rebuke from many scientists, most notably Mark Ptashne who has long led the anti-epigenetics camp, and John Greally, who published a lengthy take-down of Mukherjee’s piece on the blog of evolutionary biologist Jerry Coyne.

The dispute centers on the process of gene regulation, wherein the levels of specific sets of genes are tuned to confer distinct properties on different sets of cells and tissues during development, and in response to internal and external stimuli. Gene regulation is central to the encoding of organismal form and function in DNA, as it allows different cells and even different individuals of a species to have identical DNA and yet manifest different phenotypes.

Ptashne has studied the molecular basis for gene regulation for fifty years. His and Greally’s critique of Mukherjee, or really of Allis, is rather technical, and one could quibble about some of the specifics. But their main points are simple and difficult to refute:

  • There is essentially no evidence to support the idea that chemical modification of DNA and/or its accompanying proteins is used to encode and transmit information over long periods of time.
  • Rather than representing a separate system for storing and conveying information, a wide range of experiments suggests that the primary role of the biochemistry in question is to execute gene expression programs encoded in DNA and read out by a diverse set of proteins known as transcription factors that bind to specific sequences in DNA and regulate the expression of nearby genes.

In one way this debate is incredibly important because it is ultimately about getting the science right. Mukherjee’s piece contained several inaccurate statements and, by focusing on one aspect of Allis’s work, gave a woefully incomplete picture of our current understanding of gene regulation.

Any system for conveying information about the genome – which is what Mukherjee is writing about – has to have some way to achieve genomic specificity so that the expression of genes can be tuned up or down in a non-random manner. Transcription factors, which bind to specific DNA sequences, provide a link between the specific sequence of DNA and the cellular machines responsible for turning information in DNA into proteins and other biomolecules. Small RNAs, which can bind to complementary sequences in DNA, also have this capacity.

But there is scant evidence for sequence specificity in the activities of the proteins that modify DNA and the nucleosomes around which it is wrapped. Rather they get their specificity from transcription factors and small RNAs. That doesn’t render this biochemistry unimportant – the broad conservation of proteins involved in modifying histones shows they play important roles – but ascribing regulatory primacy to DNA methylation and histone modifications is not consistent with our current understanding of gene regulation.

Something is, however, getting lost in this back-and-forth, as one might come away with the impression that this is a disagreement about whether cells and organisms can transmit information in a manner above and beyond DNA sequence. And this is unfortunate, because there really is no question about this. Ptashne and Allis/Mukherjee are arguing about the molecular details of how it happens and about how important different phenomena are.

Various forms of non-Mendelian information transfer are well established. The most important happens in every animal generation, as eggs contain not only DNA from the mother, but also a wide range of proteins, RNAs and small molecules that drive the earliest stages of embryonic development. The particular cocktail left by the mother can have profound effects on the new organism – so-called “maternal effects”. These effects can be the result of the mother’s genotype, the environment in which she lives, and, in various ways, her experiences during her life. (Such phenomena are not limited to multicellular critters – single-celled organisms distribute many molecules asymmetrically when they divide, conferring different phenotypes on their genetically identical offspring.)

Many maternal effects have been studied in great detail, and in most cases the transmission of state involves the transmission of different concentrations and activities of proteins (including transcription factors) and RNAs. That is, the transmitted DNA is identical, but the state of the machinery that reads out the DNA is different, resulting in different outcomes.

However, there are some good examples in which modifications to DNA play an important role in the transmission of information across generations – most notably “imprinting”, in which an organism preferentially utilizes the copy of a gene it got from one of its parents, to the exclusion of the other, in a way that appears to be independent of the gene’s sequence. Imprinting, which is a relatively rare, but sometimes important, phenomenon, appears to arise from parent-specific methylation of DNA.

Could the histone modifications that Allis studies and Mukherjee focuses on also carry information across cell divisions and generations? Sure. Our understanding of gene regulation is still fairly primitive, and there is plenty of room for the discovery of important inheritance mechanisms involving histone modification. We have to keep an open mind. But the point the critics of Mukherjee are really making is that given what is known today about mechanisms of gene regulation, it is bizarre bordering on irresponsible to focus on a mechanism of inheritance that only might be real.

And Mukherjee is far from the only one to have fallen into this trap. Which brings me to what I think is the most interesting question here: why does this particular type of epigenetic inheritance involving an obscure biochemical process have such strong appeal? I think there are several things going on.

First, the idea of a “histone code” that supersedes the information in DNA exists (at least for now) in a kind of limbo: enough biochemical specificity to give it credibility and a ubiquity that makes it seem important, but sufficient mystery about what it actually is and how it might work that people can imbue it with whatever properties they want. And scientists and non-scientists alike have leapt into this molecular biological sweet spot, using this manifestation of the idea of epigenetics as a generic explanation for things they can’t understand, as a reason to hope that things they want to be true might really be, and as a difficult-to-refute, almost quasi-religious argument for the plausibility of almost any idea linked to heredity.

But there is also something more specifically appealing about this particular idea. I think it stems from the fact that epigenetics in general, and the idea of a “histone code” in particular, provide a strong counterforce to the rampant genetic determinism that has dominated the genomic age. People don’t like to think that everything about the way they are and will be is determined by their DNA, and the idea that there is some magic wrapper around DNA that can be shaped by experience to override what is written in the primary code is quite alluring.

Of course DNA is not destiny, and we don’t need to invoke etchings on DNA to get out of it. But I have a feeling it will take more than a few arch retorts from transcription factor extremists to erase epigenetics from the zeitgeist.

Posted in epigenetics, gene regulation, My lab, science | Comments closed

PLOS, open access and scientific societies

Several people have noted that, in my previous post dealing with PLOS’s business, I didn’t address a point that came up in a number of threads regarding the relative virtues of PLOS and scientific societies – the basic point being that people should publish in society journals because they do good things with the money (run meetings, support fellowships and grants) and that PLOS is to be shunned because it “doesn’t give back to the community”.


I agree that many societies do good things to build and support their communities. But sponsoring meetings and fellowships is not the only way to give back to the community. PLOS was founded to make science publishing work better for scientists and the public, and we are singularly devoted to that goal. This means publishing open access journals that succeed as journals. This means demonstrating to a skeptical publishing and funding community that it’s possible to run a successful and stable business that publishes exclusively open access journals. This means working to change the way peer review works and the ways scientists are assessed. This means lobbying to promote laws and policies that increase access to the scientific literature.

Because of PLOS and other open access pioneers, around 20% of new papers are immediately available for people around the world to access without paywalls. PLOS’s success as a publisher has served as a model for other publishers and journals to adopt open access. PLOS’s promotion of open access and our lobbying helped make funder “public access” policies, which make millions of papers freely available, a reality. And PLOS is now working to promote instant publication, open peer review and other publishing changes that will not only make science more open, but get science out more quickly and make the ways we evaluate papers and each other more effective. This is what we give back to science. People are, of course, free not to value these things, to question whether PLOS’s role in them was significant, or to argue that we’ve achieved our goals and are no longer essential. But it’s ridiculous to say that PLOS doesn’t give back to the community just because we don’t sponsor meetings.

Now none of this should be construed as my saying people shouldn’t publish in society journals – provided they are open access, of course. One of the reasons we started PLOS was because, back in the late 1990’s, most scientific societies rejected the idea that they could take advantage of the Internet’s power to make their work more widely available by using a different business model. We felt they were wrong, and one of PLOS’s main goals has always been to demonstrate that an open access business model could work for them – and I’m thrilled that in many cases this has worked – see open access society journals like G3 and mBio, journals that I wholeheartedly and unambiguously support.

However, a lot of society journals – most, in fact – are not open access. And no matter how many meetings and fellowships the revenue from paywalled journals supports, they are not worth it. I’ve yet to see a society whose good works were so good that they outweighed the harm of paywalling the scientific literature; using meetings as an excuse to do so is completely unacceptable.

The reliance of so many societies on journal revenues has often made it hard to distinguish them from commercial publishers in their public stances on important issues in science publishing. You would think that, on first principles, scientific societies would support improving access to the scientific literature. Indeed, several societies recognized this early on and pioneered open access and other open publishing business models before PLOS came along. However, they are the exception. The most powerful societies have for decades not only been trading meetings for access to the literature, they have been using the profits from their journals to openly fight open access. Opposition from scientific societies was one of the major reasons for the scuttling of Harold Varmus’s 1999 eBioMed proposal, which would have created an NIH-managed pre-print server with a full system of post-publication peer review. And for years major scientific societies were THE loudest voices on Capitol Hill arguing AGAINST the NIH public access policy and other moves for better access to the scientific literature.


I also have long wondered whether it’s good for societies in a more general sense when they are reliant on publishing revenues for their funding. Societies are supposed to be organizations that represent their members, and yet the concept of being a member of a society has been weakened by the fact that few people actively choose to become a member of a society to support their activities and have a voice in their policies. Rather people become society members because it gets them access to journals and/or discounts to meetings. I love the Genetics Society of America, but they and many other societies do this weird thing where, if you go to one of their meetings, the cost of attending the meeting as a non-member is greater than the cost of attending as a member plus the cost of membership, so of course everyone “joins” the society. But this kind of membership is weak. And I wonder whether people wouldn’t feel more engaged in their societies, and if societies wouldn’t be more responsive to their members, if they became true membership organizations once again.

Finally, I want to return to the issue of finances. One of the threads in Andy Kern’s series of tweets about PLOS finances that triggered this series of posts was his surprise that PLOS had margins of ~20% and ~$25m in assets. In response I encouraged him to look at the finances of scientific societies. I think it’s good that Andy has triggered a conversation about PLOS’s finances – most people are unaware of how the publishing business works, and understanding it is important if we’re going to change it for the better. And similarly I think it would be great to learn more about the finances of the scientific societies that people support – most of which not only file required Form 990s, but also offer more detailed financial reports. Some of the stuff you find is disturbing (like the fact that the American Chemical Society, long one of the fiercest opponents of open access, is sitting on $1.5b in assets) but most of it is just enlightening. I’ve compiled a list of Form 990s from the member societies of FASEB, and will be adding more information in the coming days.


Posted in open access, PLoS | Comments closed

On pastrami and the business of PLOS

Last week my friend Andy Kern (a population geneticist at Rutgers) went on a bit of a bender on Twitter prompted by his discovery of PLOS’s IRS Form 990 – the annual required financial filing of non-profit corporations in the United States. You can read his string of tweets and my responses, but the gist of his critique is this: PLOS pays its executives too much, and has an obscene amount of money in the bank.

Let me start by saying that I understand where his disdain comes from. Back when we were starting PLOS we began digging into the finances of the scientific societies that were fighting open access, and I was shocked to see how much money they were sitting on and how much their CEOs get paid. If I weren’t involved with PLOS and had stumbled upon its Form 990, I probably would have raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.

Let me start with something on which I agree with Andy completely: science publishing is way too expensive. Andy says he originally started poking into PLOS’s finances because he wanted to know where the $2,250 he was asked to pay to publish in PLOS Genetics went, as this seemed like a lot of money to take a paper, have a volunteer academic serve as editor, find several additional volunteers to serve as peer reviewers, and then, if they accept the paper, turn it into a PDF and HTML version and publish it online. And he’s right. It is too much money.

That $2,250 is only about a third of the $6,000 a typical subscription journal takes in for every paper they publish, and that $6,000 buys access for only a tiny fraction of the world’s population, while the $2,250 buys it for everyone. But $2,250 is still too much, as is the $1,495 at PLOS ONE. I’ve always said that our goal should be to make it cost as little as possible to publish, and that our starting point should be $0 a paper.

The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different.  There’s a lot of manual labor involved in making sure the submission is complete, that it passes ethical and technical checks, in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text and figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need to have office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.

The difference in price between our journals reflects different costs. PLOS Biology and PLOS Medicine have professional editors handling each manuscript, so they’re intrinsically more expensive to operate. They also have relatively low acceptance rates, meaning a lot of staff time is spent on rejected papers, which generate no revenue. This is also the reason for the difference in price between our community journals like PLOS Genetics and PLOS ONE: the community journals reject more papers and thus we have to charge more per accepted paper. It might seem absurd to have people pay to reject other people’s papers, but if you think about it, that’s exactly what makes selective journals attractive – to publish your paper, they have to reject lots of others. I’ve argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they’re going to have this weird economics. And note this is not just true of open access journals – higher impact subscription journals bring in a lot more money per published paper than low impact subscription journals, for essentially the same reason.
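To make this arithmetic concrete, here is a minimal sketch in Python. The handling cost and acceptance rates below are illustrative assumptions, not actual PLOS figures: every submission costs money to handle, but only accepted papers bring in an APC, so the break-even charge per accepted paper is the handling cost divided by the acceptance rate.

```python
# Minimal sketch of the "weird economics" of selectivity described above.
# All numbers are illustrative assumptions, not actual PLOS figures.

def required_apc(cost_per_submission: float, acceptance_rate: float) -> float:
    """Break-even article-processing charge per *accepted* paper:
    accepted papers must also cover the cost of handling the rejects."""
    return cost_per_submission / acceptance_rate

for name, rate in [("high-acceptance megajournal", 0.70),
                   ("selective community journal", 0.25)]:
    apc = required_apc(500, rate)  # assume $500 of handling cost per submission
    print(f"{name} (acceptance rate {rate:.0%}): break-even APC = ${apc:,.0f}")
```

Under these assumptions the selective journal must charge nearly three times as much per accepted paper even though each submission costs the same to handle – halve the acceptance rate and you double the break-even charge.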

Could PLOS do all these things more efficiently, more effectively and for less money? Absolutely. We, like most other big publishers, are using legacy software and systems to handle submissions, manage peer review and convert manuscripts into published papers. These systems are, for the most part, outdated and difficult or expensive (usually both) to customize. We are in a challenging situation since, until very recently, we weren’t in a position to develop our own systems for doing all these things, and we couldn’t just switch to cheaper or free systems since they weren’t built to handle the volume of papers we deal with.

That said, it’s certainly possible to run journals much, much more cheaply. It costs the physics pre-print server arXiv something like $10 a paper to maintain its software, screening and website. There are times when I wish PLOS had just hacked together a bunch of Perl scripts, hung out a shingle and built in new features as we needed them. But part of what made PLOS appealing at the start is that it didn’t work that way – for better or worse it looked like a real journal, and this was one of the things that made people comfortable with our (at the time) weird economic model. I’m not sure this is true anymore; if I were starting PLOS today I would do things differently, and I think I could do things much less expensively. I would love it if people would set up inexpensive or even free open access biology journals – it’s certainly possible with open source software and fully volunteer labor – and if people would get comfortable with biomedical publishing basically being no different than just posting work on the Internet, with lightweight systems for peer review. That has always seemed to me to be the right way to do things. But PLOS can’t just pull the plug on all the things we do, so we’re trying to achieve the same goal by investing in developing software that will make it possible to do all of the things PLOS does faster, better and cheaper. We’re going to start rolling it out this year, and, while I don’t run PLOS and can’t speak for the whole board, I am confident that this will bring our costs down significantly and that we will ultimately be in a position to reduce prices.

Which brings us to issue number two. Andy and a lot of other people took umbrage at the fact that PLOS has margins of 20% and ~$25 million in assets. Again, I understand why people look at these numbers and find them shocking – anything involving millions of dollars always seems like a lot of money. But this is a misconception. Both of these numbers represent nothing more than what is required for PLOS to be a stable enterprise.

I’ll start by reminding people that PLOS is still a relatively young company, working in a rapidly changing industry. Like most startups, it took a long time for PLOS to break even. For the first nine years of our existence we lost money every year, and were able to build our business only because we got strong support from foundations that believed in what we were doing. Finally, in 2011, we reached the point where we were taking in slightly more money than we were spending, allowing us to wean ourselves off foundation support. But we still had essentially no money in the bank, and that’s not a good thing. Good operating practice for any business dictates that the company have money in the bank to cover a downturn in revenue. This is particularly the case for open access publishers, since we have no guaranteed revenue stream – in contrast to subscription publishers, who make long-term subscription deals. What’s more, this industry is changing rapidly, with the number of papers going to open access journals growing, but many new open access publishers entering the market. So it’s very hard for us to predict what our business is going to look like from year to year, while a lot of our expenses, like rent, software licenses and salaries, have to be paid before the revenue they enable comes in. The only way to survive in this market is to have a decent amount of money in the bank to buffer against the unpredictable. If anything, I am told by people who spend their lives thinking about these things, we’re cutting things a little close. So, while 20% margins may seem like a lot, given our overall financial situation and the fact that we’ve been profitable for only five years, I think it’s actually a reasonable compromise between keeping costs as low as we can and ensuring that PLOS remains financially stable, while also allowing us to make modest investments in technology that will make publishing better and cheaper in the long run.
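To make the logic of reserves concrete, here is a toy “runway” sketch in Python – how long could an organization keep operating if revenue stopped tomorrow? The annual expense figure is an assumption for illustration, not a number from PLOS’s filings.

```python
# Toy runway calculation: how many months could an organization keep
# operating on its reserves if revenue stopped tomorrow?

def runway_months(reserves: float, annual_expenses: float) -> float:
    """Months of operation the reserves cover with zero incoming revenue."""
    return reserves / (annual_expenses / 12)

reserves = 25_000_000          # ~$25m in assets, as discussed above
annual_expenses = 40_000_000   # assumed annual operating costs (illustrative)
print(f"Runway = {runway_months(reserves, annual_expenses):.1f} months")
```

Under these assumptions, ~$25m covers only about seven and a half months of operations with no incoming revenue – consistent with the point that, if anything, we’re cutting things a little close.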

Just to put these numbers in perspective for people who (like me) aren’t trained to think about these things, I had a look at the finances of a large set of scientific societies. I looked primarily at the members of FASEB, a federation of most of the major societies in molecular biology. Many of them have larger operating margins, and far larger cash reserves than PLOS. And I haven’t found one yet that doesn’t have a larger ratio of assets to expenses than PLOS does. And these are all organizations that have far more stable revenue streams than PLOS does. So I just don’t think it’s fair to suggest that either PLOS’s margins or reserves are untoward.

Indeed these numbers represent something important – that PLOS has become a successful business. I’ll once again remind people that one of the major knocks against open access when PLOS started was that we were a bunch of naive idealists (that’s the nicest way people put it) who didn’t understand what it took to run a successful business. Commercial publishers and societies alike argued repeatedly to scientists, funders and legislators that the only way to make money in science publishing was to use a subscription model. So it was absolutely critical to the success of the open access movement that PLOS not only succeed as a publisher, but that we also succeed as a business – to show the commercial and society publishers that their principal argument for why they refused to shift to open access was wrong. Having been the recipient of withering criticism – both personally and as an organization – about being too financially naive, it’s ironic and a bit mind-boggling to all of a sudden be criticized for having created too good of a business.

Now, despite that, I don’t want people to confuse my defense of PLOS’s business success with a defense of the business it’s engaged in. I believe the APC/service business model PLOS has helped to develop is far superior to the traditional subscription model, because it does not require paywalls. But I’ve never been comfortable with the APC business model in an absolute sense (and I recognize the irony of my saying that), because I wish science publishing weren’t a business at all. When we started PLOS the only way we had to make money was through APCs, but if I had my druthers we’d all just post papers online on a centralized server funded and run by a coalition of governments and funders, and scientists would use lightweight software to peer review published papers and organize the literature in useful ways. And no money would be exchanged in the process. I’m glad that PLOS is stable and has shown the world that the APC model can work, but I hope that we can soon move beyond it to a very different system.

Now I want to end on the issue that seemed to upset people the most – the salaries of PLOS’s executives. I am immensely proud of the executive team at PLOS – they are talented and dedicated. They make competitive salaries – and we’d have trouble hiring and retaining them if they didn’t. The board has been doing what we felt we had to do to build a successful company in the marketplace we live in – after all, we were founded to fix science publishing, not capitalism. But as an individual I can’t help but feel that’s a copout. The truth is the general criticism is right. A system where executives make so much more money than the staff they supervise isn’t just unfair, it’s ultimately corrosive. It’s something we all have to work to change, and I wish I’d done more to help make PLOS a model of this.

Finally, I want to acknowledge a tension evident in a lot of the discussion around this issue. Some of the criticism of PLOS – especially about margins and cash flow – has been simply unfair. But other criticism – about salaries and transparency – reflects something important. I think people understand that in these respects PLOS is just being a typical company. But we weren’t founded to be just a typical company – we were founded to be different and, yes, better, and people have higher expectations of us than they do of a typical company. I want it to be that way. But PLOS was also not founded to fail – that would have been terrible for the push for openness in science publishing. I am immensely proud of PLOS’s success as a publisher, agent for change, and a business – and of all the people inside and outside of the organization who helped achieve it. Throughout PLOS’s history there were times we had to choose between abstract ideals and the reality of making PLOS a successful business, and I think, overall, we’ve done a good, but far from perfect, job of balancing this tension. And moving forward I personally pledge to do a better job of figuring out how to be successful while fully living up to those ideals.


Posted in open access, PLoS | Comments closed

Berkeley’s Handling of Sexual Harassment is a Disgrace

What more is there to say?

Another case where a senior member of the Berkeley faculty, this time Berkeley Law Dean Sujit Choudhry, was found to have violated the campus’s sexual harassment policy, and was given a slap on the wrist by the administration. Astronomer Geoff Marcy’s punishment for years of harassment of students was a talking-to and a warning never to do it again, and now Choudhry has been put on some kind of secret probation for a year, sent for additional training, and docked 10% of his meager $470,000-a-year salary.

Despite a constant refrain from senior administrators that the university takes cases of sexual harassment seriously, the administration’s actions demonstrate that it does not. What is the point of having a sexual harassment policy if violations of it carry essentially no sanctions? Through its responses to Marcy and Choudhry, the university has made clear that it views sexual harassment by its senior male faculty not as what it is – an inexcusable abuse of power that undermines the university’s entire mission and has a severe negative effect on our students and staff – but rather as a mistake that some faculty make because they don’t know better.

If the university wants to show that it is serious about ending sexual harassment on campus, then it has to take cases of sexual harassment seriously. This means being unambiguous about what is and is not acceptable behavior, and imposing real consequences when people violate the rules. Faculty and administrators who engage in harassing behavior don’t do it by accident. They make a choice to engage in behavior they either know is wrong, or have no excuse for not knowing is wrong. And, at Berkeley at least, they do so knowing that if they get caught, the university will respond by saying “Bad boy. Don’t do that again. We’re watching you now.” Does anyone think this is an actual deterrent?

Through its handling of the Marcy, Choudhry and other cases, the Berkeley administration has shown utter contempt for the welfare of its students and staff. It has shown that it views its job not as creating an optimal environment for education by ensuring that faculty behavior is consistent with the university’s mission, but as protecting faculty, especially famous ones, from the consequences of their actions.

It is now clear that excuse-making and wrist-slapping in response to sexual harassment is so endemic in the Berkeley administration that it might as well be official policy. And just as there is no excuse for sexually harassing students and staff, there is no excuse for sanctioning this kind of behavior. It’s time for the administrators – all of them – who have repeatedly failed the campus community on this issue to go. It’s the only way forward.

[Image: Berkeley administration organizational chart]

Posted in Uncategorized | Comments closed

I’m Excited! A Post Pre-Print-Posting-Powwow Post

I just got back from attending a meeting organized by a new group called ASAPbio whose mission is to promote the use of pre-prints in biology.

I should start by saying that I am a big believer in this mission. I have been working for two decades to convince biomedical researchers that the Internet can be more than a place to download PDFs from paywalled journal websites, and universal posting of pre-prints – or “immediate publication” as I think it should be known – is a crucial step towards the more effective use of the Internet in science communication. We should have done this 20 years ago, when the modern Internet was born, but better late than never.

There were reasons to be skeptical about this meeting. First, change needs to happen on the ground, not in conference halls – I have been to too many publishing meetings that involved a lot of great talks about the problems with publishing and how to fix them, but which didn’t amount to much because these calls weren’t translated into action. Second, the elite scientists, funders and publishers who formed the bulk of the invite-only ASAPbio attendees have generally been the least responsive to calls to reform biomedical publishing (I understand why this was the target group – while young, Internet-savvy scientists tend to be much more supportive in principle, they are reluctant to act because of fears about how it will affect their careers, and are looking towards the establishment to take the first steps). Finally, my new partner-in-crime Leslie Vosshall and I spent a lot of time and energy trying to rally support for pre-prints online leading up to the meeting, and it wasn’t like people were knocking down the doors to sign on to the cause.

However, I wouldn’t have kept at this for almost half my life if I weren’t an eternal optimist, and I went into the meeting hoping, if not believing, that this time might be different. And I have to say I was pleasantly surprised. By the end of the meeting’s 24 hours it seemed like nearly everyone in attendance was sold on the idea that biomedical researchers should all post pre-prints of their work, and had already turned their attention to questions about how to do it. And there was surprisingly little resistance to the idea that post-publication review of papers initially posted as pre-prints could, at least in principle, fulfill the functions that pre-publication review currently carries out. That’s not to say there weren’t concerns and even some objections – there were, as I will discuss below. But these were all dealt with to varying degrees, and there seemed to be a general attitude that these concerns could be addressed, and did not constitute reasons not to proceed.

Honestly, I don’t think any new ideas emerged from the meeting. Everything that was discussed has been discussed and written about extensively before. But the purpose of the meeting was not to break new ground. Rather I think the organizers were trying to do three things (I’m projecting a bit here since I wasn’t one of the organizers):

  • To transfer knowledge from the small group of us who have been in the trenches of this movement to prominent members of the research community who are open to these ideas, but who hadn’t really ever given them much thought or attention
  • To make sure potential pitfalls and challenges of pre-prints were discussed. Although the meeting was dominated by members of the establishment, there were several young PIs and postdocs, representatives of different fields and a few international participants, who raised a number of important issues and generally kept the meeting from becoming a self-congratulatory elite-fest.
  • To inspire everyone to act in tangible ways to promote pre-print use.

And I think the meeting was highly effective in all three regards. For those of you who weren’t there and didn’t follow along online or on video, here’s a rough summary of what happened (there are archived videos here).

The opening night was dominated by a keynote talk from Paul Ginsparg, who in 1991 started an online pre-print server for physics that is now the locus for the initial publishing of essentially all new work in physics, mathematics and some areas of computer science. Paul is a personal hero of mine – for what he did with arXiv and for just being a no-bullshit advocate for sanity in science publishing – so I was bummed that he couldn’t make it in person because of weather-related travel issues. But his appearance as a giant head on a giant screen by video-conference was a fitting representation of his giant place in pre-print history. His talk was very effective in squashing the typical gloom-and-doom about the end of quality science that often comes up when pre-prints are discussed. A little bit of biology exceptionalism came up in the Q&A (“Yeah, it works for physics, but biology is different…”) but I thought Paul put most of those ideas to rest, especially the idea that all physics is done by giant groups working underground surrounded by large metal tubes.

The second day had two sessions, each structured around a series of a dozen or so five minute talks, followed by breakout sessions and then discussion. The morning focused on why people don’t use pre-prints – concerns about establishing priority, being able to publish in journals, getting jobs and funding – and how to address these concerns, while the afternoon sessions were about how to use pre-prints in evaluating papers and scientists and in finding and organizing published scientific information.

I can’t summarize everything that was discussed, but I have a lot of thoughts on the meeting and where to go from here, in no particular order:

I was surprised at how uncontroversial pre-prints were

Having watched the battles over Harold Varmus’ proposal to have biologists embrace pre-prints in 1999, and having taken infinite flak over the last 20 years for promoting a model of science communication based on immediate publication and post-publication peer review, I expected the idea that biologists should make their work initially available as pre-prints to be controversial. But it wasn’t. Essentially everyone at the meeting embraced the basic concept of pre-prints from the beginning, and we spent most of the meeting discussing details about how a pre-print system in biology can and should work, and how to build momentum for pre-print use.

I honestly don’t know how this happened. Pre-prints are close to invisible in biology (we didn’t really have a viable pre-print server until a year or so ago) and other recent efforts to promote pre-print usage in biology have been poorly received. There is lots of evidence from social media that most members of the community fall somewhere in the skeptical to hostile range when discussing pre-prints. Some of it is selection bias – people hostile to pre-prints weren’t likely to agree to come to a meeting on pre-prints that they (mostly) had to pay their own way to attend.

But I think it’s bigger than that. I think the publishing zeitgeist may have finally shifted. I’ve felt this way before, so I’m not sure I’m a good witness. But I think people are really ready for it this time. The signs were certainly there: after all Ron Vale, who organized ASAPbio, is no publishing radical – his publishing record is everything I’ve been trying to fight against for the last 20 years. But now he’s a convert, at least on pre-prints, and others are following suit. I don’t know whether it’s because all our work has finally paid off, or if it’s just time. The Internet has become so ingrained in our lives, maybe people finally realized how ridiculous it is that people all over the world could watch the ASAPbio meeting streaming live on their computers, but they have to wait months and months and months to be able to read about our latest science.

In the end I don’t really care why things seem to have changed. Even as I redouble my efforts to make sure this moment doesn’t elude us, I’m going to celebrate – this has been a long time coming.

Glamour journals remain a huge problem

One of the most shocking moments of this meeting came in a discussion right before the close about how to move forward to make pre-prints work. Marc Kirschner, a prominent cell biologist, made the suggestion that people at the meeting publish pre-prints of their papers at the time of submission, so long as it is OK with the journal they plan to submit them to. I don’t think Kirschner was trying to set down some kind of abstract principle. Rather, I think he was speaking to the reality that, no matter how effectively we sell pre-prints, in the short run most scientists are still going to strive to put their work in the highest-profile journals they can get them into; and we can make progress with pre-prints by pointing out that a lot of journals people choose to publish in for other reasons allow pre-print posting, and that authors should avail themselves of this opportunity.

This was the one time at the meeting where I lost my cool (a publishing meeting where I lose my cool only once is a first). It’s not that it surprises me that journals have this kind of hold on people. But I was still flabbergasted that after a meeting whose entire point was that it would be really good for science if people posted pre-prints, someone could suggest that we should give journals – not scientists – the power to decide whether pre-print posting is okay. And I couldn’t believe that people in the audience didn’t rise up in outrage at the most glaring and obvious example of how dysfunctional and toxic – one might even say dystopian – our relationship to journals is.

This is why I maintain my position – echoed by Vitek Tracz at the meeting, and endorsed by a handful of others – that science communication is never going to function optimally until we rid ourselves of the publish-or-reject paradigm employed by virtually all journals, and until we stop defining our success as scientists based on whether or not we can winkle our way into one of the uber-exclusive slots in glamorous journals. If anything is going to stop the move towards pre-prints, it’s going to be our proclivity for “glamor humping” (as blogger DrugMonkey has aptly dubbed this phenomenon). And if anything has the power to undermine the benefits of pre-prints, it’s allowing this mentality to dominate in the post-journal world.

People have weird views of priority

One of the few new things I learned at this meeting is how obsessed a large number of people are with technical definitions of priority. We spent 30 minutes talking about whether pre-prints should count in establishing priority for discoveries. First of all, I can’t believe there’s any question about this – of course they should! But more importantly, who thinks that questions of priority actually get decided by carefully scrutinizing who published what, when and on what date? It’s a lovely scholarly ideal to imagine that there’s some kind of court of science justice where hearings are held on every new idea or discovery, where a panel of judges examines everything that’s been published or said about the idea, and where they then rule on who really was the first to publish, or present, the idea/discovery in a sufficiently complete form to get credit for it.

But I got news for all the people counting submission dates on the head of a pin – outside of patent cases, where such courts really do exist, at least in theory, that ain’t the way it works. True priority is constantly losing out in the real world, where who you are, where you work, where you publish and how you sell yourself are often far more important than submission or publication dates in determining who gets credit (and its trappings) for scientific advances.

Cell Press has a horrible, but kind of sane, policy on pre-prints

One of the things that I think a lot of people coming to the meeting didn’t realize is that many journals are perfectly fine with people posting pre-prints of articles that are being considered by the journal. Some, like eLife, PLOS, PeerJ and Genetics, actively encourage it. Others, like EMBO, PNAS, Science and all Nature journals, unambiguously allow pre-print posting. On the flip side, journals from the American Chemical Society and some other publishers will not accept papers if they were posted as pre-prints. And then there’s Cell.

Cell‘s policy is, on the surface, hard to parse:

If you have questions about whether posting a manuscript or data that you plan to submit to this journal on an openly available preprint server or poster repository would affect consideration, we encourage you to contact an editor so that we may provide more specific guidance. In many cases, posting will be possible.

Fortunately, Emilie Marcus, CEO of Cell Press and Editor-in-Chief of Cell, was at the meeting to explain it to us. Her response – and I’m paraphrasing, but I think I’m capturing it correctly – was that they are happy to publish papers initially posted as pre-prints so long as the information in the paper has not already been noticed by people in the field. In other words, it’s OK to post pre-prints so long as nobody notices the pre-print. That is, they are rather unambiguously not endorsing the point of pre-prints, which is to get your work out to the community more quickly and effectively.

This is a pretty cynical policy. Cell clearly wants to get credit for being down with pre-prints without actually sanctioning them. But I actually found Marcus’s explanation of the policy to make sense, in a way. She views Cell as a publisher, and, as such, its role is to make information public. If that information has already been successfully conveyed by other means, then the role of publisher is no longer required.

This is obviously a quaint view – Cell is technically a publisher, but its more important role is as a selector of research that it deems to be interesting and important. So I think it’s more appropriate to look at this as a business decision. In refusing to help make pre-prints a reality, Elsevier and Cell Press are acting as if they believe pre-prints are a threat to their bottom line. And they’re right. Because if pre-prints become universal, who in their right mind is going to subscribe to Cell?

Maybe the other journals that endorse pre-prints are banking on the symbiosis between pre-prints and journals that exists in physics being extended to biomedicine. In questions after his talk Ginsparg said that ~80% of papers posted to arXiv are ultimately published in a peer-reviewed journal. And these journals are almost exclusively subscription-based. So why don’t libraries cancel these subscriptions? The optimistic answer (for those who like journals) is that libraries want to support the services journals provide and are willing to pay for them even if they’re not providing access to the literature. This may be true. But the money in physics publishing is a drop in the bucket compared to biomedicine, and I just can’t see libraries continuing to spend millions of dollars per year on subscriptions to journals that provide paywalled access to content that is freely available elsewhere. I could be wrong, of course, but it seems like Elsevier, who for all their flaws clearly know how to make money, in this case agrees with me.

I don’t know what effect the Cell policy will have in the short run. I’d like to think people who are supportive of pre-prints will think twice before sending a paper to Cell in the future because of this policy (of course I’d like it if they never considered Cell in the first place, but who am I kidding). But I suspect this is going to be a drag on the growth of pre-prints — how big a drag, I don’t know, but it’s something we’re probably going to have to work around.

There are a lot of challenges in building a fair and effective pre-print system

The position of young scientists on pre-prints is interesting. On the one hand, they have never scienced without the Internet, and are accustomed to being able to get access to information easily and quickly. On the other hand, they are afraid that the kinds of changes we are pushing will make their lives more difficult, and will make many of the pathologies in the current system, especially those biased against them, worse. Even those who have no reservations about pre-prints and/or post-publication review don’t feel like they’re in a position to lead the charge.

This is one of the biggest challenges we have moving forward. I have no doubt that science communication systems built around immediate publication and post-publication review can be better for both science and scientists. But that doesn’t mean they automatically will be better. Indeed, I share many of others’ concerns: about turning science into an even bigger popularity contest than it already is; about making it easier for powerful scientists to reinforce their positions and thwart their less powerful competitors; about increasing the potency of the myriad biases that poison training, hiring, promotion and funding; about making the process of receiving feedback on your work even less pleasant and collegial than it already is; and about increasing the incentives for scientists to prioritize glamour over doing rigorous, high-quality and durable work.

I will write more elsewhere about these issues and how I think we should try to address them. But it is of paramount importance that everybody who is trying to promote the move to pre-prints and beyond, and who is building systems to do this, be mindful of all these risks and do everything in their power to make sure the new systems work for everyone in science. We have to remember that for every bigshot who opposes pre-prints because they want to preserve their ability to publish in Cell, there are hundreds of scientists who just want to preserve their ability to do science. If this latter group doesn’t believe that pre-print posting is good for them, we will not only fail to convince them to join us on this path, but we run the serious risk of making science worse than it already is. And that would be a disaster.

Will attendees of the meeting practice what they preached?

Much of the focus of the meeting organizers was on getting people who attended the meeting to sign on to a series of documents expressing various types of commitment to promoting pre-prints in biomedicine (you can see these on the ASAPbio site). These documents are fairly strong, and I will sign them. But I’m sick of pledges. I’ve been down this path too many times before. People come to meetings, they sign a document saying they do all sorts of great stuff, and then they forget about it.

The only thing that matters to me is making sure that the people who attended the meeting, and who seemed really energized about making pre-prints work, start to put this enthusiasm into practice immediately. I look forward to quick, concrete action from funders. But the immediate goal of the scientists at the meeting, or who support its goals, must be to start posting pre-prints. This is especially true of prominent, senior scientists. There were four Nobelists at the meeting, many members of national academies, and other A-list scientists. It’s a small number of people in the grand scheme of things, but it would send a powerful signal if these scientists demonstrated their commitment to making pre-prints work by starting to post pre-prints in the next week (I suspect that most people at this level have a paper under review at all times). I am confident that their commitment is genuine – indeed some have already posted pre-prints from their labs since the meeting ended yesterday.

Obviously we don’t want pre-prints to be the domain of the scientific 1%. But we have to start somewhere, and if people who have nothing to lose won’t lead the way, then it will never happen. But it seems like they actually are leading the way. There’s tons more hard work to do, but let’s not miss this opportunity. The rainbow unicorn is watching.

[Image: rainbow unicorn]


Posted in open access, science | Comments closed

The Villain of CRISPR

There is something mesmerizing about an evil genius at the height of their craft, and Eric Lander is an evil genius at the height of his craft.

Lander’s recent essay in Cell entitled “The Heroes of CRISPR” is his masterwork, at once so evil and yet so brilliant that I find it hard not to stand in awe even as I picture him cackling loudly in his Kendall Square lair, giant laser weapon behind him poised to destroy Berkeley if we don’t hand over our patents.

This paper is the latest entry in Lander’s decades-long assault on the truth. During his rise from math prodigy to economist to de facto head of the public human genome project to member of Obama’s council of science advisors to director of the powerful Broad Institute, he has shown an unfortunate tendency to treat the truth as an obstacle that must be overcome on his way to global scientific domination. And when one of the world’s most influential scientists treats science’s most elemental and valuable commodity with such disdain, the damage is incalculable.

CRISPR, for those of you who do not know, is an anti-viral immune system found in archaea and bacteria that, until a few years ago, was all but unknown outside the small group of scientists, mostly microbiologists, who had been studying it since its discovery a quarter century ago. Interest in CRISPR spiked in 2012, when a paper from colleagues of mine at Berkeley and their collaborators in Europe described a simple way to repurpose components of the CRISPR system of the bacterium Streptococcus pyogenes to cut DNA in an easily programmable manner.

Such capability had been long sought by biologists, as targeted DNA cleavage is the first step in gene editing – the ability to replace one piece of DNA in an organism’s genome with DNA engineered in the lab. This 2012 paper from Martin Jinek and colleagues was quickly joined by a raft of others applying the method in vivo, modifying and improving it in myriad ways, and utilizing its components for other purposes. Among the earliest was a paper from Le Cong and Fei Ann Ran working at Lander’s Broad Institute which described CRISPR-based gene editing in human and mouse cells.

Now, less than four years after breaking onto the gene-editing scene, virtually all molecular biology labs are either using, or planning to use, CRISPR in their research. And amidst this explosion of interest, fights have erupted over who deserves the accolades that usually follow such scientific advances, and who owns the patents on the use of CRISPR in gene editing.

The most high-profile of these battles pits Berkeley against the Broad Institute, although researchers from many other institutions made important contributions. Jinek’s work was carried out in the lab of Berkeley’s Jennifer Doudna, in close collaboration with Emmanuelle Charpentier, now at the Max Planck Institute for Infection Biology in Berlin; Cong and Ran were working under the auspices of the Broad’s Feng Zhang. Interestingly, the prizes for CRISPR have largely gone to Doudna and Charpentier, while, for now at least, the important patents are held by Zhang and the Broad. But this could all soon change.

There has been extensive speculation that CRISPR gene editing will earn Doudna and Charpentier a Nobel Prize, but there has been considerable lobbying for Zhang to join them (Nobel Prizes are, unfortunately, doled out to a maximum of three people). On the flip side, the Broad’s claim to the patent is under dispute, and is the subject of a legal battle that could turn into one of the biggest and most important in biotechnology history.

I am, of course, not a disinterested party. I know Jennifer well and am thrilled that her work is getting such positive attention. I also stand to benefit professionally if the patents are awarded to Berkeley, as my department will get a portion of what are likely to be significant proceeds (I have no personal stake in any CRISPR-related patents or companies).

But if I had my way, there would be no winner in either of these fights. The way prizes like the Nobel give disproportionate credit to a handful of individuals is an injustice to the way science really works. When accolades are given exclusively to only a few of the people who participated in an important discovery, it by necessity denies credit to countless others who also deserve it. We should celebrate the long series of discoveries and inventions that brought CRISPR to the forefront of science, and all the people who participated in them, rather than trying to decide which three were the most important.

And, as I have long argued, I believe that neither Berkeley nor MIT should have patents on CRISPR, since it is a disservice to science and the public for academic scientists to ever claim intellectual property in their work.

Nonetheless, these fights are underway. Which brings us back to Dr. Lander. Although he had nothing to do with Zhang’s CRISPR work, as Director of the Broad Institute he has taken a prominent role in promoting Zhang’s case for both prizes and patents. But rather than simply go head-to-head with Doudna and Charpentier, Lander has crafted a strategy that is as clever as it is dishonest (see Nathaniel Comfort’s fantastic “A Whig History of CRISPR” for more on this). Let’s look at the way Lander’s argument is crafted.

To start, Lander cleaves history into two parts – Before Zhang and After Zhang – defining the crucial event in the history of CRISPR to be the demonstration that CRISPR could be used for gene editing in human cells. This dividing line is made explicit in Figure 2 of his “Heroes” piece, which maps the history of CRISPR with circles representing key discoveries. The map is centered on a single blue dot in Cambridge, marking Zhang as the sole member of the group that carried out the “final step of biological engineering to enable genome editing”, while everyone who preceded him gets labeled as a green natural historian or red biochemist.

[Figure 2 from “Heroes of CRISPR”: Lander’s map of key CRISPR discoveries, centered on a single blue circle in Cambridge]

(Note also how he distorted the map of the world so that the Broad lies almost perfectly in the center. What happened to Iceland and Greenland? How did Europe get so far south and so close to North America? And what happened to the rest of the world? Where’s Asia, for example? Shouldn’t there be a big blue circle in Seoul?)

While some lawyer might find this argument appealing, it is a scientifically absurd point of view. For the past decade, researchers, including Zhang, have been using proteins – zinc finger nucleases and TALENs – engineered to cut DNA in specific places to carry out genome editing in a variety of different systems. If there was a key step in bringing CRISPR to the gene editing party, it was the demonstration that its components could be used as a programmable nuclease, something that arose from a decade’s worth of investigation into how CRISPR systems work at the molecular level. Once you have that, the application to human cells, while not trivial, is obvious and straightforward.

The best analogy for me is the polymerase chain reaction (PCR), another vital technique in molecular biology that emerged from the convergence of several disparate lines of work over decades, and which gained prominence with the work of Kary Mullis, who demonstrated an efficient method for amplifying DNA sequences in vitro. Arguing that Zhang deserves singular credit for CRISPR gene editing is akin to arguing that whoever was the first to amplify human DNA using PCR should get full credit for its invention. (And I’ll note that the claim that Zhang was unambiguously the first to do this is questionable – see this and this for example).

I want to be clear that in arguing against giving exclusive credit to Zhang, I am not arguing for singular credit to go to any other single group, as I think this does not do justice to the way science works. But if you are going to engage in this kind of silliness, you should at least endeavor to do it honestly. The only reason to argue that CRISPR credit should be awarded to the person who first deployed it in human cells is if you decided in advance that full credit should go to Zhang and searched post facto for a reason to make this claim.

Even Lander seems to have sensed that he had to do more than just make a tenuous case for Zhang – he also had to tear down the case for Doudna and Charpentier. And this wasn’t going to be easy, since their paper preceded Zhang’s, and they were already receiving widespread credit in the biomedical community for being the inventors of CRISPR gene editing. Here is where his evil genius kicks in. Instead of taking Doudna and Charpentier on directly, he did something much more clever: he wrote a piece celebrating the people whose work had preceded and paralleled theirs.

This was an evil genius move for several reasons:

First, the people whose work Lander writes about really are deserving of credit for pioneering the study of CRISPR, and they really have been unfairly written out of the history in most stories in the popular and even scientific press. This establishes Lander as the good guy, standing up to defend the forgotten scientists toiling in off-the-beaten-path places. And even though, in my experience, Doudna and Charpentier go out of their way to highlight this early work in their talks, Lander’s gambit makes them look complicit in the exclusion.

Second, by going into depth about the contributions of early CRISPR pioneers, Lander is able to almost literally write Doudna and Charpentier (and, for that matter, the groups of genome-editing pioneer George Church and Korean scientist Jin-Soo Kim, whose CRISPR work has also been largely ignored) out of this history. They are mentioned, of course, but everything about the way they are mentioned seems designed to minimize their contributions. They are given abbreviated biographies compared to the other scientists he discusses. And instead of highlighting the important advances in the Jinek paper, which were instrumental to Zhang’s work, Lander focuses on the work of Giedrius Gasiunas, working in the lab of Virginijus Siksnys in Lithuania. Lander relates in detail how they had findings similar to Jinek’s and submitted their paper first, but struggled to get it published, suggesting later in the essay that it was Doudna and Charpentier’s savvy about the journal system, and not their science, that earned them credit for CRISPR.

The example of Gasiunas and Siksnys is a good one for showing how unfair the system we have for doling out credit, accolades, and intellectual property in science can be. While Gasiunas did not combine the two RNA components of the CRISPR-Cas9 system into a single “guide RNA” as was done by Jinek – a trick used in most CRISPR applications – they demonstrated the ability to reprogram CRISPR-Cas9, and were clearly on the path to gene editing. And neither Jinek’s nor Gasiunas’s work would have been possible without the whole body of CRISPR work that preceded them.

But the point of Lander’s essay is not to elevate Siksnys; it is, as is made clear by the single blue circle on the map, to enshrine Zhang. His history of CRISPR, while entertaining and informative, is a cynical ploy, meant to establish Lander’s bona fides as a defender of the little person, so that his duplicity in throwing Siksnys under the bus when he no longer needed him wouldn’t be so transparent.

What is particularly galling about this whole thing is that Lander has a long history of attempting to rewrite scientific history so that credit goes not to the forgotten little people, but to him and those in his inner circle. The most prominent example of this is the pitched battle for credit for sequencing the human genome, in which Lander time and time again tried to rewrite history to paint the public genome project, and his role in it, in the most favorable light.

Indeed, far from being regarded as a defender of lesser-known scientists, Lander is widely regarded as someone who plays fast and loose with scientific history in the name of promoting himself and those around him. And “Heroes of CRISPR” is the apotheosis of this endeavor. The piece is an elaborate lie that organizes and twists history with no other purpose than to achieve Lander’s goals – to win Zhang a Nobel Prize and the Broad an insanely lucrative patent. It is, in its crucial moments, so disconnected from reality that it is hard to fathom how someone so brilliant could have written it.

It’s all too easy to brush this kind of thing aside. After all, Lander is hardly the first scientist to twist the truth in the name of glory and riches. But what makes this such a tragedy for me is that, in so many ways, Lander represents the best of science. He is a mathematician turned biologist who has turned his attention to some of the most pressing problems in modern biomedicine. He has published smart and important things. As a mathematician turned biologist myself, it’s hard for me not to be more than a little proud that a math whiz has become the most powerful figure in modern biology. And while I don’t like his scientific style of throwing millions of dollars at every problem, he has built an impressive empire and empowered the careers of many smart and talented people whose work I greatly value and respect.

But science has a simple prime directive: to tell the truth. Nobody, no matter how powerful and brilliant they are, is above it. And when the most powerful scientist on Earth treats the truth with such disdain, they become the greatest scientific villain of them all.


Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Leslie Vosshall and I have written the following white paper as a prelude to the upcoming ASAP Bio meeting in February aimed at promoting pre-print use in biomedicine. We would greatly value any comments, questions or concerns you have about the piece or what we are proposing.


[PDF Version]

Coupling Pre-Prints and Post-Publication Peer Review for Fast, Cheap, Fair, and Effective Science Publishing

Michael Eisen1,2 and Leslie B. Vosshall 3,4

1 Department of Molecular and Cell Biology and 2 Howard Hughes Medical Institute, University of California, Berkeley, CA. 3 Laboratory of Neurogenetics and Behavior and 4 Howard Hughes Medical Institute, The Rockefeller University, New York, NY.

mbeisen@berkeley.edu; leslie@rockefeller.edu

Scientific papers are the primary tangible and lasting output of a scientist. They are how we communicate our discoveries, and how we are evaluated for hiring, promotion, and prizes. The current system by which scientific papers are published predates the internet by several hundred years, and has changed little over centuries.

We believe that this system no longer serves the needs of scientists.

  1. It is slow. Manuscripts spend an average of nine months in peer review prior to publication, and reviewers increasingly demand more data and more experiments to endorse a paper for publication. These delays massively slow the dissemination of scientific knowledge.
  2. It is expensive. We spend $10 billion a year on science and medical journal publishing, over $6,000 per article, and increasingly these costs are coming directly from research grants.
  3. It is arbitrary. The current system of peer review is flawed. Excellent papers are rejected, and flawed papers are accepted. Despite this, journal name continues to be used as a proxy for the quality of the paper.
  4. It is inaccessible. Even with the significant efforts of the open-access publishing movement, the vast majority of scientific literature is not accessible without a subscription.

In view of these problems, we strongly support the goal of ASAP Bio to accelerate the online availability of biomedical research manuscripts. If all biomedical researchers posted copies of their papers when they were ready to share them, these four major pathologies in science publishing would be cured.

The goal of ASAP Bio to get funders and other stakeholders to endorse the adoption of pre-prints is laudable. But without fundamental reform in the way that peer review is carried out, the push for pre-prints will not succeed. An important additional goal for the meeting must therefore be for funders to endorse alternative mechanisms for carrying out peer review. Such mechanisms would operate outside of the traditional journal-based system and focus on assessing the quality, audience, and impact of work published exclusively as “pre-prints”. We anticipate that, if structured properly, a new system of pre-print publishing coupled with post-publication peer review will replace traditional scientific publishing, much as online user-driven reviews (Amazon, Yelp, Trip Advisor, etc.) have replaced publisher-driven assessments of quality (Consumer Reports, Zagat, Fodor’s, etc.).

In this white paper we explain why the adoption of pre-prints and peer review reform are inseparable, outline possible alternative peer review systems, and suggest concrete steps that research funders can take to leverage changes in peer review to successfully promote the adoption of pre-prints.

Pre-prints and journal-based peer review cannot coexist

The essay by Ron Vale that led to the ASAP Bio meeting is premised on the idea that we should use pre-prints to augment the existing, journal-based system for peer review. In Vale’s model, biomedical researchers would post papers on pre-print servers and then submit them to traditional journals, which would review them as they do today, and ultimately publish those works they deem suitable for their journal.

There are many reasons why such a system would be undesirable – it would leave intact a journal system that is inefficient, ineffective, inaccessible, and expensive. But more proximally, there is simply no way for such a symbiosis between pre-prints and the existing journal system to work.

Pre-print servers for biomedicine, such as BioRxiv, run by the well-respected Cold Spring Harbor Press, now offer biomedical researchers the option to publish their papers immediately, at minimal cost. Yet biologists have been reluctant to make use of this opportunity because they have no incentive to do so, and in many cases have incentives not to. If we as a biomedical community want to promote the universal adoption of pre-prints, we have to do more than pay lip service to the potential of pre-prints; we have to change the incentives that drive publishing decisions. And this means changing peer review.

Why are pre-prints and peer review linked? Scientists publish for two reasons: to communicate their work to their colleagues, and to get credit for it in hiring, promotion and funding. If publishing behavior were primarily driven by a desire to communicate, biomedical scientists would leap at the opportunity to post pre-prints, which make their work available to the widest possible audience at the earliest possible time at virtually no cost. That they do not underscores the reality that, for most biomedical researchers, decisions about how they publish are driven almost entirely by the impact of these decisions on their careers.

Pre-prints will not be embraced by biomedical scientists until we stop treating them as “pre” anything, which suggests that a better “real” version is yet to come. Instead, pre-prints need to be accepted as formally published works. This can only happen if we first create and embrace systems to evaluate the quality and impact of, and appropriate audience for, these already published works.

But even if we are wrong, and pre-prints become the norm without reform of peer review, we would still need to create an alternative to journal-based peer review. If all, or even most, papers are available for free online, it is all but certain that libraries would begin to cut subscriptions, and traditional journal publishing, which still relies almost exclusively on subscription revenue, would no longer be economically viable.

Thus a belief in the importance of pre-print use in biomedicine requires the creation of an alternative system for assessing papers. We therefore suggest that the most important act for funders, universities, and other stakeholders is not just to endorse the use of pre-prints in biomedicine, but to endorse the development and use of a viable alternative to journal titles in the assessment of the quality, impact, and audience of works published exclusively as “pre-prints”.

Peer review for the Internet Age

The current journal-based peer review system attempts to assure the quality of published works; help readers find articles of import and interest to them; and assign value to individual works and the researchers who created them. Post-publication peer review of works initially published as pre-prints can not only replicate these services, but provide them faster, cheaper, and more effectively.

The primary justification for carrying out peer review prior to publication is that it prevents flawed works from seeing the light of day. Inviting a panel of two or three experts to assess the methods, reasoning, and presentation of the science in a paper undoubtedly leads to many flaws being identified and corrected.

But any practicing scientist can easily point to deeply flawed papers that have made it through peer review in their field, even in supposedly high-profile journals. Yet even when flaws are identified, it rarely matters. In a world where journal title is the accepted currency of quality, a deeply flawed Science or Nature paper is still a Science or Nature paper.

Prepublication review was developed and optimized for printed journals, where space had to be rationed to balance the expensive acts of printing and shipping a journal. But today it is absurd to rely on the opinions of two or three reviewers – who may or may not be the best qualified to assess a paper, who often did not want to read the paper in the first place, who are acting under intense time pressure, and who are casting judgment at a fixed point in time – to be the sole arbiters of the validity and value of a work. Post-publication peer review of pre-prints is scientific peer review optimized for the Internet Age.

Beginning to experiment with systems for post-publication review now will hasten its development and acceptance, and is the quickest path to the universal posting of pre-prints. In the spirit of experimentation, we propose a possible system below.

A system for post-publication peer review

First, authors would publish un-reviewed papers on pre-print servers that screen them to remove spam and papers that fail to meet technical and ethical specifications, before making them freely available online. At this point peer review begins, proceeding along two parallel tracks.

Track 1: Organized review, in which groups representing fields or areas of interest – such as scientific societies or self-assembling sets of researchers – arrange for the review of papers they believe to be relevant to researchers in their field. They could either directly solicit reviewers or invite members of their group to submit reviews, and would publish the results of these reviews in a standardized format. These groups would be evaluated by a coalition of funding agencies, libraries, universities, and other parties according to a set of commonly agreed upon standards, akin to the screening that is done for traditional journals at PubMed.

Track 2: Individually submitted reviews from anyone who has read the paper. These reviews would use the same format as organized reviews, and would, like organized reviews, become part of the permanent record of the paper. Ideally, we want everyone who reads a paper carefully to offer their view of its validity, audience, and impact. To ensure that the system is not corrupted, individually submitted reviews would be screened for appropriateness, conflicts of interest, and other problems, and there would be mechanisms to adjudicate complaints about submitted reviews.

Authors would have the ability at any time to respond to reviews and to submit revised versions of their manuscript.
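
To make the mechanics of this proposal concrete, here is a minimal sketch, in Python, of the data model it implies. It is purely illustrative: the class names, fields, and rules below (PrePrint, Review, Track, and so on) are hypothetical renderings of the two-track system described above, not an existing service or API.

```python
# Hypothetical data model for the two-track post-publication review
# system described above. All names and rules are illustrative only.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class Track(Enum):
    ORGANIZED = auto()   # Track 1: review arranged by a sanctioned group
    INDIVIDUAL = auto()  # Track 2: review submitted by any reader


@dataclass
class Review:
    track: Track
    body: str
    reviewer: Optional[str] = None    # None means the reviewer is anonymous
    vouched_by: Optional[str] = None  # Track 1 group vouching for an anonymous reviewer
    screened: bool = False            # passed appropriateness / conflict-of-interest checks


@dataclass
class PrePrint:
    title: str
    authors: list[str]
    versions: list[str] = field(default_factory=list)    # every revision is retained
    reviews: list[Review] = field(default_factory=list)  # the permanent review record
    responses: list[str] = field(default_factory=list)   # author replies, allowed at any time

    def add_review(self, review: Review) -> None:
        """Admit a review to the permanent record, enforcing the rules above."""
        if not review.screened:
            raise ValueError("all reviews are screened before joining the record")
        if review.reviewer is None and review.vouched_by is None:
            raise ValueError("anonymous reviews need a Track 1 group to vouch for them")
        self.reviews.append(review)

    def revise(self, new_text: str) -> None:
        """Authors may post revised versions at any time; old versions persist."""
        self.versions.append(new_text)
```

The key property this sketch tries to capture is that reviews accumulate as a screened, permanent record attached to the paper itself, rather than acting as a one-time gate controlled by a journal.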

Such a system has many immediate advantages over our current system of pre-publication peer review. The amount of scrutiny a paper receives would scale with the level of interest in the paper. If a paper is read by thousands of people, many more than the three reviewers chosen by a journal are in a position to weigh in on its validity, audience, and importance. And instead of evaluating papers only at a single fixed point in time, peer review would continue for the useful lifespan of the paper.

What about concerns about anonymity for reviewers? We believe that peer review works best when it is completely open and reviewers are identified. This both provides a disincentive to various forms of abuse and allows readers to put a review in perspective. We also recognize that there are many scientists who would not feel comfortable expressing their honest opinions without the protection of anonymity. We therefore propose that reviews be allowed to remain anonymous provided that one of the groups defined in Track 1 above vouches for the reviewer’s lack of conflicts and appropriate expertise. This strikes the right balance between providing anonymity to reviewers and protecting authors from anonymous attacks.

What about the concern that flawed papers will be published, or will be subject to misuse and misinterpretation while they are being reviewed? We do not consider this to be a serious problem. The people in the best position to make use of immediate access to published papers – practicing scientists in the field of the paper – are in the best position to judge the validity of the work themselves and to share their impressions with others. Readers who want an external assessment of the quality of a work can wait until it comes in, and are thus no worse off than they are in the current system. If implemented properly, such a system would give us the best of both worlds – rapid access for those who want and need it, and quality control over time for a wider audience.

Assessing quality and audience without journal names

The primary reason the traditional journal-based peer review system persists despite its anachronistic nature is that the title of the journal in which a scientific paper appears reflects the reviewers’ assessment of the appropriate audience for the paper and their valuation of its contributions to science. There is obviously value in having people who read papers judge their potential audience and impact, and there are many circumstances where having an external assessment of a scientist’s work can be of use. But there is no reason we have to use journal titles to convey this information.

It would be relatively simple to give reviewers of published pre-prints a set of tools to specify the most appropriate audience for a paper, to anticipate that audience’s expected level of interest in the work, and to gauge the impact of the work. We can also take advantage of automated methods to suggest papers to readers, and let those readers rate the quality of papers on a set of useful metrics, as sketched below. Systems that use the Internet to harness collective expertise have fundamentally changed nearly every other area of human society – it’s time for them to do the same for science.
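
As a purely hypothetical illustration, the sketch below shows what such reviewer tools might capture, and how many individual assessments could be aggregated into paper-level signals that replace the blunt signal of a journal title. The field names and 1-to-5 scales are invented for the example.

```python
# Hypothetical structured assessment and a trivial aggregation over many
# of them. Field names and 1-5 scales are invented for illustration.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Assessment:
    reviewer: str
    audience: list[str]  # e.g. ["microbiology", "genome engineering"]
    interest: int        # expected interest for that audience, 1 (low) to 5 (high)
    impact: int          # anticipated impact of the work, 1 (low) to 5 (high)
    validity: int        # confidence in methods and conclusions, 1 (low) to 5 (high)


def summarize(assessments: list[Assessment]) -> dict[str, float]:
    """Aggregate individual assessments into paper-level metrics, the
    rough analogue of what a journal title signals today."""
    return {
        "interest": mean(a.interest for a in assessments),
        "impact": mean(a.impact for a in assessments),
        "validity": mean(a.validity for a in assessments),
        "reviews": float(len(assessments)),
    }
```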

Actions

A commitment to promoting pre-prints in biomedicine requires a commitment to promoting a new system for reviewing works published initially as un-reviewed pre-prints. Such systems are practical and a dramatic improvement over the current system. We call on funders and other stakeholders to endorse the universal posting of pre-prints and post-publication peer review as inseparable steps that would dramatically improve the way scientists communicate their ideas and discoveries. We recognize that such a system requires standards, and propose that a major outcome of the ASAP Bio meeting be the creation of an “International Peer Review Standards Organization” to work with funders and other stakeholders to establish these criteria, work through many of the important issues, and then serve as a sanctioning body for groups of reviewers who wish to participate in this system. We are prepared to take the lead in assembling an international group of leading scientists to launch such an organization.


The current system of scholarly publishing is the real infringement of academic freedom

Rick Anderson has a piece on “Open Access and Academic Freedom” at Inside Higher Ed arguing that the open access policies being put into place by many research funders and some universities – policies that require authors to make their work available under open licenses (most commonly Creative Commons’ CC-BY) – are a violation of academic freedom and should be viewed with skepticism.

Here is the basic crux of his argument:

The meaningful right that the law provides the copyright holder is the exclusive (though limited) right to say how, whether, and by whom these things may be done with his work by others.

So the question is not whether I can, for example, republish or sell copies of my work under CC BY — of course I can. The question is whether I have any say in whether someone else republishes or sells copies of my work — and under CC BY, I don’t.

This is where it becomes clear that requiring authors to adopt CC BY has a bearing on academic freedom, if we assume that academic freedom includes the right to have some say as to how, where, whether, and by whom one’s work is published. This right is precisely what is lost under CC BY. To respond to the question “should authors be compelled to choose CC BY?” with the answer “authors have nothing to fear from CC BY” or “authors benefit from CC BY” is to avoid answering it. The question is not about whether CC BY does good things; the question is whether authors ought to have the right to choose something other than CC BY.

Although for reasons I outline below I disagree with Anderson’s conclusion that concerns about academic freedom should trump the push for greater access, the point bears some consideration, especially because he is far from the only one raising it.

But what actually is this “academic freedom” we are talking about? I will admit that, even though I am a long-time academic with a general sense of what academic freedom is, when I first started hearing this complaint about open access mandates I didn’t really understand what the term actually means. And part of the problem is that there isn’t really a single, well-defined thing called “academic freedom”.

The Wikipedia definition pretty much captures the concept:

Academic freedom is the belief that the freedom of inquiry by faculty members is essential to the mission of the academy as well as the principles of academia, and that scholars should have freedom to teach or communicate ideas or facts (including those that are inconvenient to external political groups or to authorities) without being targeted for repression, job loss, or imprisonment.

But this broad concept lacks a unified concrete reality. Anderson cites as his evidence that CC-BY mandates violate academic freedom the following passage from the widely cited “1940 Statement of Principles on Academic Freedom and Tenure” from the American Association of University Professors:

Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties; but research for pecuniary return should be based upon an understanding with the authorities of the institution.

Note that while this document provides a definition of academic freedom that has been fairly widely accepted, it is not in any way legally binding, nor, more importantly, does it reflect a universal consensus about what academic freedom is. Nonetheless, it’s hard not to get behind the general principle that academics should have the “freedom to publish”. However, it is by no means clear what this actually entails.

Virtually everything I have ever read about academic freedom starts with the importance of giving academics the freedom to express the results of their scholarship irrespective of their specific conclusions. We grant them tenure in large part to protect this freedom, and I know of no academic who would sanction their employer telling them that they cannot publish something they wish to publish.

But a requirement that academics employ a CC-BY license does not restrict the content of their publications; rather, it limits the venues available for publication (and it’s important for open access supporters to acknowledge this – there exist journals today that would not accept papers that were available online elsewhere, with or without a CC-BY license). It is not clear, however, that this constitutes a limit on academic freedom.

Clearly some restrictions on venues would have the effect of restricting authors’ ability to communicate their work. If a university told its academics that they could publish only in venues that appeared exclusively in print, it would unambiguously limit their ability to communicate, and we would not sanction it. But what if it required that all works be available online to facilitate assessment and access for students? This would also impose some limits on where they could publish, but, in the current online-heavy universe, it would not be a meaningful limit on the authors’ ability to communicate.

So it seems to me that we have to make a choice. Approach 1 would be to evaluate such conditions on a case-by-case basis to determine if the limitations placed on authors actually limit academic freedom. Approach 2 would be to enshrine the principle that any conditions placed on how or where academics publish by universities and funders are unacceptable.

If we take the case-by-case approach, we have to ask if the specific requirement that authors make their work available under a CC-BY license constitutes an infringement of their freedom to communicate their work. It certainly imposes some limits on where they can publish, but, given the wide diversity of journals that don’t prohibit pre-prints, it’s hard to describe this as a significant infringement.

The second issue raised by Anderson is that by requiring CC-BY – and thereby granting others the right to reuse and republish a work without author permission – you deprive authors of the right to control how their work is used. I am a bit sympathetic to this point of view. But in reality authors have already lost an element of this control: the fair use component of copyright law grants others the right to use published works in certain ways without author permission (to write reviews of the work, for example). It is hard to see CC-BY as a major additional intrusion.

Which brings me to one of my main points. Anderson argues that the principle of “freedom to publish” should be sacrosanct. But it clearly is not. While scholars may have the theoretical ability to publish their work wherever they want, in reality the hiring, promotion, tenure, and funding policies of universities and funding agencies impose a major constraint on how and where academics publish. Scientists are expected to publish in certain journals; other academics are expected to publish books with certain publishers. Large parts of the academic enterprise are currently premised on restricting the freedom of academics to publish where and how they want. In comparison to these restrictions – which manifest themselves on a daily basis – the added imposition of requiring a CC-BY license seems insignificant.

Furthermore, one has to view the push for CC-BY licenses in a broader context in which they are part of an effort to alter the ecology of scholarly publishing so that authors are not judged by their publication in a narrow group of journals or with a narrow group of university presses. Thus I would argue that, viewed practically, the shift to CC-BY would actually promote academic freedom and the freedom of authors to publish how and where they want.

One could reasonably respond that it’s not my place to decide on behalf of other scholars what does and does not constitute an imposition on their academic freedom. Which brings us to Approach 2: enshrining the principle that any conditions placed on how or where academics publish by universities and funders are unacceptable. If you hold this position then you will clearly view a mandatory CC-BY policy as an unacceptable infringement of academic freedom. But you would then also have to see the hiring, promotion, tenure, and funding policies that push authors to certain venues as an even bigger betrayal of academic freedom. I am happy to completely embrace this point of view.

In the end, I didn’t find Anderson’s article as repugnant as many of my open access friends did. Academic freedom is important, and it should be defended. And the points he raised are interesting and important to consider. But I take exception to Anderson’s focus on the supposed negative effects of the use of a CC-BY license on academic freedom when, if we are serious about defending academic freedom, we should instead be looking at how the entire system of scholarly publishing limits it. Indeed, I have now been inspired by Anderson’s article to make academic freedom a major lynchpin of my future arguments in favor of fundamental reform of scholarly publishing.

