Montana District Court Judge Brian Morris on February 1 vacated the Trump administration’s Science Transparency Rule. The judge’s decision was a victory for environmental plaintiffs and the Biden administration.
Vacatur was inevitable once the judge decided, on January 27, that the Transparency Rule exceeds its purported statutory basis—the Federal Housekeeping Statute, which authorizes agency heads to regulate internal organizational practices and procedures. The Transparency Rule is no “mere housekeeping measure” or “procedural rule,” the judge argued. Rather, it is a “substantive” rule affecting both policy “outcomes” and plaintiffs’ “concrete interests.”
The Trump Environmental Protection Agency (EPA) considered promulgating the Transparency Rule under the environmental statutes it administers (83 FR 18769). It chose instead to promulgate the rule solely under the Housekeeping Statute, which provides no authority for substantive rules. Thus, the Transparency Rule lacks a legal basis.
This sudden turn of events is a shame. The Trump EPA held two comment periods, received more than 1 million comments, and developed the rule over a period of 32 months (April 30, 2018 – January 6, 2021). Contrary to its critics, with whom Judge Morris seems to sympathize (p. 9), the Transparency Rule responds to real and serious problems. If implemented, it could have made the EPA’s use of science in regulation more transparent, rigorous, and accountable.
A common criticism lodged against the Trump EPA’s April 30, 2018 proposed Transparency Rule is that it never identified the problem it was meant to solve. For example, the Environmental Protection Network (EPN), a group of prominent environmental advocates, complained that the EPA had not produced a list of previous “bad decisions” attributable to the agency’s use of non-transparent studies.
Demanding that the EPA publish a list of bad decisions is a bit much. It would be impolitic, destroying any prospect of the Trump appointees working collegially with career staff. Besides, each “bad decision” determination might well require a rulemaking of its own.
More importantly, the core thesis of the Transparency Rule is not that the EPA knows which previous decisions were bad. It is that both the EPA and the public can have greater confidence in the reasonableness of the agency’s decisions if the scientific findings informing those decisions can be independently validated.
Contrary to the critics, the proposed Transparency Rule clearly identified one major problem the EPA sought to address: the “irreproducibility crisis.” Many so-called scientific results cannot be reproduced. All too often, quality control in several fields of scientific research is failing.
The Economist, in an editorial cited by the proposed rule, outlined the dimensions of the problem:
Too many of its findings that fill the academic ether are the result of shoddy experiments or poor analysis. … A rule of thumb among biotechnology venture capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
In another article cited by the proposal, economists Randy Lutter and David Zorn provided similar evidence of widespread and serious lapses in quality control:
Over the past few decades, the quality of published scientific research—which underlies most federal efforts to protect consumers generally, and specifically health, safety and the environment—has increasingly come into question. Researchers who tried to replicate the results of peer-reviewed psychology studies succeeded in 40 percent of cases. A similar attempt in the field of cancer biology was successful in only 10 percent of cases. When asked, scientists are a little more optimistic. A 2016 survey by the journal Nature found that 73 percent of over 1,500 scientists surveyed believed that at least half of the literature in their field could be independently replicated. In other words, more than one out of four scientists surveyed believed that most of the peer-reviewed literature in their field was not credible.
Lutter and Zorn acknowledged that the data in many health studies are subject to legally binding confidentiality agreements. In such cases, the authors recommended that when the EPA uses “research based on non-public data,” it should “explain why it believes the research is nonetheless sufficiently reliable to be used for regulation.” That is roughly what the final Transparency Rule requires.
The Economist identified three main causes of low-quality science: the publish-or-perish imperative of academic life, along with the associated career-related inducements to exaggerate, cherry-pick results, and spin causality out of chance correlations; publication bias against studies finding negative results; and lax peer review. For example, the article reported, “When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.”
The Transparency Rule could mitigate some of those problems at the margins. For example, for studies identified as “pivotal” to regulatory decisions, the rule requires the EPA to consider whether it should conduct a more thorough level of peer review than the study received prior to publication.
The rule also requires the EPA to give greater weight to pivotal studies that are either fully transparent (all models and data are accessible to the public) or available for independent validation under restricted access protocols. The mere knowledge that one’s research may be audited by others who have no stake in promoting it, or who may even be motivated to find flaws in it, should encourage all researchers to be more rigorous.
Another problem addressed by the Transparency Rule, albeit by implication rather than expressly, is the capture of public policy by government-employed and -funded experts. This is the theme of my colleague Pat Michaels’s book Scientocracy: The Tangled Web of Public Science and Public Policy.
Lest there be any confusion, the term “scientocracy” is not the name of a conspiracy theory and does not imply the existence of a secret master plan. Rather, it evokes the swirl of self-aggrandizing special interests predicted by public choice theory to result from the interaction of science and government. Public choice theory is simply the application of economic analysis to people’s actions in the arenas of government and politics. The basic premises are those of Econ 101. Most people most of the time act in their perceived self-interest. Incentives matter because people respond to them.
Two main behavior patterns predictably arise from the interaction of government and science. First, agencies will tend to fund research supportive of their budgetary and regulatory agendas. Second, universities will tend to hire and promote researchers who win agency funding (partly because large chunks of federal grants cover departmental administrative expenses).
Several follow-on consequences are also predictable. The same agency-funded researchers will supply most of the editors and peer reviewers of academic journals. In time, both universities and the academic literature will exhibit high levels of groupthink.
In addition, because public perceptions of environmental and health crises tend to expand the power, importance, and budgets of regulatory agencies, the academic literature will increasingly align with the agendas of policy makers preaching alarm and demanding new or more aggressive regulation of private decisions and activities.
President Eisenhower’s Farewell Address, famous for decrying the “unwarranted influence” of the “military-industrial complex,” also called out the government’s expanding role in the funding and direction of academic research as potentially dangerous to freedom of inquiry and democratic accountability.
Noting that a “steadily increasing share” of scientific research “is conducted for, by, or at the direction of, the Federal government,” Eisenhower observed that “the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research.” Partly due to the huge costs involved, “a government contract becomes virtually a substitute for intellectual curiosity.” He cautioned: “The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.” While Americans should respect scientific research, “we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite” (emphasis added).
That was in 1961. Since then, the administrative state, the dependence of university research on government contracts, and the webs of entanglement have exploded. For example, in Chapter 11 of Scientocracy, University of Virginia Law School professor Jason Johnston estimates the EPA has provided more than $210 million since 1999 to five university departments and the Health Effects Institute just to study the impacts of one form of air pollution—particulate matter—on public health. An entire academic discipline depends financially on one agency enmeshed in politics and wielding vast coercive powers.
As federal support for academic research has increased, academic freedom has declined. Most campuses today do not tolerate intellectual diversity. Granted, the “closing of the American mind” has multiple causes, but Ike really was on to something.
While the Transparency Rule is not a solution to scientocracy, it could help the public evaluate the reasonableness of agency decisions. The rule requires the EPA to identify which studies are “pivotal” to its regulatory decisions. That modest directive is significant because the agency has traditionally based its decisions on “weight of evidence” assessments. Such assessments purport to synthesize a vast literature, much of which is paywalled.
Commenters with the appropriate knowledge and experience to review aspects of the science often do not have the time or access to engage an entire literature. In contrast, if the agency identifies “pivotal” studies, then, for example, statisticians and others with statistical training might well be able to critique the statistical aspects of those studies. Such persons may also be able to compare the agency’s pivotal study to other credible studies that reach different or contrary conclusions. The same holds for people versed in other disciplines relevant to environmental science.
In short, the Transparency Rule combats the capture of public policy by agency-funded elites by encouraging a more open and competitive marketplace of ideas.
The rule’s preference for transparent studies works to the same end. It is a scandal that during the EPA’s quarter century of regulating fine particulate matter (PM2.5) pollution, only one independent researcher, retired UCLA professor James Enstrom, managed to obtain the population data from the foundational epidemiological study (Pope et al. 1995) for the EPA’s first (1997) PM2.5 national ambient air quality standard (NAAQS). Contrary to the original researchers’ findings, Enstrom found no significant relationship between PM2.5 and total mortality in the 1995 study’s population sample. Enstrom used de-identified data, and four years later, no subject participants or their families have lodged privacy complaints.
The Transparency Rule would make such re-analyses of pivotal studies more common, contributing to the open and competitive environment in which science—as distinct from propaganda—thrives.
If the Transparency Rule is substantive because it limits the EPA’s discretion in ways that shape public policy, and therefore cannot be promulgated under the Federal Housekeeping Statute, what alternative authorities are available? In my comment letter, I tried to rebut the EPN’s argument that none of the statutes the EPA administers allows the agency to prefer transparent to non-transparent science. My argument ran roughly as follows.
The EPN letter reviews passages from all eight statutes the proposed Transparency Rule cited as potential authority for changing how the EPA evaluates science in rulemaking. The EPN contends that because no statute lists “transparency” as a factor when evaluating scientific studies, the EPA has no authority to prioritize transparency.
That argument looks impressive at first blush but comes to nothing. Although the statutes repeatedly tell the EPA to use the “best available science,” “best peer-reviewed science,” or “latest science,” they do not define what science is. They do not need to, because educated people, including lawmakers, are expected to have some familiarity with science and how it works.
Science is a mode of inquiry employing a specific method to understand the world. The hallmark of the method is testing hypotheses against data or observations. It is a highly democratic method in the sense that the correctness or incorrectness of a hypothesis has nothing to do with a researcher’s reputation, party affiliation, or professional credentials. What matters is the agreement or disagreement between hypothesis and data.
Since testing hypotheses against data is the heart of the scientific enterprise, claims based on dark (secret) data are “science” only in an incomplete or equivocal sense. The results of a dark data study may be correct, but the study does not deserve the same degree of public confidence as science capable of independent validation. “Trust me science” is an inferior brand.
The EPN accuses the EPA of trying to “censor” science by depreciating studies based on secret data. Yet the EPN has no problem with the EPA “censoring” science by preferring peer-reviewed to non-peer-reviewed studies. Peer review, however, just means that some scientifically trained people found no obvious errors in a study and think it interesting enough—or politically valuable enough—to publish. As a quality control method, peer review is inferior to post-publication audit by researchers motivated to pick a study apart, including its selection and handling of data.
Transparency is implicit in the idea of the “best available science.” Neither the public nor the agency can have full confidence that a particular study is the genuine article if any important part of it is forever shielded from scrutiny by independent researchers. To belabor what should be obvious, transparency both enhances and reveals the quality of scientific work.
The very nature of the scientific method as a data-centric mode of inquiry justifies both the EPA’s preference for studies that can be independently validated and the agency’s attempt to create incentives increasing the supply of such studies over time.