CEI Comments on EPA Transparency Rule


The Competitive Enterprise Institute (CEI) strongly supports EPA’s effort to increase transparency in the scientific research underlying agency rules and research programs. CEI detailed in a 2018 paper why the goals of the transparency rule are both essential to sound policymaking and achievable.[1] That paper is included with these comments for reference.

Researchers around the world have recognized the need to improve transparency in scientific research by developing ways to systematize the process. In May 2014, Nature Publishing Group even launched a new publication, Scientific Data, whose mission is to find ways to promote scientific data sharing. Marking the launch, the journal’s inaugural editorial proclaimed: “The question is no longer whether research data should be shared, but how to make effective data sharing a common and well-rewarded part of research culture.”[2]

Unfortunately, the scientific community has not yet overcome researchers’ resistance to data sharing, even though refusal to share data undermines the scientific process. Scientists Chris Chambers of Cardiff University and Brian Nosek of the University of Virginia explain in The Guardian that transparency has become “undervalued” because researchers want to get published, and “the fierce competition of academia rewards those who secure large grants and publish innovative—if tentative—findings in prestigious journals, not scholars who instead focus on being transparent and careful.”[3] As a result, researchers often avoid data sharing and transparency out of fear that their novel findings may eventually be proved wrong when someone tries to replicate their work.

This reality has fostered both unintentional bias and scientific mischief, including the propensity of researchers to work the data until they generate a positive finding. As James Mills of the National Institute of Child Health and Human Development lamented back in 1993 in the New England Journal of Medicine: “‘If you torture your data long enough, they will tell you whatever you want to hear’ has become a popular observation in our office.”[4] The scope of this problem[5] is also detailed by the National Association of Scholars in its recent report, The Irreproducibility Crisis of Modern Science.[6] The authors highlight one outrageous case in which Professor Brian Wansink, head of Cornell University’s food behavior research lab, bragged in a blog post that he had schooled one of his students on how to churn data to generate positive results and get them published.[7]
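The arithmetic behind this concern is straightforward. The sketch below is purely illustrative (its parameters are assumptions chosen for exposition, not figures from the studies cited), but it shows how running repeated significance tests on pure noise reliably manufactures “positive” results:

```python
# Illustrative sketch: if a researcher runs many independent significance
# tests on pure noise at the conventional 0.05 threshold, the odds of at
# least one spurious "positive" finding grow rapidly with the number of
# tests. All parameter values here are assumptions chosen for illustration.

def chance_of_false_positive(num_tests: int, alpha: float = 0.05) -> float:
    """Probability that at least one of num_tests null tests looks 'significant'."""
    return 1 - (1 - alpha) ** num_tests

for k in (1, 5, 20, 100):
    print(f"{k:>3} tests -> {chance_of_false_positive(k):.0%} chance of a false positive")
# 1 test: 5%; 5 tests: 23%; 20 tests: 64%; 100 tests: 99%
```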

Even when researchers do not torture data, many positive associations will occur by mere chance or through unintentional bias. Stanford Professor of Medicine John Ioannidis demonstrated in a 2005 research article that most published research findings are false positives. He explained: “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”[8]
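Ioannidis’s point reduces to simple Bayesian arithmetic. The following sketch uses illustrative parameter values (they are assumptions for exposition, not numbers from his article): when only a small share of tested hypotheses is true to begin with, most “significant” findings are false positives even in well-run studies.

```python
# Illustrative sketch of the arithmetic behind Ioannidis's argument; the
# parameter values (power, alpha, priors) are assumptions chosen for
# exposition, not figures taken from the article itself.

def positive_predictive_value(prior: float, power: float = 0.8,
                              alpha: float = 0.05) -> float:
    """Share of statistically significant findings that reflect true effects."""
    true_positives = power * prior          # true hypotheses correctly detected
    false_positives = alpha * (1 - prior)   # false hypotheses wrongly "confirmed"
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: {positive_predictive_value(prior):.0%} "
          f"of positive findings are true")
# prior 0.5 -> 94%; prior 0.1 -> 64%; prior 0.01 -> 14%
```

The lower the pre-study odds that a hypothesis is true, the more the published record fills with chance findings, which is precisely why independent reanalysis of the underlying data matters.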

Unfortunately, regulators may rely on much of this faulty research, leading to poorly designed and often far too onerous regulations that impose burdens on society with no benefits in return. In many cases, regulations prove counterproductive and do more harm than good to public health. Accordingly, it is essential that regulators demand that the studies underlying their regulations meet the highest possible standards to ensure their validity. Increasing transparency through data availability will allow others to validate research findings or demonstrate when studies are not an appropriate basis for regulatory decisions.

EPA’s Science Advisory Board (SAB) likewise noted the important role this rule could play in improving science and related agency decisions. Specifically, the SAB notes in its comments:

The SAB recognizes the importance of this rule. It can enhance public access to scientific data and analytical methods, and help ensure scientific integrity, consistency and robust analysis. Strengthening transparency by improving access to data can lead to an increase in both the quantity and the quality of evidence that informs important regulatory science and policy decisions. The scientific community is moving toward adopting the precept of sharing accurate data and information to foster credibility, high-quality outcomes and public confidence in science. The SAB supports the adoption of this precept. …

To ensure that the rule is evidence-based EPA must provide greater clarity regarding details of the rule and how it will be implemented, as well as example analyses of how it would be deployed. The development of additional policy and/or guidance documents is strongly recommended to provide clarity on the procedures for conducting the proposed efforts.[9]

CEI agrees that an agency rule promoting transparency in science is greatly needed and that the proposed rule would benefit from some improvements to ensure it meets its goals. Some revisions made in the supplemental rule improve several key elements of the original proposal, yet other revisions would seriously weaken the rule’s value and its ability to achieve its goals. Accordingly, we offer the following input and suggestions.

I. Expanding the Scope of the Rule

CEI commends elements in Section 30.2 of the supplemental proposal that widen the scope of the rule. In particular, the addition of the definition of “influential information” and its inclusion in the definition of “pivotal regulatory science” is welcome because it indicates that the rule could also apply to research that affects both policy decisions and private sector decisions. This is important because any agency science can have far-reaching marketplace impacts, both positive and negative. It only makes sense that agency science—including “nonregulatory activities” such as research studies or risk assessments—comply with the best available scientific standards and transparency requirements.

In addition, the expanded number of terms and clarifications in the definition section represents an overall improvement to this section.

CEI also welcomes the provision in Section 30.3 that expands the rule to apply to more than dose response studies, noting its application to “data and models.” We agree that the rule should not be limited to a subset of scientific studies but should instead strive to improve all science the agency relies upon. However, this section limits the rule to science “used to justify significant regulatory decisions,” which might be construed to exempt “non-regulatory” agency research, such as risk assessments and other studies conducted within EPA’s Office of Research and Development. This section should be revised to state that the rule applies to all science “used to justify significant regulatory decisions or research produced by agency research programs for regulatory and educational functions.”

II. Exclusions and Exemptions in Section 30.3

CEI is concerned that too many provisions in this rule either exclude certain science or give the EPA Administrator too much power to disregard the rule completely. While it is especially important to ensure that any data is released in a fashion that protects the privacy of individuals, such goals should be attainable without completely destroying the utility of this rule, and the rule should note that such cases would be rare.

Exclusions Related to “Conflicts” with Statutory Law. The supplemental proposal states that the rule would not apply when its provisions “conflict” with statutory law. Obviously, laws prevail over agency rules, so this stated exclusion is unnecessary. In any case, it is not clear that any law on the books would prevent the agency from ensuring that the science it uses meets basic transparency standards. And in the unlikely situation that questions arise, it should remain within the purview of the courts to determine whether a conflict exists. Including this provision appears to do nothing more than give the Administrator an excuse to waive the rule for arbitrary reasons.

Exclusions of Research Deemed Not a “Significant Regulatory Action.” Section 30.3 excludes anything not considered a “significant regulatory action,” such as adjudications, enforcement actions, and permit proceedings. Some provision may be necessary to ensure the agency does not needlessly delay permits and the like. But, as noted, the rule should apply to agency-produced science, including risk assessments and research that could have impacts in the marketplace.

Exclusions of Drafts and Preliminary Analyses. Section 30.3’s exclusions of drafts and preliminary analyses are too vague and may exclude exceptionally important research. It might make sense to exclude preliminary analyses that are not publicly available. However, anything the agency releases or discusses publicly as a possible justification for a regulatory standard or for educational purposes should be subject to the transparency rule. Draft risk assessments, for example, may have profound market impacts, including impacts on litigation, and hence should not be exempt from the rule.

Exclusions of “Lab Samples.” Section 30.3 explicitly states that the rule does not apply to physical objects such as lab samples. This exclusion may be intended merely to address a practical matter—making clear that researchers need not submit their lab samples to EPA or the public. However, it raises questions about how the rule might include provisions to promote sound research lab practices. One approach would be to require that the agency place greater weight on studies that can demonstrate the application of “Good Laboratory Practices” (GLP).[10] After all, poor lab controls—which can lead to such problems as inadvertent contamination of samples—can produce vastly distorted and misleading study findings. GLP-compliant studies meet higher standards of study design, implementation, and peer review, which is why they should carry more weight in regulatory decisions. For that reason, regulators worldwide commonly require industry to apply GLP when conducting research for regulatory submission, to ensure “uniformity, consistency, reliability, reproducibility, quality, and integrity of chemical (including pharmaceuticals) nonclinical safety tests.”[11] Accordingly, greater transparency related to the handling and management of lab samples should be included as part of the transparency rule.

III. Administrator’s Right to Waive the Rule as “Impractical”

Section 30.9 allows the EPA Administrator to issue exemptions to this rule on a “case-by-case” basis if the Administrator deems accessing the data impractical. This provision risks rendering the entire rule meaningless. It might be used as an excuse to exempt all research conducted before the rule’s implementation, which would greatly undermine the rule’s purpose. If there are to be any exemptions, the rule should delineate the specific circumstances and underscore that such cases would be rare. In those rare cases, the rule should place the burden on the agency to demonstrate that there are no reasonable means of releasing the data without compromising subjects’ privacy or revealing confidential business information. The EPA Administrator’s opinion about practicality should not be the governing standard.

The agency’s SAB pointed out the obvious problems with this provision as well, noting:

Case-by-case exceptions without criteria may create public concerns about inappropriate exclusion of scientifically important studies. A framework and/or guidance document could also help EPA clarify how the rule will affect current scientific review procedures. It might be useful for the EPA to consider recommendations from a scientific and advisory committee when making waiver decisions.

The SAB’s suggestion that the agency deploy a formal process for making such determinations is sound advice. However, the rule itself—rather than a future guidance document—should outline detailed standards for the rare occasions when some portion of the data might not be widely available.

As part of this framework, the rule should provide clear, detailed guidelines and a formal process the agency must follow before deciding to exclude data of any kind from new or old studies. Such guidelines should include a provision that only the portions of data that would violate privacy or reveal confidential business secrets may be excluded, and that all other data and methodologies must be made transparent and publicly available. Before excluding data, EPA should conduct and make public a systematic review of the scientific literature to determine whether there is an alternative study that could be used in place of any study whose data it cannot fully access. If such studies exist and their data can be made accessible, EPA should rely on them in lieu of studies whose data is unavailable. If no alternative study exists, the Administrator should use the original study only if the researchers demonstrate that they explored all available scientific methods and tools for making the data accessible, but none proved sufficient to protect individual subjects’ privacy and/or confidential business secrets.

If after such efforts the Administrator deems the original study essential to agency decision making, the agency should publish its systematic review along with a detailed analysis explaining the specific legal reasons why the data must be excluded. The Administrator should further detail why the study authors were unable to provide the data in a format that allows reanalysis and replication without compromising privacy or confidential business secrets. The justification should also detail how much of the data falls within the exclusion, since only those specific pieces of information should be excluded. Details about study methods, model assumptions, and the like should always be transparent. And before the agency uses the study, this justification should be subject to public comment.

If after public comment the Administrator still determines that the study is essential and some data cannot be released publicly, the agency should establish a system for limited release, convening a committee of researchers to receive all the data subject to nondisclosure agreements. These researchers should include members with a wide range of perspectives on the issue to allow independent analysis, validation, and peer review. Should the original study authors refuse to provide the data under this closed, limited-access process, the study should be excluded from EPA decision making. Refusal of such limited independent validation renders a study largely meaningless from a policymaking perspective, as there is no way to determine whether its findings bear any relationship to scientific reality.

Sincerely,

Angela Logomasini, Ph.D.

Senior Fellow

Competitive Enterprise Institute

 


[1] Angela Logomasini, “EPA Transparency Rule Will Bolster Science and Improve Rulemaking: When Rules Are Based on the Best Available Data, Regulations Are More Effective,” CEI On Point 3246, July 17, 2018, https://cei.org/sites/default/files/Angela%20Logomasini%20-%20EPA%20Transparency%20Rule%20Will%20Bolster%20Science%20and%20Improve%20Rulemaking%20%281%29.pdf.

[2] “More Bang for Your Byte,” Scientific Data, Vol. 1 (May 27, 2014), p. 1, https://www.nature.com/articles/sdata201410.  

[3] Chris Chambers and Brian Nosek, “The First Imperative: Science That Isn’t Transparent Isn’t Science,” The Guardian, June 25, 2015, https://www.theguardian.com/science/head-quarters/2015/jun/25/the-first-imperative-science-that-isnt-transparent-isnt-science.

[4] James L. Mills, “Data Torturing,” New England Journal of Medicine, Vol. 329, No. 16 (October 14, 1993), pp. 1196-1199, https://www.nejm.org/doi/full/10.1056/NEJM199310143291613.

[5] Monya Baker, “1,500 Scientists Lift the Lid on Reproducibility: Survey Sheds Light on the ‘Crisis’ Rocking Research,” Nature, Vol. 533, No. 7604 (May 25, 2016; corrected July 28, 2016), pp. 452-454, https://www.nature.com/polopoly_fs/1.19970!/menu/main/topColumns/topLeftColumn/pdf/533452a.pdf.

[6] David Randall and Christopher Welser, “The Irreproducibility Crisis of Modern Science: Causes, Consequences, and the Road to Reform,” National Association of Scholars, April 2018, https://www.nas.org/images/documents/NAS_irreproducibilityReport.pdf.  

[7] Ibid., p. 17.

[8] John P.A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine, Vol. 2, No. 8 (2005), e124, https://doi.org/10.1371/journal.pmed.0020124.

[9] Science Advisory Board (SAB) Consideration of the Scientific and Technical Basis of EPA’s Proposed Rule Titled Strengthening Transparency in Regulatory Science, April 24, 2020, p. 1, https://yosemite.epa.gov/sab/sabproduct.nsf/LookupWebReportsLastMonthBOARD/2DB3986BB8390B308525855800630FCB/$File/EPA-SAB-20-005.pdf.

[10] Handbook: Good Laboratory Practices, 2nd ed. (Geneva: World Health Organization, 2009), http://www.who.int/tdr/publications/documents/glp-handbook.pdf.

[11] J. Kirk Smith, “A Comparison of the Guidance of FDA, OECD, EPA, and Others on Good Laboratory Practice,” in Handbook of LC-MS Bioanalysis: Best Practices, Experimental Protocols, and Regulations, 1st ed., ed. Wenkui Li, Jie Zhang, and Francis L. S. Tse (Hoboken, NJ: Wiley, 2014).