CPSC’s Scientific Shenanigans on Phthalates

Many “stakeholders” have complained about the process through which the Consumer Product Safety Commission (CPSC) developed its proposed rule related to a class of chemicals called phthalates—and rightly so. In particular, the agency’s failure to allow public comment and open peer review of its Chronic Hazard Advisory Panel report (CHAP report) underscores the fact that bureaucrats want to avoid scrutiny that might hold them accountable for rash and unscientific decisions.

Designed to make plastics soft and pliable, these chemicals have many valuable uses in a wide range of products—from blood bags to rain boots and swimming pool liners, as well as the children’s toys that are the subject of this regulation. Although these chemicals have been used safely for decades, activists and regulators are poised to essentially throw away these valuable technologies based largely on junk science.

While this rule only affects toys that children might place in their mouths or chew, it sets a terrible precedent. I already detailed how this rule might harm consumers in a blog post last week. Now let’s look at the so-called “science” behind it.

The justification for the proposed regulations is found within the CHAP report, a review and risk assessment that the agency released in July 2014. A key problem stems from the fact that the CHAP report relies on a selective review of limited studies that offer scant evidence that individual phthalates or cumulative exposure pose any significant risk to humans at current exposure levels.

Most of the “evidence” identified in the CHAP report that these chemicals pose health risks comes from lab tests that overdose rodents to trigger health effects. Such tests are not particularly relevant to humans, who metabolize the substance more effectively and are exposed to trace amounts that are orders of magnitude lower.

The human research highlighted in the CHAP report is not particularly compelling either. Many of these human studies are noted to be “small,” which limits their value for drawing any conclusions. And many of them report associations between mothers’ phthalate exposure levels—measured in single “spot” urine samples during pregnancy—and potential health effects in their babies. Given that humans metabolize phthalates relatively quickly, one-time spot measurements may be misleading about actual exposures, raising important questions about the utility of such studies.

We also must remember that associations do not prove cause and effect. Accordingly, if we are to use such statistical tests to draw conclusions, the body of research should include larger-scale studies that report consistently positive, relatively strong associations. But that is not the case in this situation. It is clear from simply reading the executive summary of the report that, overall, the human data is weak, inconsistent, and of limited value. The report itself reads:

Overall, the epidemiological literature suggests [emphasis added] that phthalate exposure during gestation may contribute to reduced AGD [reduced anogenital distance] and neurobehavioral effects in male infants or children. Other limited [emphasis added] studies suggest [emphasis added] that adult phthalate exposure may be associated [emphasis added] with poor sperm quality. The AGD effects are consistent with the phthalate syndrome in rats. However, it is important to note that the phthalates for which associations were reported were not always consistent and differed across publications. In some cases, adverse effects in humans were associated with diethyl phthalate exposure, although diethyl phthalate does not cause the phthalate syndrome in rats.

Judging from this statement, the studies used for the CHAP report show, at best, either no associations or weak ones. Moreover, a body of research that merely “suggests” a relationship and is based on “limited studies” that are “not always consistent” is hardly compelling. Such terminology reveals that these studies are not really useful for drawing the conclusions found in the CHAP report.

Such concerns are detailed in an external scientific peer review published by ToxStrategies, which was funded by the American Chemistry Council and released in September 2014. In the ToxStrategies report, scientists offered independent opinions about the science and methodologies employed by the CHAP report’s authors, and their comments underscore the panel’s highly questionable conclusions. For example, Douglas L. Weed, M.D., M.P.H., Ph.D. (epidemiology), explained:

The CHAP report is not a systematic review of the available scientific evidence and, as such, is of questionable reliability and validity, lacking in the objectivity and transparency generally recognized as critical by the scientific community. The credibility of the recommendations in this report are therefore questionable, given that they are not “evidence-based” as the co-chair of the committee, Dr. Hauser, recognized and mentioned in a separate review published in the peer-reviewed literature (Braun et al., 2013).

Dr. Weed further details that the CHAP failed to provide a “critical and balanced review of the epidemiological evidence,” omitting “a relatively large number” of studies that found no association between the chemicals and human health. “In addition,” he points out, “many of these reviews disagree with the CHAP report’s assessment of the epidemiology (and of the use of animal models to represent adverse health events in humans).” He continues: “The CHAP report misrepresents the results of some (but not all) of the available epidemiological evidence, ignoring or downplaying negative results and emphasizing positive (i.e. apparently harmful) results.”

But such shenanigans are just the tip of the iceberg! Not only does the report misrepresent the research studies, it employs outdated data for its risk assessment. Stay tuned for more on that tomorrow in another post.

For much more, see my comments to the CPSC on the proposed phthalates rule.