Relevance of data to humans should be assumed in the absence of appropriate data demonstrating non-relevance. In other words, if a lab rat gets sick or even simply shows a hormonally related effect (positive, negative, or largely irrelevant to health) when exposed to massive doses of a chemical, then we must assume the chemical will have such effects on humans—at least for regulatory purposes. In reality, such tests tell us very little about the impacts of trace-level chemical exposures on humans. For example, high doses of many healthy foods, from broccoli to soy, have shown effects in rodents, but that does not mean we must regulate them or that they pose any significant risk to humans at current exposure levels. The preference for regulation in cases where we cannot disprove an adverse effect is the essence of the precautionary principle. In the Food and Chemical Toxicology article, these scientists explain: “As all scientists should know, it is biologically and statistically impossible to demonstrate ‘absence of effect’ and thus ‘absence of relevance.’” The irony is that the resulting policies of precaution are not only arbitrary, they are dangerous. The authors explain:
Regulations that profoundly affect human activities, that legally impose significant fines and even detention, should not be based on irrelevant tests forced to be regarded as relevant by administrative dictates, and on arbitrary default assumptions of no thresholds. Such standards would be contrary not only to science, but to the very principles of an enlightened governance and social contract. Not only scientists but society itself would pay dearly if unscientific approaches were to undermine our everyday practice of science, and the stringency of data analysis and evaluation developed by scientific thinking over the past centuries. In the present instance, the very credibility of thorough and robust teaching, research, and scientific analysis is questioned. This calls for action, and as beneficiaries of public support it is the utmost responsibility of us scientists to resist and counteract any efforts that undermine the core of science and its continuing promise for the betterment of the human condition on the planet.
At a recent seminar hosted by George Washington University’s Regulatory Studies Center, Dr. Gary Marchant presented a paper that demonstrated the absurdity of the precautionary principle. The chart below, drawn from that report, highlights examples of how excessive precaution forces us to abandon sound science and reasonable policy.

Unfortunately, the problem is pervasive, and not just among regulators but among scientists themselves. Many are pushing a host of unfounded claims, particularly about chemicals, when the science simply does not support those claims. In addition to the politics surrounding precaution, scientists have myriad incentives to produce such junk science, not the least of which involves financial support for their research. Getting published is a draw for additional research funds, and journals like to publish positive results because they are more interesting. Accordingly, researchers work hard to produce positive findings, and some will even add spin to weak and meaningless "findings" to make them sound significant enough to garner publication and media interest. Left-leaning advocates point to such biases among industry researchers working on drug approvals for pharmaceutical companies. But the problem is arguably more prevalent within the politically driven field of government-funded chemical and environmental policy, where political biases play a big role alongside funding. Unlike industry, government and university researchers are not held accountable should they make a mistake. For example, if a drug harms the public, pharmaceutical companies pay dearly and can be driven out of business. In contrast, government and tenured academic researchers continue their work even when useful products are removed from commerce because of their research claims. Science conducted on the chemical Bisphenol A (BPA) offers myriad examples.
Consider how researchers misuse the emerging field of epigenetics to wrongly attack BPA as an “obesogen.” Even the name is cleverly marketed to capture headlines, but the research is weak and has not been reproducible. Other studies report weak associations in an attempt to gain relevance by suggesting their otherwise worthless findings are strengthened by their consistency with other studies—studies which also turn out to be meaningless. Yet grouping numerous, largely meaningless studies together does not make a compelling scientific case, even if it does capture headlines. The larger, more compelling, and scientifically robust research studies, along with comprehensive analyses of the full body of research, indicate that current BPA uses are safe. Yet this research does not generate as many headlines. Great danger lurks when science becomes so distorted. In the BPA case, it may lead to bans on its use in food can linings, where it prevents the development of pathogens such as E. coli. People may get sick, some may die, but no one will blame junk science or the researchers who advanced it. Cheers to the scientists who are taking a stand this month in Food and Chemical Toxicology. They may well reflect the views of a majority, but when it comes to speaking out against the abuse of science, they represent a small minority.