Precaution (Of A Sort) Without Principle

Miller and Conko Article in Priorities For Health: Volume 13, Number 3

Published by The American Council On Science And Health

There is a comprehensive proposition, allegiance to which has resulted in tens of thousands of human deaths and can threaten future generations. Although this proposition is in vogue, relatively few persons sympathetic to it recognize it by its name.

The Precautionary Principle

It used to be said that the most fearsome statement in the world is: “I'm from the government and I'm here to help you.” An apt variation of that theme would relate to governmental use of the so-called precautionary principle to reduce the risks of technology. Controversy regarding the applicability and worth of the precautionary principle stems from clashes over decades between environmental and public health activists on one side and scholars and risk-analysis professionals on the other. These clashes concerned the question of what constitutes appropriate handling of various risks, such as those from consumer products and industrial processing. The issues in such clashes have pertained, for example, to chlorinated water, manmade pesticides, synthetic hormones in beef, and gene-spliced foods. Underlying such controversies has been the question “As surrogates for society, how should regulators approach risk when certainty about potential harm, in terms of likelihood or magnitude, is lacking?”

Commentators exceptionally averse to risk have advocated establishment of a “precautionary principle” whose governmental implementation would reduce risks to the public. No principle or proposition so named has a widely endorsed definition. A common formulation of the precautionary principle is that governments should take regulatory measures to prevent or restrict even actions (a) that are mere conjectural threats to human health or the environment and (b) whose potential significance as such a threat is questionable in terms of scientific evidence. Radical environmentalist groups have brandished the precautionary principle in prevailing on governments to assail and intimidate the chemical industry and, more recently, the food industry.

Incorporating the precautionary principle into policymaking is sometimes represented as “erring on the side of safety” or as faithful to the maxim “better safe than sorry”—the underlying assumption being that underregulation of risky activities could result in severe harm to human health or the environment, whereas “overregulation” causes little or no harm. This assumption is false. Typically, the precautionary principle is applied to R&D and to commercial products in a way that can increase risk. Indeed, governmental exercise of the precautionary principle has not only laid waste to several industries—it has also resulted in the loss of tens of thousands of human lives.

Yet radical environmentalists are demanding governmental application of the precautionary principle to a wider variety of technologies and product types. Such application could increase its death toll by orders of magnitude. Of course, with any new activity or product—ranging from the siting of a power station to the introduction of a new drug into pharmacies—one should consider potential risks before proceeding. But the precautionary principle focuses on the mere possibility that even well-researched technologies could generate unique, extreme, or uncontrollable risks. The net effect of most technologies that have generated new risks has been favorable; typically, such technologies have diminished risks far more serious than those they entail—automobile air bags, blood transfusions, and MRI scans, for example. There is not so much as a hint of this in the precautionary principle.

Several subjective factors can cloud thinking about risks. Studies of risk perception have shown that persons tend:

  • to overestimate risks that are unfamiliar to them, inscrutable, invisible, out of their personal control, and/or potentially catastrophic—e.g., electromagnetic radiation or pesticide traces in foods—and
  • to underestimate risks they consider clear and comprehensible, such as chain-sawing or motorcycling.

Such misperceptions contribute importantly to the intricacy of efforts to regulate risk in the U.S.—for democracies must ultimately take into account public opinion, but good government also requires discounting heuristic errors or prejudices. British statesman Edmund Burke (1729-1797), a champion of human rights, emphasized government's pivotal role in making such judgments: “Your Representative owes you, not only his industry, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion.” Government officials should make decisions that are rational and promote the public interest, even when they expect that such a decision will be unpopular. This applies especially to government officials granted, as a shield against political machinations, what amounts to lifetime tenure.

Caution: Precautionary Principle at Work

Implementation of the precautionary principle is often hazardous because it draws the attention of consumers and policymakers from known, significant dangers to human health and diverts public health resources from the handling of such dangers. A case in point is the misbegotten crusade to eliminate all chlorinated compounds from all uses worldwide. By the late 1980s, radical environmentalists were attempting to convince water authorities around the world that carcinogenic byproducts from drinking-water chlorination might constitute a potential danger. Mired in a budget crisis, officials in the Peruvian government spun such allegations into the basis for discontinuing chlorination of drinking water in much of their country. That move contributed to the acceleration and spread of Latin America's 1991-1996 cholera epidemic, which killed at least 11,000 of its more than 1.3 million sufferers.

Antichlorine activists have since extended their campaign to so-called endocrine disrupters, or endocrine modulators, which they claim are responsible for various abnormalities and diseases. Some of the findings that were crucial to the development of the endocrine disrupter hypothesis either have been retracted or could not be reproduced when the original studies were replicated. The American Council on Science and Health has examined the endocrine disrupter hypothesis and has concluded that, while certain environmental contaminants at high levels adversely affect laboratory animals, in some cases through hormonal mechanisms, human exposures to these contaminants are much lower than those of the lab animals tested. And the ongoing trend has been toward reducing concentrations of many such contaminants, including several conjectural endocrine disrupters, in air, water, and soil.

The observation is well documented that, because of differences in levels of susceptibility and exposure, a chemical that at high concentrations may cause cancer in laboratory animals is not necessarily a cancer risk for humans. No consistent association has been demonstrated between nonexperimental exposures to manmade chemicals in the environment and an increase in the incidence of cancers of hormonally sensitive human tissues. Human diets contain many phytoestrogens (estrogenlike substances that plants make), and dietary phytoestrogen exposures far exceed dietary exposures to alleged endocrine disrupters. But in the vast majority of instances of dietary phytoestrogen exposure, there has been no scientifically established link between such exposure and adverse human health effects.

From the available epidemiological data comes no consistent, cogent evidence of any causal relationship between human health problems and industrial chemicals that are conjectural endocrine disrupters. Yet, on the basis of the precautionary principle, radical environmentalists and many government regulators continue to call for restricting, or even banning outright, various products they represent as containing endocrine disrupters.

More recently, antichlorine activists have begun to assail phthalates—liquid organic compounds added to certain plastics to soften them. Many products for medical, industrial, and/or household use are made with phthalate-treated plastics, which have been around for more than 50 years. These include blood bags, tubing, flooring, wire coating, gloves, and such infant toys as rattles and teething rings. Invoking the precautionary principle, the activists warn that phthalates might have numerous effects adverse to human health—even though the weight of scientific evidence opposes this caution.

Governments have taken such warnings seriously. Consequently, the use of phthalates has been restricted officially or made unlawful in several countries; whole industries have been terrorized; and consumer and medical options have been unnecessarily restricted.

There are perils of governmental efforts to regulate risks according to the precautionary principle that are more intricate and abstruse than those relating to restrictions on products beneficial to consumers. Expenditures for implementation of and compliance with any governmental regulation (rational or not) have an income-related effect that manifests the correlation of wealth and health—an issue the late political scientist Aaron Wildavsky popularized. It is not by chance, Wildavsky argued, that the death rate in rich societies is lower than in poor societies. To diminish a community's wealth is to increase its health risks. Wealthy persons can acquire medical care that is superior to that which poor individuals can procure; they can much more easily afford diets that are both convenient and healthful; and on the whole their lives are less stressful. The adverse effects of income deprivation include stress-related problems, such as hypertension, heart attacks, depression, and suicide. Quantification of the relationship between income deprivation and mortality is difficult, but findings from university-based studies suggest, conservatively, that for every $7.25 million spent in connection with governmental regulations, there is in consequence one additional human fatality. Thus, it would be reasonable to expect that the excess costs required by “precautionary” regulations for various consumer products—excess costs in the tens of billions of dollars every year—would result in the loss of thousands of human lives per year. The expression “regulatory overkill” is not merely rhetorical.
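The back-of-the-envelope arithmetic behind this estimate can be made explicit. The short Python sketch below is ours, not the authors'; it simply applies the $7.25 million-per-statistical-fatality figure cited above to a range of assumed annual regulatory burdens:

```python
# Wildavsky-style "wealth is health" estimate: regulatory costs divert income,
# and income deprivation carries a statistical mortality cost.
# The cost-per-fatality figure is the one cited in the article; the
# burden figures below are illustrative assumptions.

COST_PER_INDUCED_FATALITY = 7.25e6  # dollars of regulatory cost per statistical death

def induced_fatalities(annual_regulatory_cost: float) -> float:
    """Statistical deaths per year implied by a given annual regulatory burden."""
    return annual_regulatory_cost / COST_PER_INDUCED_FATALITY

# "Tens of billions of dollars every year" in excess precautionary costs;
# e.g. $50 billion per year implies roughly 6,900 statistical deaths per year.
for billions in (10, 30, 50):
    deaths = induced_fatalities(billions * 1e9)
    print(f"${billions} B/yr -> about {deaths:,.0f} statistical deaths/yr")
```

Under these assumptions, any burden in the tens of billions of dollars per year translates into thousands of statistical deaths per year, which is the magnitude the paragraph above asserts.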

Legitimizing the Illegitimate

Among skeptics during the last few years, questioning of the precautionary principle and its usage has intensified. In response to those challenges, the European Commission (EC), prominent as a user and abuser of the precautionary principle, last year published a formal report whose purpose was to clarify and popularize the principle. The EC resolved that precautionary restrictions under its auspices would be “proportional to the chosen level of protection,” “nondiscriminatory in their application,” and “consistent with other similar measures.” The EC also avowed that persons responsible for making decisions on behalf of it would weigh “potential benefits and costs.”

Yet in ongoing disputation between the EC and the United States and Canada over hormone-treated beef cattle, the EC has based its decision not on information from scientific risk assessment, but on the precautionary principle. It has argued that the precautionary principle sanctions restricting imports of American and Canadian beef from cattle treated with certain growth hormones. Both a World Trade Organization (WTO) dispute resolution panel and the WTO's Appellate Body have acknowledged that the general meaning of the precautionary principle—“Look before you leap”—could be found in WTO agreements, but they have also pronounced that the presence of this meaning in such agreements does not relieve the EC of its obligation to base its policies on scientific risk assessments. And in this case of hormone-treated cattle, such an assessment was clearly behind the stand of the Canadian and U.S. governments. A scientific committee created by the WTO dispute resolution panel decided that even the data from the scientific studies the EC cited in self-defense did not suggest a human safety risk from using the hormones in question according to accepted animal husbandry practices. The WTO ruled in favor of Canada and the U.S. because the EC had failed to demonstrate real or imminent harm from such use. Yet the EC continues to enforce restrictions on hormone-treated beef—a blatantly unscientific and protectionist policy belying the Commission's declarations that the precautionary principle will not be abused.

The Precautionary Principle and Biotech

Perhaps the European Commission's most egregious use of the precautionary principle concerns products of the new biotechnology, or gene splicing. By the early 1990s, the EC and certain nations of Western Europe erected unscientific and unnecessarily strict rules regarding testing and commercialization of gene-spliced crop plants. In 1999 the EC explicitly invoked the precautionary principle in establishing a moratorium on approvals of all new gene-spliced crop varieties pending ratification of a proposal for even stricter regulations on such plants—regulations that would become incumbent on all the nations of the European Union. Notwithstanding the EC's assurances that the precautionary principle would not be abused, all of the Commission's stipulations have been ignored or distorted with respect to gene-spliced—in the EC's argot, “genetically modified,” or “GM”—foods. Rules for gene-spliced plants and microorganisms lack consistency, discriminate, and are disproportional to risk. Indeed, they are arguably inversely proportional to risk: Gene-spliced organisms are more precisely crafted and predictable than are organisms modified through traditional genetic techniques, but regulations concerning development and usage are much less stringent for the latter organisms than for gene-spliced organisms. This is a violation of the cardinal principle of regulation, which is that scrutiny should be commensurate with risk.

Dozens of scientific bodies—including the U.S. National Academy of Sciences (NAS), the American Medical Association, the Royal Society (of the United Kingdom), and the World Health Organization—have analyzed the issue of what constitutes appropriate monitoring of the enterprise of gene splicing. They have arrived at conclusions remarkably congruent with one another:

  • The high-tech methods for genetic improvement are an extension, or a refinement, of older, far less exact modes of genetic modification.
  • Adding new genes to a plant or microorganism does not in itself make it less safe, either for the environment or for consumption.
  • The kinds of risks associated with gene-spliced organisms are the same as those of organisms either unmodified or traditionally genetically modified.
  • Regulations should be based on the risk-related characteristics of individual products, and not on the techniques used in their development. 

The National Research Council (NRC)—the research arm of the NAS—analyzed modern gene splicing techniques in 1989. It concluded that “the same physical and biological laws govern the response of organisms modified by modern molecular and cellular methods and those produced by classical methods” and that gene splicing is more exact, circumscribed, and predictable than are traditional means of genetic modification. Its report of this examination states:

[Gene splicing] methodology makes it possible to introduce pieces of DNA, consisting of either single or multiple genes, that can be defined in function and even in nucleotide sequence. With classical techniques of gene transfer, a variable number of genes can be transferred, the number depending on the mechanism of transfer; but predicting the precise number or the traits that have been transferred is difficult, and we cannot always predict the [characteristics] that will result. With organisms modified by molecular methods, we are in a better, if not perfect, position to predict the [characteristics].

Yet across the world, the fact of gene splicing alone is typically treated as sufficient reason for extraordinary pre-market testing and other requirements concerning effects on human health and the environment—requirements that are both time-consuming and extremely costly—while development of plants through traditional modes of genetic modification entails no such requirements. Indeed, each year dozens of new plant varieties produced through traditional modes of genetic improvement (e.g., hybridization) enter the marketplace without any prerequisite scientific review and without special labeling. Many such varieties cannot originate naturally; these are results of wide crosses—hybridizations between plants of different species or genera. For example, the relatively new manmade “species” Triticum agropyrotriticum, cultivated for food, is a cross between bread wheat—common wheat (Triticum aestivum)—and a weed sometimes called “quackgrass” or “couch grass” (Agropyron repens). T. agropyrotriticum, which has all of wheat's chromosomes plus an entire quackgrass genome, has been produced independently in Canada, the U.S., France, Germany, China, and the former Soviet Union.


In theory at least, problems of several kinds could result from such a construction—the product of introducing tens of thousands of extra genes into a long-standing, widespread plant variety. Such hypothetical problems include plant toxicity, an increase in allergenicity, and invasiveness in the field greater than that of either parent plant. Yet regulators have evinced no concern about these possibilities. Regarding gene-spliced agricultural products, they have preferred to focus on hypothetical (and largely imaginary) risks from the process of gene splicing rather than on risks from the products themselves.

The absence of regulatory-agency surveillance of the use of a technique called “induced-mutation breeding” serves as an illustration of the illogical disparity between the policy of such agencies toward plant gene splicing and their policy toward plant development by analogous means. Induced-mutation breeding, which has been in widespread use since the 1950s, involves subjecting cultivated plants or their seeds to ionizing radiation or toxic chemicals in order to induce random genetic mutations. Such exposures usually either kill the plants or seeds or result in undesirable genetic mutations. Sometimes, however, though rarely, something desirable results from the induced mutation(s)—perhaps an agricultural advantage such as a change in plant length or an increase in seed output or fruit size. In no instance of induced-mutation breeding does anyone know exactly which genetic mutation(s) are responsible for any new trait, or what other induced mutations might have occurred in the plant. Yet plants from approximately 1,400 varieties generated by induced-mutation breeding have been marketed in the last half-century without any relevant formal pre-market regulations. Plants of several of these varieties—including two of squash and a variety of potato—were banned from commerce because they contained endogenous toxins at dangerous levels.

What does such inconsistency mean in practice? Having a packet of seeds x-rayed at a hospital and planting the irradiated seeds in one's back yard requires no approval from any governmental authority. But anyone conducting gene splicing is required by law to confront a mountain of bureaucratic paperwork and enormous expense and, after the site of the experiment has been published (a legal requirement), is liable to vandalism by anti-progress thugs.

The U.S. Department of Agriculture requirements for paperwork and field trial design make American field trials of gene-spliced organisms 10 to 20 times more expensive than field trials of those of their non-gene-spliced counterparts that have been altered through traditional genetic techniques. Developers of the latter organisms are exempt from regulations of the sort that bedevil developers of gene-spliced organisms, simply because experience with organisms genetically modified without gene splicing spans thousands of years.

Even the old, inexact genetic techniques, whose results are less predictable than those of gene splicing, entail minimal—though, as illustrated above, not zero—risk to human health and the environment.

It is a paradox that the only plants for which exhaustive, repeated pre-commercial review is mandatory result from craftsmanship more accurate than that of the products of traditional genetic techniques. If regulators applied the precautionary principle rationally and fairly, they would pay more precautionary attention to the latter products than to gene-spliced organisms.

Moreover, notwithstanding the assurances of the European Commission and some other advocates of the precautionary principle, regulators responsible for gene-spliced products seldom take into account the potential risk-reducing benefits of the technology. Some of the most successful modern developments in agricultural biotech have involved splicing into plants, particularly corn and cotton, a copy of a bacterial gene that codes for a protein toxic to the insects that feed on the plant but nontoxic to any mammal. For example, the gene-spliced corn variety called “Bt corn” is not only more insect-resistant than non-gene-spliced corn; it also yields grain that is less likely to contain fusarium—a fungus that insects often convey into the plant and that is toxic to plants, humans, and other animals. By disabling such insects, the Bt protein restricts fusarium infestation; the protein thus also restricts plant concentrations of a toxin called “fumonisin,” which fusaria produce and whose ingestion can lead to esophageal cancer in humans and to fatal diseases in horses and swine. Moreover, at harvest, the insect-part concentrations of Bt corn are lower than those of conventional corn varieties.

The production of Bt corn is less expensive than the cultivation of non-gene-spliced corn and more environmentally favorable—because Bt corn demands less pesticide application. Plus, Bt corn is of higher quality.

Other gene-spliced products, such as herbicide-resistant crops, have paved the way for reductions in herbicide use and for the adoption of farming practices that are more environmentally favorable. (Herbicide-resistant crops can make tilling the land to control weed growth unnecessary.) The current development of high-yield gene-spliced crops promises the opportunity for producing more food per acre worldwide—and thus for leaving more land for other purposes. And widespread cultivation of plant varieties recently developed through gene splicing to have concentrations of essential nutrients higher than those of their non-gene-spliced counterparts could dramatically improve the health of hundreds of millions of malnourished residents of developing countries. Such potential tangible environmental and public health benefits are treated as of little or no significance in risk calculations whose basis is the precautionary principle.

Regulatory agencies have imposed standards for gene-spliced plants that their non-gene-spliced counterparts could not satisfy. And even when such standards are thoroughly met—necessarily through rigorous analysis and testing—often regulators say they remain unsatisfied. But, in scientific circles at least, there is little or no doubt that the new biotechnology is safe. Both theoretical and empirical evidence shows that gene-spliced organisms are safe and extraordinarily predictable. Gene-spliced plants have been cultivated on more than 100 million acres every year for several years, and more than 60 percent of the processed foods in the U.S. have ingredients derived from gene-spliced organisms. Yet agricultural gene splicing has never resulted in any physical harm to anyone and has never affected any ecosystem adversely.

The Precautionary Principle and Legal Uncertainty

The sordid history of the precautionary principle strongly suggests that it should not shape the development of public policies. The European Commission's abuses of the precautionary principle exemplify that, without a commitment to act logically and responsibly, clarifications and promises have little use. It is astonishing that the EC described its year 2000 communication on the precautionary principle as an attempt to heighten consistency and clarity—yet expressly refused to define the principle. It thereupon stated: “. . . [I]t would be wrong to conclude that the absence of a definition has to lead to legal uncertainty.” Although it is not unusual to rely on regulatory agencies and courts for definitions and elaborations of statutory policies, the EC's refusal to define what purports to be a fundamental principle makes confusion and mischief inevitable; the absence of an EC definition of the precautionary principle subordinates the legal rights of innovators and the legal obligations of regulators to arbitrary decisions of governments and even of particular regulators. In its application, the precautionary principle seldom includes either evidentiary safety standards or procedural criteria for gaining approval from regulatory agencies—irrespective of the quantity and quality of evidence. Thus, without any means of ensuring that their decisions will reduce overall risk—or even that the decisions will be sensible—regulators in effect have carte blanche in deciding on what is “unsafe” and what is “safe enough.”

Contrary to how its proponents characterize it, acceptance of the precautionary principle tends to reduce the accountability of governments, because regulators can cite the principle to justify virtually any decision of any regulatory agency. Conferring wide discretion on government regulators often leads to dubious public policies. Economist Milton Friedman has said that individuals and organizations usually act to their own advantage, irrespective of whether or not their actions are in the public interest. Certainly, regulatory-agency policies and case-by-case decisions, on the whole, manifest the truth of that insight. Civil servants and political appointees spend much time and energy on trying to keep themselves out of trouble. On top of this, the arbitrariness and lack of accountability of many functioning systems of regulation foster various patterns of behavior that are contrary to the public interest.

Biased Decision-Making

While the European Union is prominent as a user of the precautionary principle, American regulatory agencies also employ it, on issues ranging from prescription drugs, toxic chemicals, and the new biotechnology to global climate change and gun control. The expression “precautionary principle” is not used in U.S. public policy, but efforts toward regulation of such products as pharmaceuticals, food additives, gene-spliced plants and microorganisms, and pesticides and other chemicals are indubitably consistent with the precautionary principle. Indeed, with respect to risks of some kinds—new medicines, lead in gasoline, and nuclear power, for example—American regulators seem even more approving of the precautionary principle than their European peers. This tendency of American regulatory agencies also applies to gene splicing, though not as radically as the European orientation. Regulatory-agency usage of the precautionary principle in the U.S. differs from that in Europe largely in terms of degree and semantics. In both the U.S. and Europe, public health and environmental regulations usually require a risk assessment—an estimation of the extent of and exposure to potential dangers—and a subsequent process of deciding how to regulate the risk. Following the precautionary principle can, through a systematic bias, pervert this process. Regulators who do so face an unbalanced incentive structure in which addressing potential harms from new products is mandatory but addressing the inconspicuous risk-reducing properties of new or little-used products is not. The result is a lopsided decision-making process that is intrinsically biased against change and, therefore, against innovation and progress.

To comprehend this, it can be useful to grasp the two basic kinds of mistaken decisions that regulators can make: Type I errors and Type II errors. Approving entry of a harmful product into the marketplace is a Type I error. Prohibiting, delaying, or withdrawing approval for such entry of a therapeutically useful chemical, for example, is a Type II error. The consequences of either error are adverse to the public—but not necessarily to the regulator who commits the error, particularly if the error is of Type II.
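The asymmetry this creates can be illustrated with a toy expected-cost model. The sketch below is ours, not the authors', and every number in it is hypothetical, chosen only to show how a regulator's private incentives can diverge from the public interest:

```python
# Toy model of the regulator's dilemma. Cost values are hypothetical units
# of harm. A Type I error (approving a harmful product) is highly visible
# and costly to the regulator personally; a Type II error (blocking a
# useful product) harms the public but rarely the regulator.

PUBLIC_COST = {"type_I": 100, "type_II": 100}   # the public is harmed either way
REGULATOR_COST = {"type_I": 100, "type_II": 1}  # career risk is one-sided

def expected_cost(p_harmful: float, approve: bool, costs: dict) -> float:
    """Expected cost of a decision, given the probability the product is harmful."""
    if approve:
        return p_harmful * costs["type_I"]        # chance of a Type I error
    return (1 - p_harmful) * costs["type_II"]     # chance of a Type II error

p = 0.05  # suppose a 5% chance the product is actually harmful
for who, costs in (("public", PUBLIC_COST), ("regulator", REGULATOR_COST)):
    approve_cost = expected_cost(p, True, costs)
    reject_cost = expected_cost(p, False, costs)
    best = "approve" if approve_cost < reject_cost else "reject"
    print(f"{who}: approve={approve_cost:.2f}, reject={reject_cost:.2f} -> {best}")
```

With these assumed costs, approval minimizes expected harm to the public, yet rejection minimizes the regulator's personal expected cost: precisely the self-protective tilt toward Type II errors described in the following paragraphs.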

Examples of this inconsistency abound in the U.S. and Europe. A classic illustration involves the FDA's licensing, in 1976, of the marketing of the swine flu vaccine. This is generally characterized as a Type I error, because although the vaccine was effective in preventing influenza, one of its side effects was not recognized as such at the time of FDA approval. And that side effect was serious: A small number of patients developed Guillain-Barré syndrome, with temporary paralysis. Mistakes of this kind are conspicuous, and news of them elicits swift responses from the media, the public, and Congress. Both the developers of the product and the regulators who sanctioned its marketing are excoriated and punished in such contexts as congressional hearings, newspaper editorials, and TV newsmagazines. A regulator's approving entry into the marketplace of a harmful high-profile product might spoil his or her career irreparably even if the approval was in good faith. Therefore, regulators often make official decisions in self-defense rather than in the public interest—and thus commit Type II errors.

Former FDA Commissioner Alexander Schmidt correctly summed up the regulator's dilemma:

In all our FDA history, we are unable to find a single instance where a Congressional committee investigated the failure of FDA to approve a new drug. But, the times when hearings have been held to criticize our approval of a new drug have been so frequent that we have not been able to count them. The message to FDA staff could not be clearer. Whenever a controversy over a new drug is resolved by approval of the drug, the agency and the individuals involved likely will be investigated. Whenever such a drug is disapproved, no inquiry will be made. The Congressional pressure for negative action is, therefore, intense. And it seems to be ever increasing.

Excessive governmental requirements and unreasonable decisions can get a new product “disapproved,” as Schmidt put it, or can delay approval. Unnecessary or capricious delays are anathema to innovators, dampen competition, and make for ultimate overpricing of the product.


A case in point was the FDA's precipitate and excessive response to the 1999 death of a patient in a University of Pennsylvania gene therapy trial for a genetic disease. The cause of the incident had not been identified, and products of the class in question—enfeebled adenoviruses carrying human DNA intended to rectify an inherited defect—had been used in many patients without causing death, and with serious side effects only in a small percentage of these patients. The FDA overreacted to the 1999 death: Not only did the agency halt the trial during which the patient had died; it also halted (a) all other gene therapy trials at the University of Pennsylvania, (b) similar studies at other universities, and (c) experiments the drug company Schering-Plough had been conducting with adenoviral preparations (one a treatment for liver cancer, the other for colorectal cancer with liver metastasis). The FDA even halted experiments that didn't involve adenoviruses. Because of these actions—and by excoriating and humiliating, publicly, the researchers who had engaged in the trial in question—the FDA cast a pall over the entire field of gene therapy and set back research in that field considerably, perhaps by a decade.

 

Although they can seriously affect the health of the public, the Type II errors of regulatory agencies seldom become objects of public attention. Relatively few persons ever become well informed about any specific error of this sort; these few include some employees of the product's manufacturer and some stock market analysts and investors. The public almost never becomes aware that a Type II error has precipitated a corporate decision to give up on a product. Pharmaceutical companies are loath to complain publicly about any FDA misjudgment, largely because the agency has much discretionary influence over their ability to test and market products. Consequently, there may be no publicity concerning the loss of potential social benefits or culpability for such a loss.

Few organizations aggressively publicize the Type II errors of regulatory agencies; those that do include AIDS advocacy groups that scrutinize FDA reviews of certain products. Congressional checks on regulatory-agency performance likewise seldom focus on Type II errors, which tend to make for less exciting hearings than Type I errors. When Type II errors are exposed to the public, regulators often try to justify them as "erring on the side of safety"—in effect invoking the precautionary principle—a ploy that plays well to the gallery. Legislators, the media, and the public frequently rubber-stamp this euphemism, and thus our system for regulating the pharmaceutical industry becomes progressively less responsive to the public interest.

The FDA is not unique in this regard, of course. Every regulatory agency in the U.S. is subject to various social and political tensions that make it an object of castigation when a product it has sanctioned proves dangerous (even if the net effect of marketing the product is desirable), but not when it keeps a beneficial product from consumers.

“Cautionless Precaution”

For many activists promoting the precautionary principle, the underlying issue is not genuinely precaution or safety. Most advocates of the precautionary principle are less pro-safety than they are anti-big-business and anti-high-technology. As a group, such activists are consummate opportunists. When they learn, for example, of a scientific finding that sperm counts are decreasing in any particular population, they blame whichever hobgoblin they want to banish.

A case in point is the worldwide bestseller Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival?—A Scientific Detective Story (Penguin Books USA, 1996), the bible of upholders of the endocrine disrupter hypothesis. Its central premise—that manmade estrogenlike chemicals unfavorably affect health in various ways—is unconfirmed scientifically, and much of the research cited in favor of the hypothesis has been discredited. Moreover, many of the statements in Our Stolen Future are equivocal, for example: “Those exposed prenatally to endocrine-disrupting chemicals may have abnormal hormone levels as adults, and they could also pass on persistent chemicals they themselves have inherited—both factors that could influence the development of their own children” (emphasis added). The authors further maintain, without any actual evidence, that exposures to many chemicals in small amounts involve synergy—i.e., that the chemicals constitute a witches' brew that is much more damaging than its constituents would suggest. For anti-high-technology ideologues, the mere fact that such questions have been raised demands that inventors, manufacturers of innovations, and/or governmental bodies explore the questions—while the ideologues go on to pursue other irksome plausibilities and raise other purportedly demanding questions.

The cynical behavior of advocates of the precautionary principle is reminiscent of the classic admission of one of Charles Schulz's Peanuts characters: “I love humanity. It's people I can't stand.” On issues ranging from nuclear power to gene-spliced plants, many activists are motivated by a personal, passionate—and parochial—vision of what constitutes a “good society” and how to attain it. One well-known opponent of gene splicing, with the Union of Concerned Scientists, has tried to justify this opposition thus: “Industrialized countries have few genuine needs for innovative food stuffs [sic], regardless of the method by which they are produced.” She has concluded that “the malnourished homeless” represent a problem whose antidote lies “in resolving income disparities, and educating ourselves to make better choices from among the abundant foods that are available.”

Greenpeace, one of the ranking defenders of the precautionary principle, demands the “complete elimination [from] the food supply and the environment” of gene-spliced foods, pure and simple. Greenpeace and other groups advocate and perpetrate vandalism of field trials designed to generate data on the environmental safety of gene-spliced plants.

Conclusions

Societal risks are of two varieties. One arises from prematurely concluding that danger is absent, from overlooking unlikely but potentially major events, and from lacking means of remediation. The other arises from prematurely concluding that danger is imminent, from testing for or remediating low-risk problems, and from banning activities whose net effect is desirable. Regulators must try in earnest to balance the two types of societal risks. In practice, allegiance to the precautionary principle precludes such balancing. It has resulted in: (a) the establishment of unscientific, discriminatory policies that inhibit innovation and scientific research; (b) extravagant corporate and governmental expenditures; and (c) undue restrictions on consumer options.

Copyright © 2001 American Council On Science And Health