The Perils Of Precaution

Miller and Conko Article Published in Policy Review

Environmental and public health activists have clashed with scholars and risk-analysis professionals for decades over the appropriate regulation of various risks, including those from consumer products and manufacturing processes. Underlying the controversies about various specific issues — such as chlorinated water, pesticides, gene-spliced foods, and hormones in beef — has been a fundamental, almost philosophical question: How should regulators, acting as society’s surrogate, approach risk in the absence of certainty about the likelihood or magnitude of potential harm?

Proponents of a more risk-averse approach have advocated a “precautionary principle” to reduce risks and make our lives safer. There is no widely accepted definition of the principle, but in its most common formulation it holds that governments should implement regulatory measures to prevent or restrict actions that raise even conjectural threats of harm to human health or the environment, even when the scientific evidence about the significance of those dangers is incomplete. Use of the precautionary principle is sometimes represented as “erring on the side of safety,” or “better safe than sorry” — the idea being that failing to regulate risky activities sufficiently could result in severe harm to human health or the environment, whereas “overregulation” causes little or no harm. Brandishing the precautionary principle, environmental groups have prevailed upon governments in recent decades to assail the chemical industry and, more recently, the food industry.

Potential risks should, of course, be taken into consideration before proceeding with any new activity or product, whether it is the siting of a power plant or the introduction of a new drug into the pharmacy. But the precautionary principle focuses solely on the possibility that technologies could pose unique, extreme, or unmanageable risks, even after considerable testing has already been conducted. What is missing from precautionary calculus is an acknowledgment that even when technologies introduce new risks, most confer net benefits — that is, their use reduces many other, often far more serious, hazards. Examples include blood transfusions, MRI scans, and automobile air bags, all of which offer immense benefits and only minimal risk.

Several subjective factors can cloud thinking about risks and influence how nonexperts view them. Studies of risk perception have shown that people tend to overestimate risks that are unfamiliar, hard to understand, invisible, involuntary, and/or potentially catastrophic — and vice versa. Thus, they overestimate invisible “threats” such as electromagnetic radiation and trace amounts of pesticides in foods, which inspire uncertainty and fear sometimes verging on superstition. Conversely, they tend to underestimate risks the nature of which they consider to be clear and comprehensible, such as using a chain saw or riding a motorcycle.

These distorted perceptions complicate the regulation of risk, for if democracy must eventually take public opinion into account, good government must also discount heuristic errors and prejudices. Edmund Burke emphasized government’s pivotal role in making such judgments: “Your Representative owes you, not only his industry, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion.” Government leaders should lead; to put it another way, government officials should make decisions that are rational and in the public interest even if they are unpopular at the time. This is especially true when, as is the case for most federal and state regulators, they are granted what amounts to lifetime job tenure precisely in order to shield them from political manipulation or retaliation. Yet in too many cases, the precautionary principle has led regulators to abandon the careful balancing of risks and benefits — that is, to make decisions, in the name of precaution, that cost real lives due to forgone benefits.

The danger of precaution

The danger in the precautionary principle is that it distracts consumers and policymakers from known, significant threats to human health and diverts limited public health resources from those genuine and far greater risks. Consider, for example, the environmental movement’s campaign to rid society of chlorinated compounds.

By the late 1980s, environmental activists were attempting to convince water authorities around the world of the possibility that carcinogenic byproducts from chlorination of drinking water posed a potential cancer risk. Peruvian officials, caught in a budget crisis, used this supposed threat to public health as a justification to stop chlorinating much of the country’s drinking water. That decision contributed to the acceleration and spread of Latin America’s 1991-96 cholera epidemic, which afflicted more than 1.3 million people and killed at least 11,000.

Activists have since extended their antichlorine campaign to so-called “endocrine disrupters,” or modulators, asserting that certain primarily man-made chemicals mimic or interfere with human hormones (especially estrogens) in the body and thereby cause a range of abnormalities and diseases related to the endocrine system.

The American Council on Science and Health has explored the endocrine disrupter hypothesis and found that while high doses of certain environmental contaminants produce toxic effects in laboratory test animals — in some cases involving the endocrine system — humans’ actual exposure to these suspected endocrine modulators is many orders of magnitude lower. It is well documented that while a chemical administered at high doses may cause cancer in certain laboratory animals, it does not necessarily cause cancer in humans — both because of different susceptibilities and because humans are subjected to far lower exposures to synthetic environmental chemicals.

No consistent, convincing association has been demonstrated between real-world exposures to synthetic chemicals in the environment and increased cancer in hormonally sensitive human tissues. Moreover, humans are routinely exposed through their diet to many estrogenic substances (substances having an effect similar to that of the human hormone estrogen) found in many plants. Dietary exposures to these plant estrogens, or phytoestrogens, are far greater than exposures to supposed synthetic endocrine modulators, and no adverse health effects have been associated with the overwhelming majority of these dietary exposures.

Furthermore, there is currently a trend toward lower concentrations of many contaminants in air, water, and soil — including several that are suspected of being endocrine disrupters. Some of the key research findings that originally stimulated the endocrine disrupter hypothesis have been retracted or are not reproducible. The available human epidemiological data do not show any consistent, convincing evidence of negative health effects related to industrial chemicals suspected of disrupting the endocrine system. In spite of that, activists and many government regulators continue to invoke the need for precautionary (over-)regulation of various products, and even for outright bans.

Antichlorine campaigners more recently have turned their attacks to phthalates, liquid organic compounds added to certain plastics to make them softer. These soft plastics are used for important medical devices, particularly fluid containers, blood bags, tubing, and gloves; children’s toys such as teething rings and rattles; and household and industrial items such as wire coating and flooring. Waving the banner of the precautionary principle, activists claim that phthalates might have numerous adverse health effects — even in the face of significant scientific evidence to the contrary. Governments have taken these unsupported claims seriously, and several formal and informal bans have been implemented around the world. As a result, consumers have been denied product choices, and doctors and their patients deprived of life-saving tools.

In addition to the loss of beneficial products, there are more indirect and subtle perils of government overregulation established in the name of the precautionary principle. Money spent on implementing and complying with regulation (justified or not) exerts an “income effect” that reflects the correlation between wealth and health, an issue popularized by the late political scientist Aaron Wildavsky. It is no coincidence, he argued, that richer societies have lower mortality rates than poorer ones. To deprive communities of wealth, therefore, is to enhance their risks.

Wildavsky’s argument is correct: Wealthier individuals are able to purchase better health care, enjoy more nutritious diets, and lead generally less stressful lives. Conversely, the deprivation of income itself has adverse health effects — for example an increased incidence of stress-related problems including ulcers, hypertension, heart attacks, depression, and suicides.

It is difficult to quantify precisely the relationship between mortality and the deprivation of income, but academic studies suggest, as a conservative estimate, that every $7.25 million of regulatory costs will induce one additional fatality through this “income effect.” The excess costs in the tens of billions of dollars required annually by precautionary regulation for various classes of consumer products would, therefore, be expected to cause thousands of deaths per year. These are the real costs of “erring on the side of safety.” The expression “regulatory overkill” is not merely a figure of speech.
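The arithmetic is simple to check. The sketch below pairs the $7.25 million estimate cited above with an illustrative $20 billion in annual regulatory costs; the latter figure is a hypothetical stand-in for “tens of billions,” not a number taken from the studies themselves:

```python
# Back-of-the-envelope illustration of the "income effect" described above.
# The $7.25 million figure is the estimate cited in the text; the $20 billion
# annual cost is a hypothetical stand-in for "tens of billions of dollars."
COST_PER_STATISTICAL_DEATH = 7.25e6   # regulatory dollars per induced fatality
annual_regulatory_cost = 20e9         # dollars per year (illustrative assumption)

implied_fatalities = annual_regulatory_cost / COST_PER_STATISTICAL_DEATH
print(f"Implied additional fatalities per year: {implied_fatalities:,.0f}")
# -> Implied additional fatalities per year: 2,759
```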

Rationalizing precaution

During the past few years, skeptics have begun more actively to question the theory and practice of the precautionary principle. In response to those challenges, the European Commission (EC), a prominent advocate of the precautionary principle, last year published a formal communication to clarify and to promote the legitimacy of the concept. The EC resolved that, under its auspices, precautionary restrictions would be “proportional to the chosen level of protection,” “non-discriminatory in their application,” and “consistent with other similar measures.” The commission also avowed that EC decision makers would carefully weigh “potential benefits and costs.” EC Health Commissioner David Byrne, repeating these points last year in an article on food and agriculture regulation in European Affairs, asked rhetorically, “How could a Commissioner for Health and Consumer Protection reject or ignore well-founded, independent scientific advice in relation to food safety?”

Byrne should answer his own question: The ongoing dispute between his European Commission and the United States and Canada over restrictions on hormone-treated beef cattle is exactly such a case of rejecting or ignoring well-founded research. The EC argued that the precautionary principle permits restriction of imports of U.S. and Canadian beef from cattle treated with certain growth hormones.

In their rulings, a WTO dispute resolution panel and its appellate board both acknowledged that the general “look before you leap” sense of the precautionary principle could be found within WTO agreements, but held that its presence did not relieve the European Commission of its obligation to base policy on the outcome of a scientific risk assessment. And the risk assessment clearly favored the U.S.-Canadian position. A scientific committee assembled by the WTO dispute resolution panel found that even the scientific studies cited by the EC in its own defense did not indicate a safety risk when the hormones in question were used in accordance with accepted animal husbandry practices. Thus, the WTO ruled in favor of the United States and Canada because the European Commission had failed to demonstrate a real or imminent harm. Nevertheless, the EC continues to enforce restrictions on hormone-treated beef, a blatantly unscientific and protectionist policy that belies the commission’s insistence that the precautionary principle will not be abused.

Precaution meets biotech

Perhaps the most egregious application by the European Commission of the precautionary principle is in its regulation of the products of the new biotechnology, or gene-splicing. By the early 1990s, many of the countries in Western Europe, as well as the EC itself, had erected strict rules regarding the testing and commercialization of gene-spliced crop plants. In 1999, the European Commission explicitly invoked the precautionary principle in establishing a moratorium on the approval of all new gene-spliced crop varieties, pending adoption of an even stricter EU-wide regulation.

Notwithstanding the EC’s promises that the precautionary principle would not be abused, all of the stipulations enumerated by the commission have been flagrantly ignored or tortured in its regulatory approach to gene-spliced (or, in their argot, “genetically modified” or “GM”) foods. Rules for gene-spliced plants and microorganisms are inconsistent, discriminatory, and bear no proportionality to risk. In fact, there is arguably an inverse proportionality to risk, in that the more crudely crafted organisms of the old days of mutagenesis and gene transfers are subject to less stringent regulation than organisms more precisely crafted with biotech. This amounts to a violation of a cardinal principle of regulation: that the degree of regulatory scrutiny should be commensurate with risk.

Dozens of scientific bodies — including the U.S. National Academy of Sciences (NAS), the American Medical Association, the UK’s Royal Society, and the World Health Organization — have analyzed the oversight that is appropriate for gene-spliced organisms and arrived at remarkably congruent conclusions: The newer molecular techniques for genetic improvement are an extension, or refinement, of earlier, far less precise ones; adding genes to plants or microorganisms does not make them less safe either to the environment or to eat; the risks associated with gene-spliced organisms are the same in kind as those associated with conventionally modified organisms and unmodified ones; and regulation should be based upon the risk-related characteristics of individual products, regardless of the techniques used in their development.

An authoritative 1989 analysis of modern gene-splicing techniques published by the NAS’s research arm, the National Research Council, concluded that “the same physical and biological laws govern the response of organisms modified by modern molecular and cellular methods and those produced by classical methods,” but it went on to observe that gene-splicing is more precise, circumscribed, and predictable than other techniques:

[Gene-splicing] methodology makes it possible to introduce pieces of DNA, consisting of either single or multiple genes, that can be defined in function and even in nucleotide sequence. With classical techniques of gene transfer, a variable number of genes can be transferred, the number depending on the mechanism of transfer; but predicting the precise number or the traits that have been transferred is difficult, and we cannot always predict the [characteristics] that will result. With organisms modified by molecular methods, we are in a better, if not perfect, position to predict the [characteristics].

In other words, gene-splicing technology is a refinement of older, less precise techniques, and its use generates less uncertainty. But for gene-spliced plants, both the fact and degree of regulation are determined by the production methods — that is, if gene-splicing techniques have been used, the plant is immediately subject to extraordinary pre-market testing requirements for human health and environmental safety, regardless of the level of risk posed. Throughout most of the world, gene-spliced crop plants such as insect-resistant corn and cotton are subject to a lengthy and hugely expensive process of mandatory testing before they can be brought to market, while plants with similar properties but developed with older, less precise genetic techniques are exempt from such requirements.

Dozens of new plant varieties produced through hybridization and other traditional methods of genetic improvement enter the marketplace each year without any scientific review or special labeling. Many such products are from “wide crosses,” hybridizations in which large numbers of genes are moved from one species or one genus to another to create a plant variety that does not and cannot exist in nature. For example, Triticum agropyrotriticum is a relatively new man-made “species” which resulted from combining genes from bread wheat and a grass sometimes called quackgrass or couchgrass. Possessing all the chromosomes of wheat and one extra whole genome from the quackgrass, T. agropyrotriticum has been independently produced in the former Soviet Union, Canada, the United States, France, Germany, and China. It is grown for both animal feed and human consumption.

At least in theory, several kinds of problems could result from such a genetic construction, one that introduces tens of thousands of foreign genes into an established plant variety. These include the potential for increased invasiveness of the plant in the field, and the possibility that quackgrass-derived proteins could be toxic or allergenic. But regulators have evinced no concern about these possibilities. Instead, they have concentrated on the use of gene-splicing techniques as such — the very techniques that scientists agree have improved precision and predictability.

Another striking example of the disproportionate regulatory burden borne only by gene-spliced plants involves a process called induced-mutation breeding, which has been in common use since the 1950s. This technique involves exposing crop plants to ionizing radiation or toxic chemicals to induce random genetic mutations. These treatments most often kill the plants (or seeds) or cause detrimental genetic changes, but on rare occasions, the result is a desirable mutation — for example, one producing a new trait in the plant that is agronomically useful, such as altered height, more seeds, or larger fruit. In these cases, breeders have no real knowledge of the exact nature of the genetic mutation(s) that produced the useful trait, or of what other mutations might have occurred in the plant. Yet the approximately 1,400 mutation-bred plant varieties from a range of different species that have been marketed over the past half century have been subject to no formal regulation before reaching the market — even though several, including two varieties of squash and one of potato, have contained dangerous levels of endogenous toxins and had to be banned afterward.

What does this regulatory inconsistency mean in practice? If a student doing a school biology project takes a packet of “conventional” tomato or pea seeds to be irradiated at the local hospital X-ray suite and plants them in his backyard in order to investigate interesting mutants, he need not seek approval from any local, national, or international authority. However, if the seeds have been modified by the addition of one or a few genes via gene-splicing techniques — and even if the genetic change is merely to remove a gene — this would-be Mendel faces a mountain of bureaucratic paperwork and expense (to say nothing of the very real possibility of vandalism, since the site of the experiment must be publicized and some opponents of biotech are believers in “direct action”). The same would apply, of course, to professional agricultural scientists in industry and academia. In the United States, Department of Agriculture requirements for paperwork and field trial design make field trials with gene-spliced organisms 10 to 20 times more expensive than the same experiments with virtually identical organisms that have been modified with conventional genetic techniques.

Why are new genetic constructions crafted with these older techniques exempt from regulation, from the dirt to the dinner plate? Why don’t regulatory regimes require that new genetic variants made with older techniques be evaluated for increased weediness or invasiveness, or for new allergens that could show up in food? The answer is based on millennia of experience with genetically improved crop plants from the era before gene-splicing: Even the use of relatively crude and unpredictable genetic techniques for the improvement of crops and microorganisms poses minimal — but, as noted above, not zero — risk to human health and the environment.

If the proponents of the precautionary principle applied it rationally and fairly, surely greater precaution would attach not to gene-splicing but to the cruder, less precise, less predictable “conventional” forms of genetic modification. Furthermore, in spite of the assurances of the European Commission and other advocates of the precautionary principle, regulators of gene-spliced products seldom take into consideration the potential risk-reducing benefits of the technology. For example, some of the most successful of the gene-spliced crops, especially cotton and corn, have been constructed by splicing in a bacterial gene that produces a protein toxic to certain insect pests, but not to people or other mammals. Not only do these gene-spliced corn varieties repel pests, but grain obtained from them is less likely to contain Fusarium, a toxic fungus often carried into the plants by the insects. That, in turn, significantly reduces the levels of the fungal toxin fumonisin, which is known to cause fatal diseases in horses and swine that eat infected corn, and esophageal cancer in humans. When harvested, these gene-spliced varieties of grain also end up with lower concentrations of insect parts than conventional varieties. Thus, gene-spliced corn is not only cheaper to produce but yields a higher-quality product and is a potential boon to public health. Moreover, by reducing the need for spraying chemical pesticides on crops, it is environmentally friendly.

Other products, such as gene-spliced herbicide-resistant crops, have permitted farmers to reduce their herbicide use and to adopt more environment-friendly no-till farming practices. Crops now in development with improved yields would allow more food to be grown on less acreage, saving more land area for wildlife or other uses. And recently developed plant varieties with enhanced levels of vitamins, minerals, and dietary proteins could dramatically improve the health of hundreds of millions of malnourished people in developing countries. These are the kinds of tangible environmental and health benefits that invariably are given little or no weight in precautionary risk calculations.

In spite of incontrovertible benefits and greater predictability and safety of gene-spliced plants and foods, regulatory agencies have regulated them in a discriminatory, unnecessarily burdensome way. They have imposed requirements that could not possibly be met for conventionally bred crop plants. And, as the European Commission’s moratorium on new product approvals demonstrates, even when that extraordinary burden of proof is met via monumental amounts of testing and evaluation, regulators frequently declare themselves unsatisfied.

Biased decision making

While the European Union is a prominent practitioner of the precautionary principle on issues ranging from toxic substances and the new biotechnology to climate change and gun control, U.S. regulatory agencies also commonly practice excessively precautionary regulation. The precise term of art “precautionary principle” is not used in U.S. public policy, but the regulation of such products as pharmaceuticals, food additives, gene-spliced plants and microorganisms, synthetic pesticides, and other chemicals is without question “precautionary” in nature. U.S. regulators actually appear to be more precautionary than the Europeans toward several kinds of risks, including the licensing of new medicines, lead in gasoline, and nuclear power. They have also been highly precautionary toward gene-splicing, although not to the extremes of their European counterparts. The main difference between precautionary regulation in the United States and the use of the precautionary principle in Europe is largely a matter of degree — with reference to products, technologies, and activities — and of semantics.

In both the United States and Europe, public health and environmental regulations usually require a risk assessment to determine the extent of potential hazards and of exposure to them, followed by judgments about how to regulate. The precautionary principle can distort this process by introducing a systematic bias into decision making. Regulators face an asymmetrical incentive structure in which they are compelled to address the potential harms from new products, but are free to discount the hidden risk-reducing properties of unused or underused ones. The result is a lopsided process that is inherently biased against change and therefore against innovation.

To see why, one must understand that there are two basic kinds of mistaken decisions that a regulator can make: First, a harmful product can be approved for marketing — called a Type I error in the parlance of risk analysis. Second, a useful product can be rejected or delayed, can fail to achieve approval at all, or can be inappropriately withdrawn from the market — a Type II error. In other words, a regulator commits a Type I error by permitting something harmful to happen and a Type II error by preventing something beneficial from becoming available. Both situations have negative consequences for the public, but the outcomes for the regulator are very different.

Examples of this Type I-Type II error dichotomy abound in both the U.S. and Europe, but it is perhaps illustrated most clearly in the FDA’s approval process for new drugs. A classic example is the FDA’s approval in 1976 of the swine flu vaccine — generally perceived as a Type I error because while the vaccine was effective at preventing influenza, it had a major side effect that was unknown at the time of approval: A small number of patients suffered temporary paralysis from Guillain-Barré Syndrome. This kind of mistake is highly visible and has immediate consequences: The media pounce, the public is roused, and Congress takes up the matter. Both the developers of the product and the regulators who allowed it to be marketed are excoriated and punished in such modern-day pillories as congressional hearings, television newsmagazines, and newspaper editorials. Because a regulatory official’s career might be damaged irreparably by his good-faith but mistaken approval of a high-profile product, decisions are often made defensively — in other words, above all to avoid Type I errors.
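The incentive asymmetry can be made concrete with a little decision arithmetic. The sketch below is purely illustrative; the penalty figures are hypothetical, chosen only to reflect the point made above that a Type I error costs the regulator far more, personally, than a Type II error:

```python
# A minimal sketch of the regulator's asymmetric incentives described above.
# All numbers are hypothetical illustrations, not empirical estimates.

# Career penalty to the regulator, on an arbitrary common scale:
PENALTY_TYPE_I = 10.0   # approved a product that turned out harmful (visible, punished)
PENALTY_TYPE_II = 0.1   # blocked a product that was beneficial (largely invisible)

def regulator_decision(p_harmful: float) -> str:
    """Approve only if the expected personal penalty from approving
    (risking a Type I error) is lower than that from rejecting
    (a Type II error whenever the product was in fact beneficial)."""
    expected_if_approve = p_harmful * PENALTY_TYPE_I
    expected_if_reject = (1.0 - p_harmful) * PENALTY_TYPE_II
    return "approve" if expected_if_approve < expected_if_reject else "reject"

# Even a product with only a 5 percent chance of being harmful gets rejected:
print(regulator_decision(0.05))   # -> reject
# With these penalties, approval requires the chance of harm to fall below
# about 1 percent: 10p < 0.1(1 - p) implies p < 1/101.
```

Because the public’s losses from the two error types can be comparable, an official who minimizes his own expected penalty will systematically depart from the public interest.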

Former FDA Commissioner Alexander Schmidt aptly summarized the regulator’s dilemma:

In all our FDA history, we are unable to find a single instance where a Congressional committee investigated the failure of FDA to approve a new drug. But, the times when hearings have been held to criticize our approval of a new drug have been so frequent that we have not been able to count them. The message to FDA staff could not be clearer. Whenever a controversy over a new drug is resolved by approval of the drug, the agency and the individuals involved likely will be investigated. Whenever such a drug is disapproved, no inquiry will be made. The Congressional pressure for negative action is, therefore, intense. And it seems to be ever increasing.

Type II errors in the form of excessive governmental requirements and unreasonable decisions can cause a new product to be “disapproved,” in Schmidt’s phrase, or to have its approval delayed. Unnecessary or capricious delays are anathema to innovators, and they lessen competition and inflate the ultimate price of the product. Consider the FDA’s precipitate response to the 1999 death of a patient in a University of Pennsylvania gene therapy trial for a genetic disease. The cause of the incident had not been identified, and the product class (a preparation of the needed gene, encased in an enfeebled adenovirus that would then be administered to the patient) had been used in a large number of patients, with no fatalities and with serious side effects in only a small percentage of cases. But given the high profile of the incident, regulators acted disproportionately. They not only stopped the trial in which the fatality occurred and all the other gene-therapy studies at the same university, but also halted similar studies at other universities, as well as experiments using adenovirus being conducted by the drug company Schering-Plough — one for the treatment of liver cancer, the other for colorectal cancer that had metastasized to the liver. By these actions, and by publicly excoriating and humiliating the researchers involved (and halting experiments of theirs that did not even involve adenovirus), the FDA cast a pall over the entire field of gene therapy, setting it back perhaps as much as a decade.

Although they can dramatically compromise public health, Type II errors caused by a regulator’s bad judgment, timidity, or anxiety seldom gain public attention. It may be only the employees of the company that makes the product and a few stock market analysts and investors who are knowledgeable about unnecessary delays. And if the regulator’s mistake precipitates a corporate decision to abandon the product, cause and effect are seldom connected in the public mind. Naturally, the companies themselves are loath to complain publicly about a mistaken FDA judgment, because the agency has so much discretionary control over their ability to test and market products. As a consequence, there may be no direct evidence of, or publicity about, the lost societal benefits, to say nothing of the culpability of regulatory officials.

Exceptions exist, of course. A few activists, such as the AIDS advocacy groups that closely monitor the FDA, scrutinize agency review of certain products and aggressively publicize Type II errors. In addition, congressional oversight should provide a check on regulators’ performance, but as former FDA Commissioner Schmidt noted above, only rarely does oversight focus on Type II errors. Type I errors make for more dramatic hearings, after all, complete with injured patients and their family members. And even when such mistakes are exposed, regulators frequently defend Type II errors as erring on the side of caution — in effect, invoking the precautionary principle — as they did in the wake of the University of Pennsylvania gene therapy case. Too often this euphemism is accepted uncritically by legislators, the media, and the public, and our system of pharmaceutical oversight becomes progressively less responsive to the public interest.

The FDA is not unique in this regard, of course. All regulatory agencies are subject to the same sorts of social and political pressures that cause them to be castigated when dangerous products accidentally make it to market (even if, as is often the case, those products produce net benefits) but to escape blame when they keep beneficial products out of the hands of consumers. Adding the precautionary principle’s bias against new products into the public policy mix further encourages regulators to commit Type II errors in their frenzy to avoid Type I errors. This is hardly conducive to enhancing overall public safety.

Extreme precaution

For some antitechnology activists who push the precautionary principle, the deeper issue is not really safety at all. Many are more antibusiness and antitechnology than they are pro-safety. And in their mission to oppose business interests and disparage technologies they don’t like or that they have decided we just don’t need, they are willing to seize any opportunity that presents itself.

These activists consistently (and intentionally) confuse plausibility with provability. Consider, for example, Our Stolen Future, the bible of the proponents of the endocrine disrupter hypothesis discussed above. The book’s premise — that estrogen-like synthetic chemicals damage health in a number of ways — is not supported by scientific data. Much of the research offered as evidence for its arguments has been discredited. The authors equivocate wildly: “Those exposed prenatally to endocrine-disrupting chemicals may have abnormal hormone levels as adults, and they could also pass on persistent chemicals they themselves have inherited — both factors that could influence the development of their own children [emphasis added].” The authors also assume, in the absence of any actual evidence, that exposures to small amounts of many chemicals create a synergistic effect — that is, that total exposure constitutes a kind of witches’ brew that is far more toxic than the sum of the parts. For these anti-innovation ideologues, the mere fact that such questions have been asked requires that inventors or producers expend time and resources answering them. Meanwhile, the critics move on to yet another frightening plausibility and still more questions. No matter how outlandish the claim, the burden of proof is put on the innovator.

Whether the issue is environmental chemicals, nuclear power, or gene-spliced plants, many activists are motivated by their own parochial vision of what constitutes a “good society” and how to achieve it. One prominent biotechnology critic at the Union of Concerned Scientists rationalizes her organization’s opposition to gene-splicing as follows: “Industrialized countries have few genuine needs for innovative food stuffs, regardless of the method by which they are produced”; therefore, society should not squander resources on developing them. She concludes that although “the malnourished homeless” are, indeed, a problem, the solution lies “in resolving income disparities, and educating ourselves to make better choices from among the abundant foods that are available.”

Greenpeace, one of the principal advocates of the precautionary principle, offered in its 1999 IRS filings the organization’s view of the role in society of safer, more nutritious, higher-yielding, environment-friendly, gene-spliced plants: There isn’t any. By its own admission, Greenpeace’s goal is not the prudent, safe use of gene-spliced foods or even their mandatory labeling, but rather these products’ “complete elimination [from] the food supply and the environment.” Many such groups, Greenpeace among them, do not stop at demanding illogical and stultifying regulation or outright bans on product testing and commercialization; they advocate and carry out vandalism of the very field trials intended to answer questions about environmental safety.

Such tortured logic and arrogance illustrate that the metastasis of the precautionary principle generally, as well as the pseudocontroversies over the testing and use of gene-spliced organisms in particular, stem from a social vision that is not just strongly antitechnology, but one that poses serious challenges to academic, commercial, and individual freedom.

The precautionary principle shifts decision-making power away from individuals and into the hands of government bureaucrats and environmental activists. Indeed, that is one of its attractions for many NGOs. Carolyn Raffensperger, executive director of the Science and Environmental Health Network, a consortium of radical groups, asserts that discretion to apply the precautionary principle “is in the hands of the people.” According to her, this devolution of power is illustrated by violent demonstrations against economic globalization such as those in Seattle at the 1999 meeting of the World Trade Organization. “This is [about] how they want to live their lives,” Raffensperger said.

To be more precise, it is about how small numbers of vocal activists want the rest of us to live our lives. In other words, the issue here is freedom and its infringement by ideologues who disapprove, on principle, of a certain technology, or product, or economic system.

The theme underlying the antitechnology activism of today is not new. It resonates with historian Richard Hofstadter’s classic analysis, nearly four decades ago, of religious and political movements in American public policy, The Paranoid Style in American Politics. Hofstadter summarized those activists’ paranoia this way: “The central image is that of a vast and sinister conspiracy, a gigantic and yet subtle machinery of influence set in motion to undermine and destroy a way of life.” He went on to note a characteristic “leap in imagination that is always made at some critical point in the recital of events.” Susanne Huttner, associate vice provost for research of the University of California system, has placed biotechnology critics squarely in Hofstadter’s sights. Viewed through the lens of Hofstadter’s paranoid style, she observes, the “conspiracy” here lies in large-scale agriculture performed with twenty-first century technology, and the “leap in imagination” lies in the assertion that biotechnology is at base bad for agriculture, farmers, and developing nations.

But can these generalizations apply to all biotechnologies? What about veterinary diagnostics and vaccines? Plants resistant to disease, insects, and drought? Grains with enhanced nutrient content? Fruits that act as vaccines and can immunize inhabitants of developing countries against lethal and hugely prevalent infectious diseases?

Precaution v. freedom

History offers compelling reasons to be cautious about societal risks, to be sure. These include the risk of incorrectly assuming the absence of danger (false negatives), the tendency of risk assessments to overlook low-probability but high-impact events, the danger of long latency periods before problems become apparent, and the possible lack of remediation methods once harm occurs. Conversely, there are compelling reasons to be wary of excessive precaution, including the risk of too eagerly detecting a nonexistent danger (false positives), the financial cost of testing for or remediating low-risk problems, the opportunity costs of forgoing net-beneficial activities, and the availability of contingency regimes in case of an adverse event. The challenge for regulators is to balance these competing risk scenarios in a way that reduces overall harm to public health. This kind of risk balancing is often conspicuously absent from precautionary regulation.
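In contrast to the regulator’s private calculus sketched earlier, the balancing described here would compare the expected overall public harm of each decision, counting forgone benefits as harms. A minimal sketch, again with hypothetical numbers:

```python
# A minimal sketch of risk-risk balancing: weigh the expected public harm of
# approving a product against the expected harm of blocking it (the forgone
# benefits). All figures are hypothetical and share an arbitrary common scale.

def expected_public_harm(p_harmful: float,
                         harm_if_harmful: float,
                         forgone_benefit: float) -> dict:
    """Expected harm to the public under each possible decision."""
    return {
        "approve": p_harmful * harm_if_harmful,
        "reject": (1.0 - p_harmful) * forgone_benefit,  # benefit lost if it was safe
    }

# Illustrative case: a 5 percent chance of a harm of size 100, versus a
# certain forgone benefit of size 20 if a safe product is kept off the market.
harms = expected_public_harm(p_harmful=0.05, harm_if_harmful=100.0, forgone_benefit=20.0)
print(harms)   # {'approve': 5.0, 'reject': 19.0}: rejection is the greater expected harm
```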

It is also important that regulators take into consideration the degree of restraint generally imposed by society on individuals’ and companies’ freedom to perform legitimate activities (e.g., scientific research). In Western democratic societies, we enjoy long traditions of relatively unfettered scientific research and development, except in the very few cases where bona fide safety issues are raised. Traditionally, we shrink from permitting small, authoritarian minorities to dictate our social agenda, including what kinds of research are permissible and which technologies and products should be available in the marketplace.

Application of the precautionary principle has already elicited unscientific, discriminatory policies that inflate the costs of research, inhibit the development of new products, divert and waste resources, and restrict consumer choice. The excessive and wrong-headed regulation of the new biotechnology is one particularly egregious example. Further encroachment of precautionary regulation into other areas of domestic and international health and safety standards will create a kind of “open sesame” that government officials could invoke whenever they wish arbitrarily to introduce new barriers to trade, or simply to yield disingenuously to the demands of antitechnology activists. Those of us who both value the freedom to perform legitimate research and believe in the wisdom of market processes must not permit extremists acting in the name of “precaution” to dictate the terms of the debate.

Copyright © 2001 Policy Review