Are Risk Assessment and the Precautionary Principle Equivalent?
By Andrew Apel
Editor, AgBiotech Reporter
Cedar Falls, Iowa
EDITOR’S NOTE: The following paper was prepared for the June 20-21, 2002 International Society of Regulatory Toxicology and Pharmacology workshop on the Precautionary Principle, held in Arlington, Virginia. The paper is reproduced here with the author's permission.
I’m the editor of AgBiotech Reporter, a business publication focusing on biotechnology in agriculture. As many of you know, disputes over the application of the precautionary principle in this field have escalated to the point of violence.
The question I’ve been posed is whether the precautionary principle is equivalent to risk assessment. My answer is: No. To explain that answer, I’m going to argue that risk assessment plays a central role in the imperative to maximize benefits, while the precautionary principle violates it. To do that, I’ll borrow from classical risk analysis, quality management and ethics.
In a consultation on risk communication by the United Nations FAO and WHO, ‘risk assessment’ was defined as “the process that is used to quantitatively or qualitatively estimate and characterize risk.”
This raises a big question: why bother? Humans are naturally risk-averse. Once a risk is identified, why not avoid it altogether? However, that would ignore a basic feature of the human approach to risk. There is only one good reason to take a risk, and that’s in order to achieve a benefit.
Do we then blindly accept there are risks and grasp unthinkingly for a benefit? No, because we’re still risk-averse. Once a risk is assessed, it must be managed. Depending on circumstances, anything less than managing risk could be legally negligent, morally wrong or just plain stupid. In the consultation I quoted from above, ‘risk management’ was defined as “the weighing and selecting of options and implementing controls as appropriate to assure an appropriate level of protection.”
After we’ve assessed a risk and placed it under control, what have we done? Something both remarkable and responsible. We’ve improved the value of a product by reducing the risks associated with its benefits.
Anyone familiar with the role of quality improvement or quality management can probably see where I’m going with this. Risks differ from defects only in their degree of severity, and risk assessment and risk management are product quality issues.
After all, what is a risk? The dictionary defines it as “A factor, thing, element, or course involving uncertain danger; a hazard.” What is a defect? The dictionary defines it as “An imperfection that causes inadequacy or failure; a shortcoming.” What is quality? According to W. Edwards Deming, “Quality is a predictable degree of uniformity and dependability, at low cost and suited to the market.”
So in risk assessment and risk management, I submit we’re doing the same thing that quality professionals do, for the same purposes. The fundamentals of quality as a discipline are attributed to two people: Walter Shewhart, a statistician at the Western Electric Hawthorne plant in the 1930s, and his protégé, W. Edwards Deming. In the system Shewhart eventually developed, defects could be reduced by applying a PLAN-DO-CHECK-ACT cycle. To reduce defects, the quality analyst must PLAN (decide what action might reduce defects), DO (try out the idea), CHECK (determine whether the idea was effective in reducing defects), and finally ACT (implement the idea).
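The Shewhart cycle lends itself to a simple sketch in code. The following is a minimal, illustrative Python version, in which a process is modeled as a function that either produces a defect or doesn't; the names `pdca_cycle`, `current` and `proposed`, and the defect rates, are hypothetical, not drawn from the quality literature:

```python
import random

def pdca_cycle(process, change, trial_runs=1000, seed=42):
    """One PLAN-DO-CHECK-ACT pass: trial a proposed change and
    adopt it only if it measurably reduces the defect count."""
    # PLAN: the caller has proposed `change`, a modified process.
    # DO: trial both processes against the same random conditions.
    rng = random.Random(seed)
    baseline_defects = sum(process(rng) for _ in range(trial_runs))
    rng = random.Random(seed)
    trial_defects = sum(change(rng) for _ in range(trial_runs))
    # CHECK: did the change actually reduce defects?
    improved = trial_defects < baseline_defects
    # ACT: keep the change only if it proved effective.
    return (change if improved else process), baseline_defects, trial_defects

# Hypothetical processes: each returns True when a run yields a defect.
current = lambda rng: rng.random() < 0.10   # assumed 10% defect rate
proposed = lambda rng: rng.random() < 0.04  # assumed 4% defect rate

adopted, before, after = pdca_cycle(current, proposed)
print(adopted is proposed, before, after)
```

The point of the sketch is the gate between CHECK and ACT: the idea is implemented only after the trial shows it reduced defects, which is exactly the discipline the paper attributes to risk management.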
Some defects may be more dangerous to consumers than others, but whether it’s risk assessment and risk management, or plan-do-check-act, the whole point of the process is to maximize benefits.
Why maximize benefits? The answer is found in the definition of the word ‘benefit’: “Something that promotes or enhances well-being; an advantage.” Benefits being synonymous with good, it’s apparent that there’s an imperative behind maximizing benefits. If new or improved products or services can offer new or improved benefits, then they should be offered to those who want them and, most especially, to those who need them. If the risks of those beneficial products or services can be reduced, then they should be reduced, and both efforts require innovation. This notion has been nicknamed the “technological imperative” and criticized as turning Immanuel Kant’s old dictum, “Ought implies can,” on its head, making it “Can implies ought.” That’s not a sound criticism, because the dictum is perfectly capable of standing on its head. In fact, it does. We’re supposed to maximize our mutual well-being, and this has corollaries in law, religion and ethics. When a discovery discloses a way to increase benefits or reduce the risks of achieving them, a choice must be made. To innovate, or not to innovate: that is the question.
Innovation means novel technology. Sometimes that technology will reduce risks. Sometimes it will offer novel benefits. Ideally, it will do both. The value of delivering novel benefits with novel technology is almost obvious by definition, but it’s also been quantified. In his speech accepting the 1987 Nobel prize in economics, Robert Solow noted that “[g]ross output per hour of work in the U.S. economy doubled between 1909 and 1949,” and that some seven-eighths of that increase could be attributed to “technical change in the broadest sense,” with only the remaining eighth attributable to a conventional increase in capital intensity. “Thus technology remains the dominant engine of growth, with human capital investment in second place.”
There are also risks in novel technology, and they represent novel risks, risks that the imperative to maximize benefits requires us to assess and manage.
But that raises another big question: how do we know something is novel? There’s no need to go to the dictionary for the answer. When something is novel, that means it’s different, compared to what is familiar. The closer the comparison, the less novel it will be. There’s probably an epistemological mandate to find the closest comparison, but I won’t go into that now.
Anyhow, the OECD summed up the notion of novelty nicely in what is known as the doctrine of substantial equivalence: “For foods and food components from organisms developed by the application of modern biotechnology, the most practical approach to the determination is to consider whether they are substantially equivalent to analogous food product(s) if such exist….The concept of substantial equivalence embodies the idea that existing organisms used as foods, or as a source of food, can be used as the basis for comparison when assessing the safety of human consumption of a food or food component that has been modified or is new.”
When it comes to novel risks, things aren’t entirely as simple as this statement of the doctrine implies, not only because some things are more novel than others, but also because there are different kinds of “risks.” And here again, notions of quality converge with risk assessment and risk management in the process of maximizing benefits.
First, there are actual risks–as defined earlier, “A factor, thing, element, or course involving uncertain danger; a hazard.” Like defects, risks have names, and determinable frequencies of occurrence. The uncertainty can be quantified, such as the likelihood that you will die as you drive to the grocery store in search of food. Or that your car will die on the way, due to poor maintenance or a manufacturer’s defect. This is where you PLAN and ACT to reduce risks. You assess the risks and manage them. And you only bother doing this if you know in advance that there’s a benefit to be had.
Then there are hypothetical risks–sometimes called “potential risks.” These are not risks that have been shown to exist, but rather risks that may plausibly exist. Being hypotheses, they can be tested scientifically. This is where you check to see if the hypothetical risk exists. And act to reduce the risk, if it’s shown to exist. In other words, quality improvement. Or maximizing benefits. This is also the crux of innovation. It takes innovation to hypothesize a risk, verify it, and act to reduce it, just as it does to hypothesize the benefit of a new product, verify the benefit, and act to produce it.
In the final category are found speculative risks. Sometimes they are called “unknown risks,” or “unknown effects.” They exist in the region that lies beyond scientific knowledge or informed imagination. This is the point where the precautionary principle steps in. According to its Wingspread version, before using a new technology, process, or chemical, or starting a new activity, precautionary measures should be taken. The principle says these measures should be taken even if some cause-and-effect relationships are not fully established scientifically; the sole pretext required is novelty.
If a risk can’t be hypothesized, it can’t be tested. If it can’t be tested, it can’t be assessed. This means, of course, that the precautionary principle cannot be risk assessment; rather, it’s an assessment in the absence of any demonstrable or hypothetical risk. For this reason, the precautionary principle cannot serve the imperative to maximize benefits.
In fact, it violates the imperative to maximize benefits. The precautionary principle demands precautionary measures whenever unknown risks prove impossible to assess, which of course they always do. There is only one possible precaution to take against the unknown, unassessable risks of innovative benefits, and that is to refuse the benefits.
So, to recap: are the precautionary principle and risk assessment equivalent? No. Risk assessment is a fundamental part of improving quality, be it the quality of products or the quality of life, and plays a central role in the innovation required to maximize benefits. The only virtue of the precautionary principle is the avoidance of risks that are impossible to assess. Its vice is that these risks, which may not even exist, can only be avoided by refusing to improve quality, be it product quality or the quality of life.
Finally, consider the reasonableness of the reverse of the Wingspread version of the precautionary principle: “When an activity has the potential to benefit human health or the environment, it should be implemented with due caution, knowing that some cause and effect relationships cannot be fully established scientifically. In this context the opponent of the activity, rather than the public, should bear the burden of proof.” Doesn’t that sound more reasonable?
The Precautionary Principle (“Wingspread Statement”)
When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof.
The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. [End of statement.]
Thus, as formulated here, the principle of precautionary action has four parts:
1. People have a duty to take anticipatory action to prevent harm. (As one participant at the Wingspread meeting summarized the essence of the precautionary principle, “If you have a reasonable suspicion that something bad might be going to happen, you have an obligation to try to stop it.”)
2. The burden of proof of harmlessness of a new technology, process, activity, or chemical lies with the proponents, not with the general public.
3. Before using a new technology, process, or chemical, or starting a new activity, people have an obligation to examine “a full range of alternatives” including the alternative of doing nothing.
4. Decisions applying the precautionary principle must be “open, informed, and democratic” and “must include affected parties.”
Risk versus Uncertainty
Risk and uncertainty are often used interchangeably in casual discussion, but they have very different technical meanings. Risk is defined as the variation in potential outcomes to which an associated probability can be assigned. In statistical terms, the distribution of the variable is known, but not the value from the distribution which will be realized. In poker terms, you know the probability of being dealt the ace of spades, but you do not know if the next card to be dealt is the ace of spades. In sharp contrast, uncertainty is a lack of knowledge concerning the distribution of the variable. Not only do you not know the next card to be dealt, but you may not know how many cards are in the deck, or how many of those cards are aces of spades.

In life, as in cards, risk is less of a problem than uncertainty. Because risk is associated with probability, risk can be accommodated through the purchase of insurance or hedging. For example, when a blackjack dealer is showing an ace, the other players may purchase insurance to protect against the dealer having twenty-one. Given the number of cards and the distribution of those cards, the likelihood the dealer has twenty-one can be calculated. Once it is calculated, the players know whether or not it is prudent to purchase insurance. Similarly, you do not know if you will be in an automobile accident next year, but because the probability of being in an accident is known, you can buy insurance to protect against that unfortunate outcome.

Uncertainty, on the other hand, is the lack of knowledge concerning the probability distribution of future events. This implies that insurance is unavailable to protect against negative outcomes. Therefore, it is essential that the analyst incorporate uncertainty into the cost-benefit analysis and that the decision maker incorporate uncertainty into the decision process.
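The blackjack insurance calculation mentioned above can be worked out exactly. A minimal sketch in Python, assuming a single fresh 52-card deck and the standard 2:1 insurance payout (the passage itself specifies neither):

```python
from fractions import Fraction

# Dealer shows an ace; 51 unseen cards remain, 16 of them
# ten-valued (10, J, Q, K in each of four suits). Insurance wins
# when the hole card is ten-valued.
p_blackjack = Fraction(16, 51)

# A 2:1 payout breaks even at a win probability of exactly 1/3,
# so insurance is prudent only when p_blackjack exceeds 1/3.
prudent = p_blackjack > Fraction(1, 3)

# 16/51 ≈ 0.3137, just under 1/3: in a fresh deck, insurance
# loses money on average, which is why the calculation matters.
print(p_blackjack, float(p_blackjack), prudent)
```

This is precisely the sense in which risk, unlike uncertainty, can be accommodated: the distribution is known, so the decision reduces to arithmetic.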
2. See Appendix A.
3. “[I]f a facility conducts GE crop research, yet won't disclose the locations to the public, then we must put the precautionary principle in practice and view all its crops as legitimate targets.” Communiqué from a group called ‘Reclaim the Seeds’ following an attack on a USDA research facility on May 21, 2000. See http://www.tao.ca/~ban/500ARrts.htm
4. FAO/WHO Expert Consultation on Risk Communication, http://www.fao.org/WAICENT/FAOINFO/ECONOMIC/ESN/riskcomm/riskcom2.htm#E9E3.
5. Conversation with Charles H. Freiburger, March 2002