In 1995, science writer Gary Taubes warned that the science of epidemiology (tracing the sources and causes of disease) was reaching a crisis point. In “Epidemiology Faces Its Limits” (Science, Jul. 14, 1995), Taubes argued that modern epidemiology was in danger of becoming a “pathological science” because it had devolved into a data-dredging exercise, mindlessly searching an ever-expanding pool of data for marginally significant associations unpredicted by any a priori hypothesis. For instance, researchers might discover by sifting through volumes of data on ovarian cancer that women who eat yogurt every day suffer the illness marginally more often than non-yogurt eaters do, and therefore decide that yogurt is a risk factor for cancer. The future did indeed seem bleak.
Yet traditional epidemiology had its stout defenders, and there were certain rules said to guarantee its sanctity. For instance, while Taubes argued that marginal relative risks were being touted as proof of causation when they could not possibly tell us anything certain, the medical journals did not accept that. (A relative risk is a number describing how much more likely someone with the risk factor is than the general population to suffer the relevant disease. Thus, if yogurt eaters suffer ovarian cancer twice as often as other women, their relative risk is said to be 2.) The New England Journal of Medicine (NEJM) told Taubes that, “As a general rule of thumb, we are looking at a relative risk of 3 or more [before accepting a paper for publication].” A former statistical consultant backed this up, but said, “If it's a 1.5 relative risk, and it's only one study and even a very good one, you scratch your chin and say maybe.”
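The relative-risk arithmetic described above can be sketched in a few lines of Python. The incidence figures below are invented purely for illustration, not taken from any study:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Ratio of disease incidence in the exposed group to that in the unexposed group."""
    exposed_rate = exposed_cases / exposed_total
    unexposed_rate = unexposed_cases / unexposed_total
    return exposed_rate / unexposed_rate

# Hypothetical figures: if 20 of 10,000 daily yogurt eaters develop ovarian
# cancer versus 10 of 10,000 non-eaters, the relative risk is 2.0 -- double
# the background rate, yet still below NEJM's "rule of thumb" threshold of 3.
print(relative_risk(20, 10_000, 10, 10_000))  # → 2.0
```

A relative risk of exactly 1.0 would mean the exposed group fares no differently from anyone else, which is why values only slightly above 1 invite the skepticism the journals describe.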
Yet in the years since Taubes' article, things have changed somewhat. For instance, a study published in March 2002 in NEJM's equally venerable competitor, the Journal of the American Medical Association, argued that air pollution was strongly associated with early death from lung cancer and cardiovascular disease. The study was extensive, yet the relative risk it reported was just 1.12, nowhere near even the “maybe” level of 1995. The assertion of strong association was breathless in its effrontery.
Now it seems that even the golden rules of epidemiology are being tarnished by the pathological desire to turn epidemiology into a giant blame game. John Brignell of the UK's Number Watch web site spotted something quite momentous in the back of a recent report by the British Medical Association into the supposed negative effects of smoking on human reproductive health.
In 1965, Sir Austin Bradford Hill CBE DSc FRCP, a giant in the field of epidemiology, published “The Environment and Disease: Association or Causation?” in the Proceedings of the Royal Society of Medicine. He set out rules for establishing cause and effect in medical research that have formed the gold standard for assessing causation ever since. The BMA appears to endorse these rules once again, but at the same time as reprinting them in its report, it has insidiously added riders that effectively reverse their intent.
Here are the rules with their modern riders, directly quoted from the BMA study's appendix A, with emphases added:
Strength of the association
Strong associations are more likely to be causal than weak ones. Weak associations are more likely to be explained by undetected biases. However, this does not rule out the possibility of a weak association being causal.
Consistency of the association
An association is more likely to be causal when a number of similar results emerge from different studies done in different populations. Lack of consistency, however, does not rule out a causal association.
Temporality
For an exposure to cause an outcome, it must precede the effect.
Biological plausibility
Is there a biologically plausible mechanism by which the exposure could cause the outcome? The existence of a plausible mechanism may strengthen the evidence for causality; however, lack of such a mechanism may simply reflect limitations in the current state of knowledge.
Dose-response relationship
The observation that an increasing dose of an exposure increases the risk of an outcome strengthens the evidence for causality. Again, however, absence of a dose-response does not rule out a causal association.
Coherence
Coherence implies that the association does not conflict with current knowledge about the outcome.
Experimental evidence
Experimental studies in which changing the level of an exposure is found to change the risk of an outcome provide strong evidence for causality. Such studies may not, however, always be possible, for practical or ethical reasons.
The effect of these riders is to say that the rules as normally understood shouldn't be thought of as rules. A weak association can be as valid as a strong one. An inconsistent association can be as valid as a consistent one. Lack of a known biological pathway merely exposes our ignorance. A dose-response reaction is an optional extra. And one need not worry about lack of experimental evidence if one doesn't have the resources or if one has an ethical objection. The absurdity of these statements can be made apparent if one applies them to the argument that prayer has provable medical efficacy: any reported association with recovery is weak, inconsistent, and without a plausible biological mechanism, yet under the riders none of that rules out causation. Prayer clearly passes the new BMA test for epidemiological causation.
As Richard Lindzen of MIT said apropos of the global warming debate, “Science is a tool of some value. It provides our only way of separating what is true from what is asserted. If we abuse that tool, it will not be available when it is needed.” Epidemiology has been one of the most valuable scientific tools ever devised. Yet it has suffered so much abuse it is barely recognizable. When even the British Medical Association joins in the abuse, who will stand up for science?