A new report from the Union of Concerned Scientists finds “unacceptably large numbers of federal climate scientists [have] personally experienced instances of [political] interference over the past five years.” At a Congressional hearing yesterday, Rep. Issa questioned the statistical validity of the survey, pointing to OMB guidelines that suggest the UCS survey’s response rate was unacceptably low. Roger Pielke Jr doesn’t think this is a problem:
Mr. Issa focused on the statistical power of the survey, which is the wrong way to look at it. The responses were the responses. They are not evidence of a larger population — the responses ARE the population. That being said the UCS supports my own contention that politics and science are inherently intermixed.
I’d agree with him if the UCS had simply presented the absolute numbers of incidents and said these suggest there may be a problem. But they didn’t: they presented the results as percentages. That suggests either a) the absolute numbers were small enough to be unimpressive and so had to be disguised (and let’s face it, 75 out of 1,600 surveyed — 25% of the 297 responses that reported professional objection — doesn’t seem like a widespread problem), or b) they were trying to convey the impression that these large percentages referred to all federal climate scientists. Either way, it’s a disingenuous means of presentation.
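The arithmetic above can be made explicit. This is a minimal sketch using the figures as stated in this post (1,600 surveyed, 297 relevant responses, roughly 75 incidents), not numbers verified against the UCS report itself; it simply shows how the same count reads very differently depending on the denominator chosen.

```python
# Figures as cited in the post (assumptions, not verified against the UCS report).
surveyed = 1600    # scientists who received the survey
responses = 297    # responses reporting on professional objections
incidents = 75     # roughly 25% of those 297 responses

# The headline-style figure: incidents as a share of respondents.
pct_of_respondents = 100 * incidents / responses

# The absolute picture: incidents as a share of everyone surveyed.
pct_of_surveyed = 100 * incidents / surveyed

print(f"{pct_of_respondents:.1f}% of respondents")   # ~25.3%
print(f"{pct_of_surveyed:.1f}% of all surveyed")     # ~4.7%
```

The same 75 incidents look like a quarter of respondents but under five percent of the scientists actually surveyed, which is the presentational gap the paragraph above objects to.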
A second point is that even once you have established absolute numbers, you need some benchmark against which to assess their meaning. If there is a general, to-be-expected level of political interference in government, then we need to know whether the current level is above or below that baseline, and if above, whether the margin is significant. Self-reports from the same people *cannot* establish that benchmark, since in previous administrations it may have been other people who received the pressure. So the absolute number actually tells us very little.
The point Rep. Issa raised about the OMB guidelines is also important, because the OMB recognizes the biases that can creep in with low response rates and requires a nonresponse bias analysis for each question when a survey’s response rate falls below 70 percent. If, for instance, one motivation to respond to the survey was political animus against the administration, that bias might have led respondents to overstate cases, undermining the reliability of the numbers even as absolutes. A truly scientific survey would have included a bias analysis assessing the effect of such biases on the numbers. As far as I can see, the UCS didn’t even attempt this.
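To see why a low response rate matters so much, here is a minimal sketch (my own illustration, not the OMB’s prescribed method) of a worst-case bounding exercise: since nonrespondents’ answers are unknown, the true population rate can only be bounded, and with an 18.6% response rate those bounds are enormous.

```python
def population_rate_bounds(n_sampled, n_responded, n_affirmative):
    """Best/worst-case share of the full sample reporting interference,
    assuming the nonrespondents could all have answered either way."""
    nonrespondents = n_sampled - n_responded
    low = n_affirmative / n_sampled                       # no nonrespondent affected
    high = (n_affirmative + nonrespondents) / n_sampled   # every nonrespondent affected
    return low, high

# Figures as cited in this post: 1,600 surveyed, 297 responses, ~75 affirmative.
low, high = population_rate_bounds(1600, 297, 75)
print(f"true rate could lie anywhere from {low:.1%} to {high:.1%}")
```

With no nonresponse bias analysis, the survey data alone are consistent with a true rate anywhere from under 5% to over 86%, which is why the OMB threshold exists.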
In other words, the UCS survey doesn’t provide us with particularly useful evidence of anything, other than evidence that UCS will use “the purity of science,” as it were, as a stick with which to beat the Administration.