A Barrage Of Legal Threats Shuts Down Whistleblower Site, Science Fraud
Those of us concerned about the decaying credibility of Big Science were dismayed to learn that the whistleblower site Science Fraud has been shut down due to a barrage of legal threats against its operator. With billions of dollars in federal science funding hinging on the integrity of academic researchers, and billions more in health care dollars riding on the truthfulness of pharmaceutical research claims, the industry needs more websites like this, not fewer.
Regular readers of Retraction Watch, a watchdog site run by two medical reporters, got the news along with a story about the blog’s anonymous editor, who has since come forward and identified himself as Professor Paul Brookes, a researcher at the University of Rochester. Operated as a crowdsourced reference site much like Wikipedia, Science Fraud, in its six months of operation, documented egregiously suspicious research results published in over 300 peer-reviewed publications. Many were subsequently retracted, including a paper by an author whose lawyer sent Science Fraud a cease-and-desist letter.
Given the tens of millions of dollars in misappropriated research funds that financed this small sample of what is surely a larger problem, and the cascading pollution of the scientific literature whenever fraudulent publications get cited, it’s a shame that this tip-of-the-iceberg effort at cleansing the muck is being shut down rather than expanded.
Attempts to interview Dr. Brookes were not successful. One can just imagine what he is going through for the sin of trying to shine a light into the dark corners of the guild that controls the flow of money, tenure, prestige, and publications in the insular world of Big Science. While pressure to publish is higher than ever given the aforementioned financial and non-financial rewards conferred on scientists claiming breakthrough results, nothing excuses the ethical transgressions Science Fraud was designed to uncover.
Fraud, plagiarism, cherry-picked results, poor or non-existent controls, confirmation bias, data that are opaque, missing, or simply unavailable, and stonewalling when questioned have gone from rare to everyday occurrences. Just look at the soaring retraction rates across scientific journals and the increasingly vocal hand-wringing of science vigilantes. Hardly a prestigious university or large pharmaceutical company is immune, with the likes of Harvard, Caltech, Johns Hopkins, Ohio State, the University of Kentucky, and the University of Maryland recently fingered by Retraction Watch.
And if you think science fraud only impacts the scientific literature, consider the horrendous case of Dr. Scott Reuben, formerly chief of the acute pain service at Baystate Medical Center in Massachusetts. He was sentenced to prison for falsifying research data that purportedly demonstrated the efficacy of analgesic medications sold by Pfizer, Merck, and Wyeth; the fraudulent results appeared in dozens of journals before his fabrications were uncovered. And while Reuben is through as a scientist, the problem lingers on, as his research papers were among the most heavily cited in the field.
When I first began looking into the increasingly vexing problem of irreproducible scientific research, I assumed that the bulk of the problem was caused by sloppy science. Not so, says a study published in the Proceedings of the National Academy of Sciences that attributes two-thirds of retractions in the biomedical and life sciences to scientific misconduct. And remember, these are only the people who have gotten caught.
In fact, it’s amazing that anyone gets caught at all. While the U.S. Office of Research Integrity (ORI), part of the Department of Health and Human Services, is charged with rooting out science fraud, its investigators must rely on allegations submitted by scientists in the field. And yet consider the consequences to the career of any whistleblower. How many graduate students are likely to turn in their Principal Investigator (PI), knowing that doing so would dash their hopes of earning a Ph.D.? How many post-docs would do the same, throwing away their chance at a faculty appointment? How many assistant professors would jeopardize their shot at tenure by outing a colleague? And how many PIs would be willing to wade into a controversy by bringing charges against the very peers who review their publications and grant proposals? It isn’t hard to see how this leads to a culture of omertà (though without the worry of a visit from Luca Brasi).
Conspiracy theory? I have personally spoken with young graduate students who, while reviewing papers on behalf of their PIs, detected falsified data, usually by noticing identical noise floors in two different readings, a statistical near-impossibility for genuinely independent measurements. They were told to keep quiet about it. Those fraudulent results are now part of the scientific literature. Every time I write a column like this, I get email from more of them, none of whom will come forward, for the reasons outlined above.
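For readers curious what such a check might look like in practice, here is a minimal, purely illustrative sketch, not drawn from this column or from any reviewer's actual workflow, of one way to flag suspiciously identical noise between two supposedly independent readings. The function names, the polynomial detrending step, and the 0.95 threshold are all assumptions made for the example.

```python
# Illustrative sketch only: flag "identical noise floors" in two readings.
# Assumes each reading is a 1-D numpy array of equal length; all names here
# are hypothetical and chosen for the example.
import numpy as np

def noise_residuals(trace, degree=3):
    """Remove a smooth polynomial trend so only the noise remains."""
    x = np.arange(len(trace))
    trend = np.polyval(np.polyfit(x, trace, degree), x)
    return trace - trend

def suspiciously_identical(trace_a, trace_b, threshold=0.95):
    """Independent measurements should have uncorrelated noise.
    A residual correlation near 1.0 suggests duplicated data."""
    r = np.corrcoef(noise_residuals(trace_a), noise_residuals(trace_b))[0, 1]
    return r, r > threshold

# Example: a copied trace keeps its noise and is flagged; a fresh one is not.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
original = np.sin(x) + rng.normal(scale=0.1, size=x.size)
copied = original + 0.5                    # same data, shifted baseline
independent = np.sin(x) + rng.normal(scale=0.1, size=x.size)

print(suspiciously_identical(original, copied))       # correlation ~1 -> flagged
print(suspiciously_identical(original, independent))  # correlation ~0 -> clear
```

The idea is simple: after the smooth trend is removed, genuinely independent measurements should leave residuals that are essentially uncorrelated, so a correlation near 1.0 is a red flag worth a closer look rather than proof of fraud on its own.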
Something needs to be done to change the culture to make it easier to root out the bad apples. Too much is at stake to let this go—not just because of the research dollars wasted or the misguided public policy that might result, but because bad science threatens to mislead the vast majority of good scientists who wouldn’t dream of doctoring their results.
The change will come not from public policy, but from the conscientious action of brave individuals. If you witness science fraud and you don’t speak out, consider yourself part of the problem. Meanwhile, a proposal is being drafted to establish a non-profit foundation, the Association for Anonymous Post-Publication Peer Review (AAPPR), whose purpose will be to continue the mission of Science Fraud under the auspices of an open, properly managed governance structure. Stay tuned as the story develops.