How Rare is Scientific Misconduct?

  Aug 12, 2015   Enago Academy

Misplaced Confidence in Science?

Assessing how rare scientific misconduct is across the broader scientific research community is virtually impossible: misconduct cannot be identified until it has been discovered.

Any attempt to extrapolate from a small sample would be equally misleading, since it assumes the sample is representative of the larger population of research studies. Re-examining every piece of published research would be a Herculean task, and could only be justified on evidence of a growing problem.

The profession of scientific research has always assumed that the principles of good research practice were a given, such that research results can be trusted until given cause to think otherwise. The weakness in that argument is that by the time you are given such cause, it may already be too late!

In Search of the Best Misconduct

In October 2012, the US journal Proceedings of the National Academy of Sciences (PNAS) published a study of 2,047 research papers indexed in the PubMed database that had been retracted from the biomedical and life sciences literature. This was done in order to identify the mix of scientific misconduct versus human error.

It was found that researcher misconduct accounted for more than two-thirds of the retractions.

One of the co-authors of the study, Dr. Arturo Casadevall, lamented that their findings represented only a conservative estimate of the true scale of scientific misconduct: “The better the counterfeit, the less likely you are to find it – whatever we show, it is an underestimate.”

Just the Tip of the Iceberg?

The PNAS study also commented that the long-term trend seemed to be on the upswing, noting that in 1976, there were only three retractions for misconduct out of 309,000 papers (0.00097%) compared to 83 retractions for misconduct out of 867,700 papers in 2007 (0.0096%).
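The percentages above follow directly from the raw counts cited in the study; a quick sketch verifies them and the rough size of the increase (the counts and rates are those quoted in this article, not recomputed from the PNAS data itself):

```python
# Retraction-for-misconduct rates, using the figures quoted above
retractions_1976, papers_1976 = 3, 309_000
retractions_2007, papers_2007 = 83, 867_700

rate_1976 = retractions_1976 / papers_1976 * 100  # percent, ~0.00097%
rate_2007 = retractions_2007 / papers_2007 * 100  # percent, ~0.0096%
fold_increase = rate_2007 / rate_1976             # roughly a ten-fold rise

print(f"{rate_1976:.5f}% -> {rate_2007:.4f}% ({fold_increase:.1f}x)")
```

The ratio works out to just under ten, consistent with the "apparent ten-fold increase" discussed below.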

An apparent ten-fold increase over three decades is the kind of headline that gets attention, but many in the scientific research community are concerned that the true rate is even higher, and that the full picture is being withheld by academic journal editors who refuse to disclose complete information in retraction notices.

A Call For Transparency

When you consider the amount of money now involved in scientific research and the enormous consequences that can follow when flawed research enters the body of knowledge, the current anxiety over retractions is understandable.

That being said, however, the state of the retraction process in academic publishing is so disorganized that it is only serving to exacerbate the situation.

Retraction notices are often deliberately opaque, driven both by a preoccupation with reputation and by the constant threat of litigation. As a result, there is no way of knowing whether the problem is simple human error (sloppy research) or deliberate misconduct.

The increasing media attention given to scientific misconduct now prompts an assumption that every retraction is due to misconduct, which is inaccurate.

If journal editors are so concerned about reputation, perhaps greater transparency on the reason behind the retraction might help the entire community to get a handle on the frequency of research mistakes as well as the frequency of misconduct so that responses can be developed. Such a commitment to the broader community would, in the long run, be seen as a positive.

The above, of course, sounds great on paper, but in a community where disagreements over the validity of statistical analysis have persisted for decades, the likelihood of heated debates and possible litigation over alleged intent in a specific choice of analysis will prove to be a barrier to transparency for some time to come.

