Science depends on research integrity more heavily than many are willing to acknowledge. The scientific method is presented as an uncompromising, self-correcting mechanism that will eventually expose any and all transgressors, as subsequent research uncovers the misconduct of the original authors.
This confidence has, historically, been absolute, to the extent that the system functions on trust. In the same way that currency still functions without the backing of the gold standard, research still operates on the assumption that if findings came from a reputable institution and were published in a prestigious journal, they must be true. However, while loss of confidence in a currency can lead to a run on the national bank, hyperinflation, and economic disaster, the loss of trust in the integrity of research data can have even more far-reaching consequences.
No Safety Net
The word integrity derives from the Latin integer, meaning complete or whole. In other words, for the research process to function on trust, you have to be able to trust the whole process. There is no mechanism by which we can get by with trusting certain parts and doubting others. In the past we have counted on commonly accepted methodologies, peer review, transparency of data, and legislative oversight to act as checks and balances in catching the more flexible interpretations of research. That safety net is no longer working, and we have a litany of scientific misconduct as proof.
Every component of the academic research industry appears to have been breached in this broad decline in research integrity: fake studies, fake results, fake peer reviews, fake authors, fake journals, and even fake conferences with slick websites that persuade you to part with attendance fees in order to be a presenter. If we have reached a point where the entire safety net is fake, doesn’t that qualify as a crisis?
In spite of the growing catalog of examples of misconduct, the industry as a whole seems to be settling for an approach of caveat emptor, or buyer beware. If you’re a researcher examining data for your next research project or dissertation, it’s up to you to verify the study and the results it produced. You must do your own investigation of the research team members, the study hypotheses, the sample population, who funded the research, and which institution hosted the study. Was the study ever replicated? If so, good luck finding a journal that published the replication. If the replication study was unsuccessful, was there any response from the original team? Did they even provide the original data for that replication study?
An approach of “buyer beware” conveniently keeps the issue at arm’s length without any obligation to support those buyers or to be proactive in addressing any of the issues raised.
This is not an unsolvable problem. With so many creative and inquisitive minds in the same arena, workable solutions could be found:
- Establish a code of ethics with financial penalties for non-compliance.
- Set aside research funding for replication studies to validate research.
- Require full data transparency to facilitate those replication studies.
- Require formal and timely responses to enquiries prompted by failure to replicate study results.
- Require journals to provide sufficient space to publish those replication studies.
- Set up an arm’s-length oversight body for peer review that includes certification and payment for reviewers.
This is by no means a comprehensive list, and critics will no doubt scream that research data already takes too long to reach the light of day as it is. However, if we can no longer trust that data, does it really matter how long it takes?
Researchers are trapped in the immediacy of “publish or perish,” and for many that pressure helps them rationalize the shortcuts and transgressions that allow flawed research to enter the market. Until such actions are met with stern penalties, it is too easy to take the risk and gamble that you won’t get caught.