Researchers and academics often complain about the unrelenting pressure to “publish or perish”. The quantity of work a researcher has published, and the extent to which that work has been cited by peers and other professionals, continue to influence so much of their daily work: funding applications, research study proposals, promotion considerations, and employment applications are all assigned a perceived value based on the applicant's published research.
It’s no wonder, therefore, that some researchers eventually find the pressure unbearable enough to contemplate bending or breaking the rules, delivering results that appear more groundbreaking than they really are in order to boost their chances of getting published and keep the “publish or perish” monster at bay for a little while longer.
A Democratic Problem
If lack of reputation, research facilities, and funding are considered to be contributory factors to increased risks of scientific misconduct, it would seem that such cases would predominate in the less prestigious research institutions as researchers feel compelled to keep up with their Ivy League contemporaries. In fact, nothing could be further from the truth:
- Marc Hauser, a Psychology Professor at Harvard University, resigned in 2011 after it was discovered that he had fabricated data and manipulated results in multiple studies.
- Dipak Das, Director of the Cardiovascular Research Center at the University of Connecticut Health Center (UCHC) was found guilty in 2012 of “145 counts of fabrication and falsification of data, involving at least 23 papers and 3 grant applications.”
- Former Penn State University professor and researcher Craig Grimes was sentenced to nearly 3½ years in prison and ordered to repay more than $660,000 to Penn State and two federal grant-funding agencies (NIH and NSF) for research grant fraud.
A Question of Opportunity
Every instance of scientific misconduct depends on the opportunity to commit the act. Whether it’s a large research sample with a correspondingly large dataset, or a multi-year study with multiple updates, the easier it is to intervene in the research process without being noticed, the greater the temptation for a stressed researcher to make a few adjustments to ensure that the study is a resounding success. Large studies require large budgets, so it could be argued that more money increases the risk of misconduct, but there are measures that can be taken to mitigate that risk.
More Money Should Mean More Oversight
If the research topic warrants a study of significant size and is able to attract sufficient funding (from multiple sources if necessary), there is no reason why appropriate measures couldn’t be put in place to ensure close oversight of all data collection and analysis methodologies. The egos of senior researchers may be notoriously fragile, but accepting frequent process checks and double checks should be perceived as an explicit commitment to research integrity rather than an attempt to cast aspersions on individual personnel.
Why do researchers indulge in scientific misconduct? Is it the “publish or perish” mantra or the need to produce groundbreaking research? Let us know your thoughts in the comments below!