Academic journals are coming under increased scrutiny for the actions they take to manage their rejection rates and their impact factors. Even as we embrace the broader impact measurements of article-level metrics (altmetrics), the journal impact factor remains dominant, resting as it does on the most established bibliometric of all: citation counts.
The more citations a journal is able to generate from each issue, the higher the impact factor. For editors who are open to ways of “juicing” that number, adding citations to an article under the guise of the peer review process is a remarkably deft way of manipulating the citation count. Similar tactics have earned the label of coercive citation, in which an editor makes the addition of extra citations (often with tenuous relevance to the topic of the paper) a requirement for publication.
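The two-year impact factor is the ratio of citations received in a given year to items a journal published in the previous two years, divided by the number of citable items it published in those years. A minimal sketch of why coerced citations move the needle so effectively (the journal and all counts below are hypothetical):

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in the current year to the
# 400 citable items it published over the previous two years.
baseline = impact_factor(1200, 400)        # 3.0

# If the editor coerces just 2 extra self-citations per item
# (400 * 2 = 800 added citations), the metric jumps sharply.
juiced = impact_factor(1200 + 800, 400)    # 5.0
```

The point of the sketch: because the denominator is fixed by past output, every coerced citation flows straight into the numerator, so even small per-article demands compound into a large distortion.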
A Power Play
As an early stage researcher, your primary aim is to build a track record of publications, and journal editors exercise considerable control over how, and which, articles get published in each issue. So, how does citation stacking affect journals and authors? Small, highly specialized journals have been observed to have higher self-citation rates than larger journals, since self-citations help improve the impact factor. This practice damages the overall system, as it degrades the very metric by which a journal's quality, and the science it publishes, is rated.
How are Citations Stacked?
When a family of journals owned by one publishing company takes the coercive citation model to the extreme, the result is a relatively new tactic called citation stacking. Because there are multiple journals in the family, they can cite both themselves and each other to aggressively promote their respective impact factors. In Thomson Reuters’ (TR’s) 2013 Journal Citation Report (JCR), 37 journals were suppressed for questionable citation activity, and 66 journals were banned completely for citation stacking after TR’s new algorithm flagged the anomaly.
Is It Justifiable?
In a specific case in 2013, four Brazilian journals were banned from JCR for one year after it was discovered that the editors had colluded in the publication of a series of articles designed to boost the rankings of their respective journals. The editor of Clinics justified the behavior by citing frustration with a system that gives less attention (and, by extension, lower impact factor rankings) to credible research by Brazilian scientists who were only getting published in Brazilian journals.
JCR tracks over 10,800 journals, which puts the 66 banned journals and 37 flagged journals at less than one percent of the total. However, as long as we continue to give so much weight to the volume of citations as a measure of impact, the temptation to boost that number will be too strong for journals and publishers to ignore.
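The less-than-one-percent figure follows directly from the counts quoted above:

```python
tracked = 10_800   # journals tracked by JCR (approximate, per the text)
flagged = 37       # suppressed for questionable citation activity
banned = 66        # banned for citation stacking

share = (flagged + banned) / tracked
print(f"{share:.2%}")  # 0.95% — just under one percent
```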