[Figure: For maximum impact, a researcher would like to publish articles in the most prestigious journals. Data shown is indicative only.]
How are journals rated? One simple but controversial method is the Impact Factor (IF), published by Thomson Reuters. A journal's IF for a given year is the number of citations received that year by the articles the journal published in the previous two years, divided by the number of citable articles it published in those two years. The more citations the articles receive from other researchers, the more important they are considered to be, and the more prestigious the journal that publishes them.
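The two-year calculation can be sketched in a few lines of Python. This is an illustrative sketch, not Thomson Reuters' actual implementation; the function name and example numbers are invented for demonstration.

```python
def impact_factor(citations_this_year: int, citable_articles_prev_two_years: int) -> float:
    """Two-year Impact Factor for year Y (illustrative sketch).

    citations_this_year: citations received in year Y by articles the
        journal published in years Y-1 and Y-2.
    citable_articles_prev_two_years: number of citable articles the
        journal published in years Y-1 and Y-2.
    """
    if citable_articles_prev_two_years == 0:
        raise ValueError("no citable articles in the two-year window")
    return citations_this_year / citable_articles_prev_two_years

# Hypothetical journal: 150 citable articles in the prior two years,
# which collected 600 citations this year.
print(impact_factor(600, 150))  # 4.0
```

Note that the denominator counts only "citable" items (research articles and reviews), a detail that itself invites gaming: a journal can shrink its denominator by reclassifying content.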
There are some obvious problems with this calculation. Does the number of citations really indicate the significance of an article? Remember the debacle over cold fusion? In the year following its 1989 publication, the supposedly revolutionary paper by Pons and Fleischmann was the most cited paper in the world. Soon afterwards the cold fusion bubble burst: no one could reproduce the reported results. Yet during the heyday of the cold fusion mania, anyone consulting the IF would have concluded that Pons and Fleischmann had written the most important paper of the year and that the journal that published it was the most prestigious in the world.
Does IF measure the impact of a paper? Or does it measure the "buzz" created by a sensational find, which may be overblown? There is also the possibility of abuse by editors and authors who have found ways to game the system. An editor might require an author to insert unnecessary self-citations before accepting a paper. Some authors make a habit of giving courtesy citations to colleagues as a professional favor. On the flip side, some authors vengefully refuse to cite competitors, either through pique or in an attempt to skew IF numbers in their favor. The latter practice goes back at least as far as Isaac Newton, who, 300 years before IF existed, revised one of his manuscripts to delete references to his hated rival Robert Hooke.
Nevertheless, IF is here to stay, because the alternatives are no better. How about setting up a team of impartial experts to evaluate journal articles for their quality? But where do you find impartial experts? Will a reviewer be tempted to favor a journal edited by a friend and colleague, or one in which he publishes regularly? What about the specter of bribery? IF dispenses with a small panel of elites in favor of letting the scientific community as a whole decide the value of a paper. The community can be fooled in the short run, but it gets things right eventually. When Gregor Mendel published his 1866 paper "Experiments on Plant Hybridization," it was almost ignored by the scientific community and was cited only three times in the next 35 years. During this time IF would have given the paper a near-zero rating, but today Mendel's paper is one of the most consistently cited in genetics and is justly considered a seminal work in the history of the field.