Whether you love it or hate it, you can't ignore the impact factor (IF). Touted by its supporters as a measure of how important a journal is within its field, IF is essentially the average number of citations to an article published in that journal. There are many criticisms of IF: it encourages gaming the system with frivolous citations, bad articles may be highly cited, and so on. Still, it is widely regarded as at least a rough guide to how much exposure a journal will give an author. Although I agree that the calculation of IF is flawed, the principle behind it has some validity: an author ought to publish in journals that will give the widest exposure to the intended audience.
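For the curious, the standard two-year version of the calculation is simple enough to sketch in a few lines of code. The figures below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
def two_year_impact_factor(citations_this_year: int, citable_items: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the journal during the previous two years, divided by
    the number of citable items the journal published in those two years."""
    return citations_this_year / citable_items

# Hypothetical journal: 450 citations in 2024 to its 2022-2023 articles,
# which numbered 150 citable items in total.
print(two_year_impact_factor(450, 150))  # 3.0
```

Much of the controversy hides in the denominator: what counts as a "citable item" (editorials? letters?) is a judgment call, and shifting that count is one of the ways the system gets gamed.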
The Danger of Low Impact Journals
The history of science is full of bitter disputes over priority, often because one claimant had the idea early but published late. In a fair number of cases, though, the early publisher was ignored because the article appeared in an obscure journal that few in the field read. The novel idea lay dormant and hidden from view until a later author published the same idea in a better-read journal and garnered the credit, much to the chagrin of the originator. An author should therefore strive not only to publish novel work promptly, but to do so in well-read journals.
All IFs are not Created Equal
On the other hand, choosing a target journal solely on the basis of IF is a mistake. Research is highly specialized these days, and the highest-IF journal may not be as widely read by the intended audience as a lower-IF but more specialized journal; publishing in the specialized journal may actually give more exposure. A high-IF journal may also impose a long delay between submission and publication, whereas a lower-IF journal will publish more quickly. In today's environment of electronic search engines and viral sharing, does the ranking of IF numbers really matter? Assuming the result is important and the journal is reasonably well read, word will spread.
How to Prove that IF Matters?
I suspect the situation with IF ratings is similar to that of college ratings. In America the Ivy League colleges (Harvard, Yale, and their fellows) have the highest reputations as seats of learning, but that reputation may be undeserved. One study tracked students who were accepted into Ivy League schools but chose to study elsewhere; they did just as well in their careers as those who attended an Ivy League school. I'd love to carry out a similar experiment with publications: submit 100 papers to Nature, randomly withdraw half of those accepted, and publish the withdrawn half in lower-IF journals. Then see which papers get the most citations. Unfortunately, this experiment would be unethical and a waste of editors' time. Too bad. It would be great to have real proof of whether IF matters.