
What is the Best Way to Rank Academic Journals?

All Ranking is Relative

If we are to discuss a better way to rank academic journals, we must first ask who benefits from that ranking system. As the industry currently stands, different stakeholders want different things.

Journal editors and editorial boards want prestige to justify high subscription fees and to ensure that their journal is the first choice for submission of all new research in their field. Researchers and librarians, meanwhile, rely on a handful of established metrics, each built around a different assumption:

  • Eigenfactor measures how widely a journal is read. This is useful for librarians looking to make subscription purchases on a limited budget, but it doesn’t help a researcher trying to select the journal that will get their particular article the most readers.
  • Impact Factor tracks the average number of citations to a journal’s recent articles, giving the researcher at best a guesstimate of how widely an individual article in that journal might be cited.
  • SCImago Journal Rank (SJR) attempts to deliver the best of both worlds by combining the number of citations with the prestige of the journals making those citations, using a proprietary algorithm (see the sketch after this list).
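
To make the difference between the last two approaches concrete, here is a minimal Python sketch. All journal names and citation counts are invented, and the prestige loop is a simplified PageRank-style iteration, not SCImago’s proprietary algorithm:

```python
journals = ["A", "B", "C"]
# citations[i][j] = number of citations journal i gives to journal j
# (hypothetical numbers, purely for illustration)
citations = [
    [0, 10, 2],
    [3, 0, 1],
    [8, 4, 0],
]

# Raw view (Impact Factor-style): total citations received, regardless of who cited.
raw = [sum(row[j] for row in citations) for j in range(len(journals))]

# Prestige view (SJR-style): repeatedly redistribute each journal's prestige
# across its outgoing citations until the scores stabilize, so a citation
# from a high-prestige journal counts for more than one from a low-prestige one.
prestige = [1.0 / len(journals)] * len(journals)
for _ in range(50):
    nxt = [0.0] * len(journals)
    for i, row in enumerate(citations):
        out = sum(row)
        if out == 0:
            continue  # a journal that cites nothing passes on no prestige
        for j, c in enumerate(row):
            nxt[j] += prestige[i] * c / out
    total = sum(nxt)
    prestige = [p / total for p in nxt]

for name, r, p in zip(journals, raw, prestige):
    print(f"{name}: citations received = {r}, prestige score = {p:.3f}")
```

With these invented numbers, journal B collects the most raw citations (14 to A’s 11), yet A edges ahead on prestige once *who* is citing is taken into account. That reversal is precisely the behavior a prestige-weighted rank is designed to produce.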

The Danger of Too Many Assumptions

All of these ranking mechanisms seem to have been accepted on a “better than nothing” standard. There is no conclusive evidence that all citations are being captured, and counting readership on the basis of subscriptions is no proof that every subscriber actually reads the journal.

Combining those two flawed scores into an averaged third ranking only exacerbates the issue. In addition, ranking a journal by the number of researchers who want to be published in it circumvents any measurement of the extent to which the journal’s content is actually used.

Accurately Measuring Usage

Project COUNTER now captures download and full-text requests, on the assumption that a full-text request implies actual usage, as opposed to skimming the abstract of a downloaded article that is then discarded as irrelevant to the topic being researched. However, critics argue that even this metric doesn’t truly reflect real usage.
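
As a rough illustration of the distinction at stake, the sketch below tallies abstract views separately from full-text requests over a hypothetical event log (the log format and DOIs are invented; real COUNTER reports are considerably more detailed):

```python
# Count abstract views and full-text requests per article from a
# hypothetical usage log; only the full-text count is treated as "usage"
# under the assumption described above.
events = [
    {"article": "doi:10.1000/1", "action": "abstract_view"},
    {"article": "doi:10.1000/1", "action": "fulltext_request"},
    {"article": "doi:10.1000/2", "action": "abstract_view"},
    {"article": "doi:10.1000/2", "action": "abstract_view"},
]

usage = {}
for event in events:
    counts = usage.setdefault(event["article"],
                              {"abstract_view": 0, "fulltext_request": 0})
    counts[event["action"]] += 1

for article, counts in usage.items():
    print(article, counts)
# The critics' point in miniature: the second article was viewed twice but
# never requested in full text, and neither number proves anyone read it.
```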

Article-level metrics (altmetrics) now experiment with social data to see whether forum posts and Twitter comments can be leveraged as evidence of usage, but open questions about privacy still need to be answered.
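
If such signals were adopted, the simplest aggregation would be a weighted count of mentions, along these lines (the sources and weights below are invented for illustration, not any provider’s actual formula):

```python
# A toy altmetric-style score: weight each mention source and sum.
weights = {"tweet": 1, "forum_post": 2, "news_story": 5}    # invented weights
mentions = {"tweet": 40, "forum_post": 3, "news_story": 1}  # invented counts

score = sum(weights[source] * count for source, count in mentions.items())
print(score)  # 40*1 + 3*2 + 1*5 = 51
```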

A Need for Clear Parameters

The search for an improved journal-ranking model seems to be driven more by the availability of new technology than by a clear vision of who would benefit from that model. It’s clear that there are multiple audiences for this ranking score, which implies that a one-stop solution may not be achievable. Selecting a journal for the prestige it might bring if your submitted article is accepted is a dramatically different objective from picking a journal for content relevant to your current research topic. Serving such disparate objectives may require different ranking models.

So, have we found that elusive best way to rank academic journals? Clearly, there is a need for a model that doesn’t require so many assumptions.
