
Tracking Usage Factor — by COUNTER Usage Factors or Journal Impact Factors?

For many frustrated researchers, the academic publishing industry continues to move at a glacial pace. The Institute for Scientific Information (ISI) Journal Impact Factor (JIF) has been in use for more than four decades, built on citation-counting methodology that dates back to the 1920s and has been updated only in piecemeal fashion ever since.

Librarians and bibliometricians continue to debate the validity of tracking citations as a measure of importance, and raise the following concerns about the Journal Impact Factor:

  • Citation counts follow a highly skewed Bradford-style distribution (since it is impossible to track every citation in every journal), yet the JIF takes an arithmetic mean of those counts, producing a statistically flawed metric (see the sketch after this list).
  • Since citations accrue to individual articles rather than to the journals that publish them, inferring a journal’s relative importance from its articles’ citation counts is skewed at best.
  • Journals quickly worked out how to manipulate the ranking, for example by limiting low-citation topics or publishing more high-citation content such as review articles. Others have taken a more aggressive approach, now called coercive citation, in which authors of submitted articles are “encouraged” to cite articles from sister journals as an unofficial requirement for publication.
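
To make the statistical objection concrete, here is a minimal sketch with invented citation counts. Because citation distributions are heavily skewed, the arithmetic mean that a JIF-style calculation reports can sit far above what a typical article in the journal actually receives.

```python
# Illustrative only: invented citation counts for 20 articles in one journal
# over a two-year window, mimicking the skew typical of citation data.
from statistics import mean, median

citations = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 8, 12, 25, 60, 140]

# A JIF-style score is an arithmetic mean: total citations in the window
# divided by the number of citable items published in that window.
jif_style_score = mean(citations)    # 13.8, pulled up by two outliers
typical_article = median(citations)  # 2.5, closer to what most articles get

print(f"Mean (JIF-style): {jif_style_score:.1f}")
print(f"Median:           {typical_article:.1f}")
```

Two heavily cited outliers lift the mean to 13.8 even though half of the articles were cited twice or less, which is exactly the distortion critics point to.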

Accepted with Flaws

Thus, the JIF seems to have become the commonly accepted metric of quality journal scholarship solely because no viable competitor has emerged. It receives broad criticism but persists on the premise that “it’s better than nothing.” A process that weighs so heavily in favor of the journals being ranked appears to suit the publishing houses, which are quite happy to ignore the frustrations of the authors they publish and of the librarians who must provide solid justifications for the budget dollars they spend on journal subscription fees.

Project “COUNTER”

In 2002, an alternative approach to tracking usage was introduced in the form of Project COUNTER (Counting Online Usage of Networked Electronic Resources).

Developed as an international initiative funded by membership fees from “COUNTER-compliant” members and by sponsorship deals, the project focuses on full-text article requests as a more viable measure of actual usage of an article or paper, as opposed to the questionable practice of citation tracking.

Incorporating the same kind of objective oversight that accounting firms apply when auditing a company’s financial reports, COUNTER members agree to provide usage reports in COUNTER-specified formats and to have those reports audited on a regular schedule to authenticate the data being reported. The COUNTER codes of practice are updated regularly to keep pace with technological advancements, especially the ability of students and faculty to access material through tablets and smartphones.
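
As a rough illustration of what a usage-based ranking involves, the sketch below totals full-text requests per journal from a simplified CSV export. The file name and column names ("journal", "full_text_requests") are assumptions made for the example, not the official COUNTER report layout.

```python
# A minimal sketch, assuming a simplified usage export with hypothetical
# columns "journal" and "full_text_requests"; real COUNTER reports are
# richer and follow a specified format.
import csv
from collections import defaultdict

def total_requests(report_path: str) -> dict[str, int]:
    """Sum full-text requests per journal from a CSV usage export."""
    totals: defaultdict[str, int] = defaultdict(int)
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["journal"]] += int(row["full_text_requests"])
    return dict(totals)

if __name__ == "__main__":
    # "usage_report.csv" is a hypothetical file name for this sketch.
    totals = total_requests("usage_report.csv")
    for journal, count in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{journal}: {count} full-text requests")
```

The ranking logic amounts to “most full-text requests first,” which is precisely why the question of what a request actually proves, raised below, matters.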

However, both the JIF and COUNTER share weaknesses at the core of their metrics. System abuses notwithstanding, citation indexes are not a true measure of usage. How many researchers have included citations regarded as “classics” in their field simply in the hope that other, younger researchers will read them?

In turn, how many students have embellished their reference lists with citations they never actually looked at? Project COUNTER has a similar problem. A user request only proves that someone requested a research article or paper; it offers no proof of what the requester did with it. Was it read and incorporated into a research study or academic paper? Or was it quickly skimmed and discarded as being of no use? Does the count mean much if we are only measuring whether an article was used, not whether it was useful?

A New World of Altmetrics

Website FAQ databases now ask you to rate whether an answer was helpful before you exit, while sites like Amazon have to pester you with reminder emails asking you to review your last purchase. Perhaps libraries can come up with a comparable solution for their databases?

In the meantime, researchers are turning to article-level metrics (altmetrics) and casting a much wider net for any and all available information on an article or research paper.

Using a quasi-crowdsourcing approach, altmetrics methods sweep up as much social media data as is available to build a more detailed profile of who is using, and who is getting value from, a published academic document. We are still in the early stages of accepting the veracity of this data, but we are already seeing large discrepancies between measures of usage and measures of value, which could have a significant impact on the long-held prestige of journal rankings.
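
A sketch of the idea, with invented signal names and weights rather than any provider’s actual formula: altmetrics combine many heterogeneous signals into a single article-level score instead of relying on one count.

```python
# Hypothetical signals and weights, for illustration only; real altmetrics
# providers define their own sources and (often proprietary) weightings.
SIGNAL_WEIGHTS = {
    "news_mentions": 8.0,
    "blog_posts": 5.0,
    "policy_citations": 6.0,
    "social_media_shares": 1.0,
    "reference_manager_saves": 0.5,
}

def altmetric_style_score(signals: dict[str, int]) -> float:
    """Weighted sum of whatever signals were collected for one article."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
               for name, count in signals.items())

article_signals = {
    "news_mentions": 2,
    "social_media_shares": 150,
    "reference_manager_saves": 40,
}
print(altmetric_style_score(article_signals))  # 2*8 + 150*1 + 40*0.5 = 186.0
```

Because the profile is built from many weak signals, two articles with identical download counts can end up with very different scores, which is where the usage-versus-value discrepancies show up.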
