Distinction or Popularity?
As researchers toil away at building a career, they are constantly required to establish and justify their expertise in their respective fields. Grant committees, search committees, and tenure committees are all looking for differentiators in order to decide who gets the research money, the job, or the rank.
In such a competitive “publish or perish” environment, you are only as good as your last research paper, and perception is reality. This constant struggle to establish your scholarly value to existing and future employers has led researchers to seek alternative metrics, but many of those alternatives are being dismissed as popularity contests rather than robust indicators of quality scholarship.
“A Very Blunt Instrument”
Distinction has traditionally been achieved by demonstrating the relative impact of your research output: the prestige of the journal that published the paper, and how many times the paper has been cited since publication. For the last 60 years, since Eugene Garfield introduced the metric, the prestige of a journal has been measured by an impact factor that equates the highest number of citations with the highest rate of influence.
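As an illustration, Garfield's two-year impact factor is a simple ratio: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. The figures below are hypothetical, not real journal data:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Impact factor for year Y = citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 600 citations in 2023 to the 150 articles
# a journal published across 2021 and 2022.
print(impact_factor(600, 150))  # → 4.0
```

Note that this is a journal-level average: a handful of highly cited papers can lift the score for every paper the journal publishes, which is precisely why critics call it blunt.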
While the impact factor methodology isn’t that different from Google’s first attempt at ranking websites, critics such as Peter Binfield of the Public Library of Science regard it as “a very blunt instrument.” It can be of some use in choosing where to spend library budget dollars on journal databases, but it falls short in assessing individual research papers.
Tracking citations as a measure of research quality (bibliometrics) was criticized even before it was adopted as a standardized metric. It made no logical sense for researchers to tie the valuation of their scholarship to the perceived quality of a journal over which they had no control beyond the decision to submit a paper, but a citation was the only proof they had that the paper was being read. The h-index has offered a more personalized metric, but it still remains dependent on journals and citations.
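The h-index itself is easy to state: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch, using made-up citation counts:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # this paper still clears the threshold
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers: four have at least
# four citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The dependence the text describes is visible in the inputs: the function consumes nothing but journal-mediated citation counts, so any distortion in citation behavior flows straight into the score.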
Very recently, researchers have been exploring ways to leverage technology’s ability to process large volumes of data and surface more information at the article level. Terms such as cybermetrics, webometrics, and the more user-friendly altmetrics (short for “article-level metrics” or “alternative metrics,” depending on your preference) have entered common parlance to describe methodologies for capturing all the evaluative data now available on a research paper. Journal reputations and citation counts have now been superseded by online views, bookmarks, blog posts, tweets, and shares as more indicative of real interest in a research paper.
Critics argue that views, tweets, and shares are easily manipulated. Many celebrities lost millions of followers in Twitter’s latest purge of fake accounts. Advocates, on the other hand, would argue that the value of real-time usage data far outweighs the risk of a few bogus accounts.
Perception Is Reality
The creativity of the research community in leveraging all this data is to be commended. However, the value of such data will ultimately depend on whether the institutions and funding organizations that must accept it perceive it as a credible measure of research quality. We are seeing rapid growth in aggregators that can compile this data and link it to individual articles, but availability doesn’t automatically equate to utilization.