
Is the Relative Citation Ratio a Better Metric to Evaluate Scientific Papers?

Academic researchers present their discoveries to the scientific community by publishing papers in academic journals. To do the work behind these publications, scientists must compete for funding support, and a large part of how funding is allocated depends on the quality of a researcher's work. Grant review panels have to make difficult predictions about the likelihood of future scientific success. Increasingly, review committees for government and federal grants are turning to numerical approaches to assess scientific work. A simple approach is to count the number of first- or corresponding-author publications. Decision makers also rely on the impact factor of the journals in which the publications appear, or compute the h-index. The impact factor of an academic journal is determined yearly and is based on the average number of citations the journal receives; it is often taken as a proxy for the relative importance of a journal within its field. The h-index, by contrast, is not an assessment of the journal but a metric of an author's personal productivity and citation impact, calculated from the set of the scientist's most cited papers and the number of citations those papers have received in other publications. In general, journals with higher impact factors are considered more important than those with lower impact factors. Although all these metrics are widely adopted, many in the scientific community feel they are inadequate.
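To make the h-index concrete, the sketch below implements its standard definition (the largest h such that the author has h papers with at least h citations each). It is only an illustration; the citation counts and function name are hypothetical and not tied to any particular database.

    # Illustrative sketch of the h-index definition: the largest h such
    # that the author has h papers with at least h citations each.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Example: papers cited [10, 8, 5, 4, 3] times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4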

First, simply counting first- or corresponding-author publications rewards quantity over quality. Second, although the impact factor has a large effect on funding and hiring decisions, it masks large differences in the influence of individual papers, and because scientists in different fields have differential access to high-profile publication venues, it is of limited use for multidisciplinary analyses. Finally, although the h-index tries to put a value on the individual, it puts early-career investigators at a disadvantage, since they have had less time to accumulate publications and citations. There is therefore a clear need for alternative methods that can more fairly normalize the assessment of scientific work when sorting through large pools of qualified candidates.

Introduction to Relative Citation Ratio (RCR)

In the biomedical sciences alone, more than one million new papers are published each year. This enormous volume of information, combined with the increasing specialization of many scientists, has created the need for new performance metrics to evaluate a researcher's contribution to the field. Recently, an improved method to quantify the influence of a research article was described: a group at the National Institutes of Health (NIH) in the United States developed the Relative Citation Ratio (RCR). This new metric makes use of a co-citation network; when it assesses a paper, it considers the other papers that appear alongside it in the reference lists of the articles that cite it. By doing this, it field-normalizes the number of times an article is cited. When an author chooses to cite another author's work, that citation signals relevance, and the RCR aggregates this signal in a way that can provide valuable supplemental information for funding agencies. In short, the RCR is a field-normalized metric that shows the citation impact of one or more articles relative to the average NIH-funded paper.
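As a rough illustration of the co-citation idea (a minimal sketch, not the NIH implementation), the code below collects the papers that appear alongside a target article in the reference lists of the papers that cite it. The data layout and names are assumptions made for the example.

    # Minimal sketch of a co-citation network for one target paper
    # (assumed data shapes; not the NIH implementation).
    def co_citation_network(target_id, citing_papers):
        # citing_papers: list of dicts like {"id": ..., "references": [ids]},
        # one per paper that cites target_id.
        co_cited = set()
        for paper in citing_papers:
            if target_id in paper["references"]:
                co_cited.update(ref for ref in paper["references"] if ref != target_id)
        return co_cited

The resulting set of co-cited papers defines the "field" against which the target article's citation rate is later compared.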

The RCR makes this assessment by dividing a paper's actual citation rate by an expected citation rate, which yields an observed-over-expected ratio that is easy to interpret. The RCR is presented as a decimal number: a value of 1 indicates that the article is performing as expected, while a value greater than 1 means the article has received more citations than comparable papers for that year and subject area. As mentioned above, by taking into account all of the articles that are co-cited with the article of interest, the RCR provides a much more focused view of the article's subject area, because the resulting average reflects the citation behavior of that distinct blend of fields.
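A minimal sketch of the arithmetic is shown below, assuming we already have the article's citations per year and the expected citations per year derived from its co-citation field; the function and argument names are hypothetical, and the real NIH calculation additionally benchmarks against NIH-funded papers.

    # Observed-over-expected ratio behind the RCR (assumed inputs;
    # the actual NIH method also benchmarks to the NIH portfolio).
    def relative_citation_ratio(article_citations_per_year, expected_citations_per_year):
        # 1.0 means the article is cited as often as expected for its field;
        # values above 1.0 mean it is cited more than its peers.
        return article_citations_per_year / expected_citations_per_year

    # Example: 12 citations per year in a field where comparable papers
    # receive 4 citations per year gives an RCR of 3.0.
    print(relative_citation_ratio(12, 4))  # prints 3.0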

How does RCR measure up?

A group in Germany called UberResearch is testing RCR with its partners, including publishers, research funders, and academic research institutions. Thus far, the general assessment is that RCR is a major step forward in comparing a publication’s citation rates across disciplines. Based on the results from UberResearch, Digital Science is building RCR into many of its products; for example, RCR values now appear in the ReadCube viewer and the Dimensions database. Ultimately, having a common value for comparing publications is a great advantage for these evaluation workflows.

The team at the NIH working on RCR is continuing to develop and improve it. They are seeking ideas and opinions to make additional enhancements to the calculation and approach of RCR. It seems that more and more publishers, funders, and academic partners are incorporating RCR into their own evaluative processes and workflows. This will provide more information on its usefulness and how to improve it in the future.

Right now there is no perfect metric for evaluating science, but what really distinguishes RCR is the combination of its methodological improvements, its cross-disciplinary applicability, and its rapid adoption. As RCR continues to improve, it may prove to be the most useful metric yet.
