Scientists know that their research carries great value for society. However, impact is not only a reflection of the discovery itself; research makes its greatest impact when it is widely shared. The impact of research publications is therefore an important measure of industrial and academic research performance.
Sharing research helps scientists incorporate new findings and important results into their own work, and it helps policy makers make better decisions. However, assessing the impact of a particular piece of research on society requires more than knowing where it was published and how widely it is discussed. For this reason, the responsible use of research metrics has become an important concern within the research community.
Measuring the impact of publications using research metrics carries some risk. For example, download statistics can be easily manipulated. Therefore, the research community has developed the following criteria for responsible metrics:
- Robustness: Best possible data in terms of accuracy and scope
- Humility: Quantitative evaluation should support qualitative evaluation
- Transparency: Metrics results can be tested and verified
- Diversity: Allow for differences in various fields
- Reflexivity: Constantly updated in response to any changes in indicators
The use of research metrics is still highly experimental and may cause more harm than good. Until more robust measures are developed, researchers will have to rely on the current review mechanisms. So, what does this mean for a university scientist or researcher? The question is best answered by first considering what forms of metric data are available.
Not All Metric Data Is the Same
Among the many sources of citation data, some are less helpful than one might think. Google Scholar counts, altmetrics, and download indicators, for example, can all be manipulated. Conference papers may also be unusable, as they are not always peer reviewed or widely accessible; whether to include them in citation data depends on whether they form an important part of the field in question. Field-normalized indicators are more valuable by comparison. They are constructed so that a value of 1 indicates the average citation impact of an article for a given subject and year, and a value greater than 1 indicates above-average impact. One such indicator, the mean normalized citation score (MNCS), divides an article's citation count by the mean citation count of all articles from the same field and year. A further refinement, the mean normalized log-transformed citation score (MNLCS), log-transforms citation counts before normalizing, which dampens the heavy skew typical of citation data.
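To make the two indicators concrete, here is a minimal sketch of how MNCS and MNLCS could be computed over a set of articles. The article records and citation counts are invented for illustration; real implementations define fields and reference sets far more carefully.

```python
import math
from collections import defaultdict

def normalized_scores(articles, transform=lambda c: c):
    """Normalize each article's (optionally transformed) citation count by the
    mean over all articles in the same field and year. With the identity
    transform this yields MNCS-style scores; with math.log1p, MNLCS-style."""
    by_group = defaultdict(list)
    for a in articles:
        by_group[(a["field"], a["year"])].append(transform(a["citations"]))
    group_mean = {k: sum(v) / len(v) for k, v in by_group.items()}
    return [transform(a["citations"]) / group_mean[(a["field"], a["year"])]
            for a in articles]

def mncs(articles):
    # Mean of citation counts divided by the field/year average.
    scores = normalized_scores(articles)
    return sum(scores) / len(scores)

def mnlcs(articles):
    # Same idea, but counts are first transformed with ln(1 + c) to
    # reduce the influence of a few very highly cited articles.
    scores = normalized_scores(articles, transform=math.log1p)
    return sum(scores) / len(scores)

# Hypothetical data: two fields, one publication year each.
articles = [
    {"field": "biology", "year": 2020, "citations": 12},
    {"field": "biology", "year": 2020, "citations": 3},
    {"field": "physics", "year": 2020, "citations": 40},
    {"field": "physics", "year": 2020, "citations": 8},
]
print(mncs(articles))   # 1.0 by construction when scoring the full reference set
print(mnlcs(articles))
```

Note that averaging over the entire reference set gives 1.0 by construction; in practice one computes the scores for a subset of interest (say, one institution's articles) against field/year means taken from a much larger reference set.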
So what does this mean for researchers? Industry and government bodies are looking to incorporate such metrics into their decisions.
How Metrics Influence Funding and Policy
The Forum for Responsible Research Metrics is an organization of research funders, sector bodies, and infrastructure experts. It is a partnership between various research councils and university research bodies, and its purpose is to develop metrics that can inform research decisions and assessments of a researcher's impact. Although these research metrics are still experimental, once mature they could greatly affect how governments and industries invest in science and how funding agencies grade an investigator's productivity or research environment.
Importance for Researchers
These research metrics still require further refinement, and researchers, universities, and other relevant decision makers remain skeptical about using metrics to manage or fund research. Ultimately, metrics will likely be used only to support “expert judgment, quantitative indicators, and qualitative measures” so as not to discourage diversity in research.