Research Impact

Introduction

Research metrics are quantitative measurements designed to evaluate research outputs and their impact. They encompass a variety of measures and statistical methods for assessing the quality and broader impact of scientific and scholarly research. Traditionally, the research impact of a researcher is measured by the number of papers they have published in peer-reviewed journals, the citation counts those papers have received, and the researcher's h-index. These metrics are usually gathered from citation databases such as Web of Science, Scopus and Google Scholar. Below is a list of common traditional research impact indicators used in citation databases.

Research Metrics - Article Level

Article-level metrics measure the usage and impact of published research at the level of the individual article. Researchers look for citing articles to trace the evolution of research and ideas. Article-level metrics include conventional citation counts as well as newer normalized metrics such as the Category Normalized Citation Impact (CNCI), Field Weighted Citation Impact (FWCI) and Relative Citation Ratio (RCR).

Citation metrics count and analyse the number of times other researchers refer to (cite) a given publication. This can be a useful measure of the level of attention and impact within scholarly publishing. Citation metrics can be generated at the article, author, journal or institutional level, and some are normalised for discipline, document type and "age" of paper (i.e. number of years since publication). Citation metrics can be obtained from databases such as Web of Science, Scopus and Google Scholar. Self-citations are citations by an author to other papers they have themselves authored. Some citation databases, such as Scopus, can report citation counts that either include or exclude such self-citations.

The Category Normalized Citation Impact (CNCI) is a metric from Web of Science and InCites. It is calculated by dividing a document's actual citation count by the expected citation rate for documents with the same document type, publication year and research area. The CNCI of a set of documents is the average of the CNCI values of all documents in the set. A CNCI value of 1 represents performance on par with the world average; values above 1 are above the world average, and values below 1 are below it. CNCI is useful for benchmarking citation performance across groups of different size, disciplinary scope and age, such as research institutions, research groups or geographic regions.
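
To make the arithmetic concrete, below is a minimal Python sketch using hypothetical citation counts and expected rates. It is not Clarivate's implementation; in practice the expected rates come from the InCites baselines.

```python
# Minimal sketch of the CNCI arithmetic, with hypothetical citation
# counts and expected rates (not Clarivate's implementation).
papers = [
    {"citations": 12, "expected": 8.0},   # "expected" = mean citations for
    {"citations": 3,  "expected": 6.0},   # the same document type, year
    {"citations": 20, "expected": 10.0},  # and research area
]

# CNCI of one document = actual citations / expected citation rate
per_paper = [p["citations"] / p["expected"] for p in papers]

# CNCI of a set of documents = average of the per-document values
set_cnci = sum(per_paper) / len(per_paper)

print([round(c, 2) for c in per_paper])  # [1.5, 0.5, 2.0]
print(round(set_cnci, 2))                # 1.33 -> above the world average
```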

Field Weighted Citation Impact (FWCI), from Scopus and SciVal, is an indicator of impact normalised for field/discipline, publication year and document type. FWCI indicates how the number of citations received by a set of publications compares with the average or expected number of citations received by similar publications. An FWCI of 1 means that the publications have been cited exactly as often as the global average for similar publications; an FWCI above 1 is above the world average, and an FWCI below 1 is below it. Like CNCI, FWCI is useful for benchmarking citation performance across groups of different size, disciplinary scope and age, such as research institutions, research groups or geographic regions.
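
As a worked example with hypothetical numbers: a paper cited 12 times, where similar publications (same field, publication year and document type) have received 8 citations on average, has an FWCI of 12 / 8 = 1.5, i.e. 50% more citations than the world average for comparable papers.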

The Relative Citation Ratio (RCR) is a citation-based measure of the scientific influence of a publication, provided by the NIH iCite database. It is calculated from the citations of a paper, normalized to the citations received by NIH-funded publications in the same research area and publication year. The area of research is defined by the corpus of publications co-cited with the article of interest (the "co-citation network"). The RCR is calculated for publications that are indexed in PubMed and are at least 2 years old. An RCR value of 1 means that a publication has received the same number of citations as would be expected based on the NIH norm. This metric has so far only been applied to the sciences, as it uses NIH-funded research as its basis of comparison; however, the benchmarking stage of the RCR calculation offers a potential mechanism for extending it to other areas of research.

Hutchins, B. I., Yuan, X., Anderson, J. M., & Santangelo, G. M. (2016). Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. PLoS Biology, 14(9), e1002541.
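
As a simplified worked example (hypothetical numbers, glossing over the NIH benchmarking step): a five-year-old paper that has drawn 30 citations (6 per year), in a co-citation field where comparable NIH-benchmarked papers average 4 citations per year, would have an RCR of roughly 6 / 4 = 1.5.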

Research Metrics - Author Level

Author-level metrics measure the impact of an individual researcher's output. They are designed to help researchers analyse the cumulative impact of works across their publication history, giving a more holistic picture of impact that spans the full range of journals in which they publish. Author-level metrics can be derived from article-level metrics by aggregating or summarizing the impact of an author's publications.

One main indicator of a researcher's output is their number of publications, which is used to reflect research productivity. Publications normally include peer-reviewed and officially published journal articles, conference proceedings papers, books and book chapters. Depending on the research discipline, other forms of publication, e.g. technical reports, working papers, commentaries and textbooks, might also be accepted as academic publications.

Citation counts apply at the author level as well: the citation metrics described under Article Level above can be aggregated across all of a researcher's publications, with or without self-citations.

The h-index is an author-level metric that attempts to measure both the productivity and the impact of a researcher, and is often used as a yardstick to gauge the impact of an individual researcher's publications. Introduced by Hirsch (2005) to quantify a researcher's research output, the h-index is "defined as the number of papers with citation number ≥h".1 For example, a researcher with an h-index of 8 has 8 papers with at least 8 citations each. The h-index can be used to compare the productivity and citation impact of researchers with comparable years of academic experience in the same or similar fields of research. However, it has limited value when comparing researchers from different subject areas or disciplines, as different research fields have different citation and referencing behaviours. In addition, the h-index is not very useful for evaluating researchers in disciplines whose output is mainly books and other non-article formats, since books and book chapters are typically not well covered in citation databases.

1Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572.

The i10-index, created by Google Scholar, is the number of publications a researcher has authored that have received at least 10 citations each. It is another indicator that helps gauge the productivity of a researcher.
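
Both indices can be computed directly from a list of per-paper citation counts. A minimal Python sketch, using hypothetical citation data:

```python
# Minimal sketch: computing the h-index and i10-index from a list of
# per-paper citation counts (hypothetical data).
citations = [30, 22, 18, 15, 12, 10, 9, 8, 4, 2]

def h_index(counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(counts):
    """Number of papers with at least 10 citations (Google Scholar)."""
    return sum(1 for c in counts if c >= 10)

print(h_index(citations))    # 8 -> 8 papers with at least 8 citations each
print(i10_index(citations))  # 6
```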

The Category Normalized Citation Impact (CNCI) for a researcher is the average CNCI of all of their publications. The mean CNCI of a set of publications can be strongly influenced by outliers, and should therefore be used with caution for small datasets, such as the output of an individual researcher.

The Field Weighted Citation Impact (FWCI) for a researcher is the average FWCI of all of their publications. The mean FWCI of a set of publications can be strongly influenced by outliers, and should therefore be used with caution for small datasets, such as the output of an individual researcher.

Research Metrics - Journal Level

Journal-level metrics measure the impact or prestige of a journal based on the number of publications it carries and the citations those publications receive. They are designed to measure the aggregate impact of a journal's publications, and can give a sense of which journals are popular and/or respected within a specific research field.

Journal Impact Factor (JIF) is a commonly used metric for evaluating journals, provided by Journal Citation Reports (JCR, Clarivate) and based on the Web of Science database. JIF is calculated by dividing the citations received in a given year to items published in the journal during the previous two years by the number of citable items published in those two years. It thus provides a functional approximation of the mean citation rate per citable item. For example, if Journal A has an impact factor of 5, then on average the articles it published in the previous two years were each cited 5 times in the JIF year. JIF is therefore used as a measure of the relative importance of a journal within its field. However, JIF does not account for differences in citation behaviour across disciplines, and it is not a measure of the quality of the individual articles a journal publishes.
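
As a worked example with hypothetical numbers: if articles a journal published in 2021 and 2022 were cited 500 times during 2023, and the journal published 100 citable items across 2021-2022, its 2023 JIF would be 500 / 100 = 5.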

CiteScore was introduced by Elsevier in 2016 as an alternative to the Journal Impact Factor from JCR (Clarivate). It is essentially the average number of citations per document that a title receives over a four-year period. For instance, to calculate the value for the year 2022, CiteScore counts the citations received in 2019-2022 by documents published in that same period, and divides that by the number of indexed documents published in 2019-2022 in Scopus. A CiteScore value is available for most active serial titles in Scopus: journals, book series, conference proceedings and trade journals. Like the Journal Impact Factor, CiteScore is commonly used as a measure of the relative importance of a journal within its field, but it has its own limitations and should be only one of several factors considered when evaluating a journal.
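
As a worked example with hypothetical numbers: if documents a journal published in 2019-2022 received 4,000 citations during 2019-2022, and the journal published 1,000 indexed documents in that window, its CiteScore 2022 would be 4,000 / 1,000 = 4.0.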

Starting from the 2021 JCR release, the Journal Citation Indicator (JCI) is calculated for all journals indexed in the Web of Science Core Collection. The value represents the average category-normalized citation impact of the papers a journal published in the prior three-year period. For instance, the 2022 Journal Citation Indicator is calculated for journals that published citable items (i.e. research papers classified as articles or reviews in the Web of Science) in 2019, 2020 and 2021, counting all citations those items received from any document indexed between 2019 and 2022.
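
As with CNCI, a JCI of 1 represents the category average: a journal with a JCI of 1.5, for example, has papers that were cited 50% more than the average for their category, document type and year.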

Source Normalized Impact per Paper (SNIP) measures the contextual citation impact of a journal by weighting citations according to the total number of citations in a discipline, so that a single citation is given greater value in fields where citations are less frequent. It is based on the journal's raw impact per paper (the number of citations received in the present year by publications from the past three years, divided by the number of those publications), which is then normalized for the citation potential of the journal's field. SNIP values for individual titles can be found in Scopus; lists of titles' SNIP values are available from Leiden University's CWTS Journal Indicators.
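
As a simplified worked example (hypothetical numbers): a journal whose publications from the past three years are cited 6 times each on average in the present year, in a field whose citation potential is twice the database average, would have a SNIP of 6 / 2 = 3.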

Developed by the Scimago Lab, the SCImago Journal Rank (SJR) is a measure of the prestige of scholarly journals that accounts both for the number of citations a journal receives and for the prestige of the journals those citations come from. SJR represents the average number of weighted citations received in a selected year per document published in that journal during the previous three years, as indexed by Scopus. Using an algorithm similar to PageRank, incoming citations from prestigious journals are given more weight than those from less prestigious ones; the prestige value depends on the field, quality and reputation of the journal in which the citing article is published. Scimago uses the Scopus journal classification scheme to rank journals by quartiles across subject areas, and the rankings are discipline-normalised to account for differences in citation behaviour between disciplines.
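
The full SJR computation involves several additional steps, but the core PageRank-style idea, that a citation is worth more when it comes from a prestigious journal, can be sketched in Python with a hypothetical three-journal citation matrix (this is illustrative only, not the actual SJR algorithm):

```python
# Simplified PageRank-style prestige iteration over a hypothetical
# three-journal citation matrix. Illustrative only: the real SJR
# algorithm adds further steps such as limiting self-citation and
# normalizing by the number of documents published.
# cites[i][j] = citations from journal i to journal j
cites = [
    [0, 10, 2],   # Journal A
    [8,  0, 4],   # Journal B
    [1,  3, 0],   # Journal C
]
n = len(cites)
damping = 0.85
prestige = [1.0 / n] * n  # start from equal prestige

for _ in range(50):  # iterate until the scores stabilise
    new = []
    for j in range(n):
        # a citing journal passes on its prestige in proportion to
        # where its outgoing citations go
        inflow = sum(
            prestige[i] * cites[i][j] / sum(cites[i])
            for i in range(n)
            if sum(cites[i]) > 0
        )
        new.append((1 - damping) / n + damping * inflow)
    prestige = new

# citations from high-prestige journals now count for more
print([round(p, 3) for p in prestige])
```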

Responsible Metrics

Use Cases of Research Metrics (bibliometrics)

  • Measure research impact 

Research metrics (bibliometrics) can help to illustrate the impact of a scholarly publication or group of publications in the greater research community. When researchers need to provide supporting evidence for research grants and academic promotions, or to benchmark and assess their research performance against peers, they can make use of author-level metrics.

  • Identify where to publish

Journal-level metrics like Journal Impact Factor and CiteScore can help to identify journals that receive more citations and attention within academia. Whilst publishing in a highly cited journal doesn't guarantee that a paper will be well read or well cited, it may help to raise the profile of the work and boost the author's CV.

  • Seek research collaboration

Scientific research is becoming an increasingly collaborative endeavour. Although the nature and magnitude of collaboration vary among disciplines, research metrics (bibliometrics) offer a convenient and non-reactive tool for studying collaboration in research. For instance, researchers could use author-level and article-level metrics to identify the most prominent researchers and publications in a field, and then explore opportunities for further collaboration.