
Research Impact: Home


Research metrics are quantitative measures designed to evaluate research outputs and their impact. They encompass a variety of indicators and statistical methods for assessing the quality and broader influence of scientific and scholarly research. Traditionally, a researcher's impact is measured by the number of publications in peer-reviewed journals, the citation counts those papers have received, and the researcher's h-index. These metrics are usually gathered from citation databases such as Web of Science, Scopus and Google Scholar. Below is a list of common traditional research impact indicators used in citation databases.

Traditional Research Metrics

One main indicator of a researcher's output is the number of publications, which reflects research productivity. Publications normally include peer-reviewed, officially published journal articles, conference proceedings papers, books and book chapters. Depending on the discipline, other forms of output, e.g. technical reports, working papers, commentaries and textbooks, might also be accepted as academic publications.

Citation metrics count and analyse the number of times other researchers refer to (cite) a given publication, and can be a useful measure of the level of attention and impact within scholarly publishing. They can be generated at the article, author, publication or institutional level, and some are normalised for the discipline, type and age of the paper. Citation metrics can be obtained from three main sources: Web of Science, Scopus and Google Scholar. Self-citation counts refer to the number of times authors have cited themselves in their own papers. Some citation databases, such as Scopus, can differentiate between citation counts that include and exclude such self-citations.

The h-index is an author-level metric that attempts to measure both the productivity and the impact of a researcher, and is often used as a yardstick to gauge the impact of an individual researcher's publications. Introduced by Hirsch (2005) to quantify a researcher's research output, the h-index is "defined as the number of papers with citation number ≥h".1 For example, a researcher with an h-index of 8 has 8 papers with at least 8 citations each. The h-index can be used to compare the productivity and citation impact of researchers with comparable years of academic experience in the same or similar fields. However, it has limited value for comparing researchers across subject areas or disciplines, as different research fields have different citation and referencing behaviours. It is also of limited use for evaluating researchers in disciplines whose main outputs are books and other non-article publications, since books and book chapters are typically not well covered in citation databases.

1Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572.
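The definition above can be sketched in a few lines of Python. The citation counts used here are hypothetical, chosen to illustrate a researcher with an h-index of 8:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher has
    h papers with at least h citations each (Hirsch, 2005)."""
    # Rank papers from most to least cited.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        # The h-index is the last rank at which the paper's citation
        # count still meets or exceeds its rank.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers; eight of them have
# at least eight citations, so the h-index is 8.
print(h_index([25, 18, 15, 12, 10, 9, 9, 8, 5, 3]))  # 8
```

In practice these counts come from a citation database such as Web of Science, Scopus or Google Scholar, and each database may report a different h-index for the same researcher depending on its coverage.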

Journal Impact Factor (JIF) is a commonly used metric for evaluating journals, published in the Journal Citation Reports (JCR), which are based on the Web of Science database. It provides a functional approximation of the mean citation rate per citable item: if Journal A has an impact factor of 5, the articles it published within the last 2 years have been cited 5 times each on average. Journals therefore aim for as high an impact factor as they can achieve. Although the impact factor is widely used to evaluate journals, it has several limitations: for example, it does not distinguish positive from negative citations, and it is bound to the contents of the Web of Science database only.
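The two-year calculation can be sketched as a simple ratio; the figures below are hypothetical, chosen so the result matches the Journal A example above:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year Journal Impact Factor: citations received this year to
    items the journal published in the previous two years, divided by
    the number of citable items it published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 400 citable items in the last two years,
# drawing 2000 citations this year, gives an impact factor of 5.0.
print(impact_factor(2000, 400))  # 5.0
```

Note that the official JIF counts only citations recorded in Web of Science, which is one reason the same journal can look different in other databases.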

Source Normalized Impact per Paper (SNIP) measures the contextual citation impact of a journal by weighting citations based on the total number of citations in a discipline. This normalizes for differences in citation practices between disciplines, so that a single citation is given greater value in fields where citations are less frequent. A title's SNIP value can be found in Scopus; for a list of journal titles' SNIP values, see Leiden University's CWTS Journal Indicators.
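The normalization idea can be illustrated with a simplified sketch (this is not the official CWTS calculation, which involves a more detailed definition of a field's citation potential): a journal's raw impact per paper is divided by a measure of how densely its field cites.

```python
def snip(raw_impact_per_paper, field_citation_potential):
    """Simplified SNIP sketch: impact per paper divided by the citation
    potential of the journal's field, so journals in sparsely citing
    fields are not penalized relative to densely citing ones."""
    return raw_impact_per_paper / field_citation_potential

# Two hypothetical journals with the same raw impact of 3.0 citations
# per paper, in fields with different citation densities:
print(snip(3.0, 1.5))   # densely citing field  -> 2.0
print(snip(3.0, 0.75))  # sparsely citing field -> 4.0
```

The journal in the sparsely citing field ends up with the higher SNIP, because each citation it receives is rarer and therefore weighted more heavily.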

Impact per Publication (IPP) measures the ratio of citations to citable items for a given journal over a given period of time. IPP is the most direct correlate of the Impact Factor, but it calculates this ratio over three years rather than two, and it includes only peer-reviewed scholarly papers in both the numerator and the denominator. IPP is the foundational metric for the SNIP and was previously known as RIP (raw impact per publication). A list of journal titles' IPP values is available via Leiden University's CWTS Journal Indicators.
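As a sketch, the three-year ratio looks much like the impact factor, only with a longer window and a restriction to peer-reviewed papers; the numbers below are hypothetical:

```python
def ipp(citations_this_year, peer_reviewed_papers_prev_three_years):
    """Impact per Publication: citations in the current year to
    peer-reviewed papers from the previous three years, divided by
    the number of those papers (a three-year analogue of the JIF)."""
    return citations_this_year / peer_reviewed_papers_prev_three_years

# Hypothetical journal: 300 peer-reviewed papers over three years,
# cited 900 times this year, gives an IPP of 3.0.
print(ipp(900, 300))  # 3.0
```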

Responsible Metrics