Limitations of bibliometrics

Bibliometrics such as the h-index or times-cited counts each provide one kind of insight into the reach and influence of scholarly work. However, because of how research is conducted and how scholarly publishing operates, these metrics are not necessarily accurate or comprehensive.

"Any metric can be gamed, especially singular metrics such as citation counts." --- Christine L. Borgman, Big Data, Little Data, No Data: Scholarship in the Networked World (2015)

Scope of the tool

Common tools such as Web of Science and Scopus provide a particular view of the bibliographic universe. This view may be limited by:

  • formats of materials included (e.g., journals, conference papers, books, book chapters);
  • subject breadth (what falls within the subject scope of the tool?);
  • subject depth (how much of a particular discipline's scholarly output is included?);
  • geographic coverage (do publications emanating from particular regions get included?);
  • language coverage; and
  • depth of backfiles (e.g., how far back are citations tracked?).

In all cases, it is important to ask what a particular tool includes and what it excludes. No single database includes every article ever published, let alone tracks all the citation relationships among them. (Publishing open access is one way of mitigating these coverage gaps, and York University Libraries can help.)

Disciplinary differences

Some disciplines, particularly in the humanities, rely more heavily on particular formats of scholarly output, especially books and book chapters. These formats are not tracked well in tools such as Web of Science and Scopus, nor is the coverage of humanities and social science journals in these tools especially broad or deep. As a result, scholars who write books may have their impact, as measured by these methods, misrepresented.

Publications in some disciplines do not rely as heavily on citations to other work. The resulting sparser network of citations between publications may not properly capture a publication's impact. Moreover, these metrics do not necessarily work well for creative works and may not reflect local cultural practices.

Researchers who publish in journals serving subdisciplinary specialties that are not well covered by the common tools will not be accurately represented by the bibliometrics those tools produce.

At the very least, these disciplinary differences mean that impact cannot be meaningfully compared across disciplines using such metrics.

Inherent limitations of the metrics

A scholar may produce few "units" of scholarly output (books, journal articles), yet this small body of work may be seminal for a particular field and have tremendous impact on a discipline's scholarship. Standardized metrics, such as the h-index, have difficulty accounting for such situations.
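To make the point concrete, here is a minimal sketch of the h-index calculation (in Python, with invented citation counts for two hypothetical scholars): because the h-index can never exceed the number of publications, a scholar with three heavily cited seminal works is capped at an h-index of three, while a scholar with many modestly cited papers scores far higher.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h publications
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scholar A: three seminal works, each cited heavily.
print(h_index([1200, 950, 800]))   # 3 -- capped by the number of outputs

# Hypothetical scholar B: twenty papers with modest citation counts.
print(h_index([25] * 20))          # 20 -- higher h despite far fewer total citations
```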

Large research teams in some disciplines may produce dozens or hundreds of research papers, each with dozens or hundreds of authors. Members of these teams often show very high impact metrics that may not accurately reflect their individual prominence within the field.
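As a rough illustration (a sketch with hypothetical numbers, not data about any real team), simple citation totals credit every co-author with the full citation count of each paper, so each member of a large consortium accumulates the same large total regardless of individual contribution; dividing credit among co-authors ("fractional counting", one alternative used by some bibliometricians) tells a very different story.

```python
# Hypothetical consortium: 200 papers, each with 300 co-authors and 50 citations.
papers = [{"authors": 300, "citations": 50} for _ in range(200)]

# Whole counting: every co-author is credited with each paper's full citation count.
whole_count = sum(p["citations"] for p in papers)

# Fractional counting: citation credit is divided among the co-authors.
fractional_count = sum(p["citations"] / p["authors"] for p in papers)

print(whole_count)       # 10000 citations credited to every one of the 300 members
print(fractional_count)  # ~33.3 citations per member when credit is shared
```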

This quote, from bibliometrics researchers, says it well:

"Quantitative metrics are poor choices for assessing the research output of an individual scholar. Summing impact factors, counting citations, tallying an h-index, or looking at Eigenfactor™ Scores (described below)—none of these methods are adequate compared with what should be the gold standard: reading the scholar's publications and talking to experts about her work." --- Bergstrom, C. T., West, J. D., & Wiseman, M. A. (2008). The EigenfactorTM Metrics. The Journal of Neuroscience, 28(45), 11433–11434. http://doi.org/10.1523/JNEUROSCI.0003-08.2008.

Responsible use of metrics

The use of common bibliometric indicators as comparable measures in evaluations of research impact and quality has been repeatedly challenged. Organized efforts to describe and incentivize the responsible use of metrics have provided practical strategies and solutions to researchers, research administrators, and other stakeholders who participate in research assessment (Morales et al., 2021).

For example, the Declaration on Research Assessment (DORA) aims to “improve the ways in which the outputs of scholarly research are evaluated” (DORA, 2012). The Declaration provides eighteen recommendations that aim to reduce dependence on the Journal Impact Factor, increase assessment of research on its own merits, and take advantage of the flexibility of online publication. Many researchers across institutions worldwide, including at York University, have signed the declaration.

The Leiden Manifesto for research metrics also demonstrates a commitment from bibliometrics researchers to reduce dependence on particular metrics. Formulated at the Science and Technology Indicators conference in Leiden and published in 2015, the manifesto offers ten principles to improve metrics-based research assessment.