Measure for Measure: Altmetrics (June 2017): “The Number That’s Devouring Science”

By Beth Juhl

“The Number That’s Devouring Science”1

The search for alternative metrics can be traced to the growing discontent with the use and misuse of the Journal Impact Factor developed by Eugene Garfield2—a measure that was itself designed to filter the proliferating number of scientific publications for quality and content relevant to scientific researchers.3 Garfield reiterated in numerous interviews and articles that the Journal Impact Factor, which measures the average rate at which a journal’s recent articles are cited, should never be used to assess the relative importance of particular articles, the stature of specific scholars, or the research rankings of organizations (departments, labs, or other research institutions).4 Yet numbers seem so scientific, unbiased, and straightforward that many researchers and their host of assessors cannot resist their allure. The Journal Impact Factor and its trademarked companion measures—the Immediacy Index® and the Cited Half-Life®—are central to the Journal Citation Reports (JCR) and the Web of Science citation-index databases, published for many years by Thomson Reuters, whose intellectual property division became a separate company, Clarivate Analytics, in October 2016. The citation indexes and measures developed by the predecessor Institute for Scientific Information (ISI), founded in 1960 by Garfield, were acquired by Thomson in 1992 to become Thomson ISI.5 This toolkit continues to inform librarians’ collection decisions and the evaluation of faculty tenure and promotion dossiers, and it is still misguidedly consulted by many to compare journals across disciplines with very different citation and publication patterns.
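To ground the discussion, the standard two-year formula behind the Journal Impact Factor can be stated as a simple ratio; the sketch below uses generic year labels rather than any particular edition of the Journal Citation Reports.

```latex
% Two-year Journal Impact Factor for year Y (generic labels, for illustration)
\[
\mathrm{JIF}_{Y} =
  \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
       {\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
```

Because both numerator and denominator depend on how quickly and how often a field cites its recent literature, the resulting ratio is not comparable across disciplines, which is precisely the misuse Garfield warned against.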

New tools and methods to analyze citation data emerged throughout the 2000s. In 2004, Elsevier launched a new database, Scopus, as a direct competitor to Web of Science; Google Scholar appeared on the scene the same year. All three services offered large, cross-disciplinary article search engines, and each could be mined for citation analysis at the author, article, journal, subject-area, and (in the case of Web of Science and Scopus) institutional level. Physicist Jorge E. Hirsch (Univ. of California San Diego) proposed the h-index in 2005 as a single number capturing both a scholar’s productivity and citation impact: a researcher has an h-index of h if h of his or her papers have each been cited at least h times (a brief computational sketch appears below).6 The h-index can be applied to entire journals as well as to authors, but like the Journal Impact Factor, it should not be compared across disciplines with different citation and publication patterns. The nascent cross-disciplinary indexes Google Scholar and Scopus quickly adopted the h-index, as did Web of Science. Later in the decade, bioinformatics professors Carl Bergstrom and Jevin West (both, Univ. of Washington) proposed the Eigenfactor®, another journal-level metric, which draws on five years of Journal Citation Reports data and weights citations by the influence of the citing journal. Their lab went on to produce the Article Influence® score, which gauges the average influence of a journal’s individual articles.

Both not-for-profit and commercial publishers explored new ways to help their authors and audiences visualize citation patterns and relative influence. Open-access publisher PLOS (known variously as PLoS or the Public Library of Science) introduced its Article-Level Metrics (ALMs) in 2009, tracking views not only on PLOS but also on the PubMed Central full-text platform (and the larger PubMed database). In 2015, the National Institutes of Health introduced a beta version of a new bibliometric tool, iCite. Leveraging PubMed article metadata, iCite presents citation reports for individual articles, authors, departments, labs, or institutions: users can upload up to 1,000 PubMed IDs (PMIDs) to view data about citations to those articles from other publications indexed in PubMed. Even as this article went to press, Elsevier announced a new journal-level metric, CiteScore, based on Scopus metadata covering more than 20,000 journals and offered as a more encompassing, freely available alternative to the Journal Impact Factor. The CiteScore suite of metrics covers all articles and communications in a journal, not just the research articles analyzed by the Journal Citation Reports.

“Simple metrics tend to count what is easily counted, such as articles and citations in established journals, rather than what is most valuable or enduring.”7
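As noted above, the h-index reduces to a simple operation on a sorted list of citation counts. The short Python sketch below illustrates the calculation Hirsch describes; the citation counts are invented purely for illustration and are not drawn from Web of Science, Scopus, or Google Scholar.

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers by a single author.
papers = [45, 32, 20, 12, 9, 7, 4, 3, 1, 0]
print(h_index(papers))  # 6: six of the papers have been cited at least 6 times each
```

The same calculation underlies the journal-level h-index mentioned above; only the unit of analysis changes.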

However innovative and revealing these emerging journal and article metrics may have been, many felt they fell frustratingly short of presenting a full picture of scholarly influence and impact. First, the scientific peer-review and publication process remained cumbersome, inefficient, and slow, unable to keep up with the accelerating pace of research. Second, citation-analysis tools for monographs were inadequate or nonexistent, even with the introduction of the Book Citation Index to the Web of Science suite of databases, largely leaving humanities scholarship out of the frame. Finally, the existing citation tools did not address the growing body of published material preserved on the web, or explain how research publications might make (and be shown to have made) an impact outside of academe. Even in journal-based disciplines, the growth of web forums changed the way scholars responded to research, offering faster, more interactive means to engage with authors’ recent publications and ideas. Traditional bibliometrics needed to expand to encompass webometrics,8 measuring the sharing and linking of research in a networked environment. As every academic field began to adopt social media for scholarly communication, tweets and likes joined the mountain of saves, shares, links, and downloads that might be mined as indicators of influence and impact.

In October 2010, Jason Priem (then a graduate student at the Univ. of North Carolina-Chapel Hill, now head of the firm Impactstory, which he cofounded with Heather Piwowar) and his colleagues posted “Altmetrics: A Manifesto.” They appropriated a Twitter hashtag for the name and claimed the domain altmetrics.org (although the URL today is mostly a placeholder for the essay). The neologism coined by Priem (with Dario Taraborelli, Paul Groth, and Cameron Neylon) served both as a summary of the widespread restiveness with traditional bibliometrics and as a call to action to build better tools. Like Garfield’s 1955 proposal for citation indexes (as a means to “eliminate the uncritical citation of fraudulent, incomplete, or obsolete data,” with the goal “to establish the history of an idea”),9 altmetrics is all about filters, albeit different ones. The manifesto’s authors characterized traditional filters such as citation counting, the Journal Impact Factor, and the peer-review process as dated, narrow, and threatened with becoming “swamped,” and they sought new ways to “reflect and transmit scholarly impact” while sifting through the exploding volume of academic literature.10 Rejecting journal-level measures and proprietary data sources in favor of article-based metrics drawn from relatively open data sets, such as the citation- and document-management platforms Mendeley and Zotero, the altmetrics proponents also sought methods to evaluate emerging formats leading up to publication, including raw research data, blogs, comments, and other informal—but more immediate—communications. Although Priem and his colleagues did not propose what form these new tools would take, the cogency of their rallying cry could be seen in the citation counts for their manifesto on Web of Science and Google Scholar Citations, each of which traces only a trickle of formal citations to the piece before 2013. Meanwhile, the Twitter archive reflects the immediate flurry of discussion, linking, and sharing of the altmetrics manifesto. Traditional metrics were too slow to relay the story of this new idea.

The altmetrics proposition emerged at a time when the open-access movement had moved past its own manifestos and toward implementation. The two initiatives are intertwined and in many ways mutually beneficial, as open, freely accessible articles may be more widely shared and discussed, leading to more appearances in the venues measured by altmetrics.11 New measures would logically allow scholarly societies, publishers, and funders to show the reach, immediacy, and impact of their new publication models. Launched at the Annual Meeting of the American Society for Cell Biology in 2012, the San Francisco Declaration on Research Assessment (DORA) outlines the limitations of the Journal Impact Factor, and its authors have called for shifting from journal-level to article-level metrics, as well as for examining other plausible measures of output, such as research data sets released, the number and quality of students trained, and funding awarded. As documented in the DORA News Archives, the declaration also demanded open access to citation data for external analysis. That same year, the Public Library of Science—the open-access journal publisher of PLOS Biology, PLOS ONE, and other similarly titled PLOS journals—launched a specialized PLOS Collections: Altmetrics website with articles drawn from the site’s archives and blogs. Soon after, the National Information Standards Organization (NISO) received a grant in 2013 from the Alfred P. Sloan Foundation to study the promise of altmetrics, as described in NISO Alternative Assessment Metrics (Altmetrics) Initiative, and in 2016 released its recommended practices in Outputs of the NISO Alternative Assessment Project. The validity and scalability of altmetrics remain hotly contested: debates continue over what to measure, how to measure it, how to address bias toward more recent publications, and how to answer the perhaps inevitable charge that altmetrics reward narcissistic utterances or glib ephemera. But it appears, for now, that alternative metrics have found a place at the scholarly communications table, as evidenced by their adoption by publishers, funders, and universities.


1. Monastersky, “The Number That’s Devouring Science.”

2. Garfield, “The Impact Factor.”

3. Garfield, “The History and Meaning of the Journal Impact Factor.”

4. Beyond Bibliometrics, 115–121.

5. For a comprehensive history, see De Bellis, Bibliometrics and Citation Analysis.

6. Hirsch, “An Index to Quantify an Individual’s Scientific Research Output,” 16569.

7. Borgman, Scholarship in the Digital Age, 239.

8. Björneborn and Ingwersen, “Toward a Basic Framework for Webometrics.”

9. Garfield, “Citation Indexes for Science,” 108, 110.

10. Priem et al., “Altmetrics: A Manifesto.”

11. Mounce, “Open Access and Altmetrics.”