
More and more scholarly literature is published every year, so it can be a challenge to keep up with developments in your field, much less developments in other fields that might interest you. Scholars have always used filters to choose what to read, perhaps preferring certain journals over others or taking the recommendations of colleagues. However, new ways of measuring scholarly output, called “altmetrics,” may provide better ways of identifying the most influential and important new scholarship. For more information on this topic, please see our Savvy Researcher workshop entitled "Understanding Impact" as well as its companion guide.

What are altmetrics?

Altmetrics, or “alternative metrics,” are an emerging set of methods for measuring the use and importance of scholarly articles, particularly in the sciences. In contrast to traditional bibliometrics such as the Impact Factor, altmetrics provide article-level data drawn from new electronic sources of information: the number of downloads and page views from a publisher, repository, or online reference manager such as Mendeley, or the amount of discussion generated in online venues such as Twitter and blogs.

What's Wrong with Traditional Metrics?

What’s wrong with traditional metrics such as Impact Factor?

Impact Factor (IF) has been important in assessing the scientific and technical literature ever since it was introduced in 1955. Working from the assumption that citations indicate influence, it measures the number of times that a journal’s articles from the previous two years are cited in a given year, divided by the total number of citable items the journal published in those two years. It has been used to compare the importance of different journals when considering venues for publication, to choose which journals to read in a crowded publishing environment, or (controversially) as a proxy for judging the importance of an individual scholar’s work.
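The two-year calculation described above can be written out as a short sketch. The journal figures below are hypothetical numbers chosen for illustration, not real data:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year Impact Factor for year Y: citations received in Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A journal whose 2011-2012 articles were cited 450 times in 2013,
# having published 150 citable items in 2011-2012:
print(impact_factor(450, 150))  # 3.0
```

Note that, as a ratio of totals, the score says nothing about how those 450 citations are distributed across the 150 items, which is the root of the article-level criticism below.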

Although it is widely used in the sciences and is integrated into Thomson Reuters’ Web of Knowledge database, its limitations have been the topic of debate. Because it measures citations at the level of the journal, it cannot reliably be used to draw inferences about the impact of a particular article or author. (Critics point out that “typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor.”)

Even when used for the purpose of ranking journals, it has faced criticism on several counts.

Article-level metrics 

Other methods exist for measuring the impact of an article, journal, or author using citations, including counts of how often an individual article has been cited. However, the fact of citation is not necessarily proof of influence or approval: one report uses statistical methods to estimate that 80% of the sources in bibliographies are never actually read by the people who cite them, and citation practices vary widely across disciplines. Other citation-based metrics, such as the h-index and the Eigenfactor, attempt to take these complications into account.

Benefits and Drawbacks of Altmetrics

How do altmetrics attempt to compensate for the limitations of traditional metrics?

Altmetrics make use of the many kinds of data besides citations that can tell us about the importance of articles. Before the internet, the only quantifiable measure of an article or journal’s impact was the number of times it was cited. Today, there are a number of different data sources, from page views, downloads, and comments to the number of mentions on blogs or social media. The ability to combine different kinds of data from multiple sources promises to give a more fine-grained picture of an article’s influence, and to make that picture available much more quickly. Because of the slow pace of academic publishing, it can take months or years before an article begins to show up in journal citations in any number. The relative speed of altmetrics means that they can be useful guides to the current literature, and their association with open access and open source means that the data they generate can be personalized and repurposed.
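The idea of combining several kinds of data into one indicator can be sketched as a weighted sum of per-source event counts. The source names and weights below are illustrative assumptions only, not the formula of any real altmetrics provider:

```python
# Hypothetical weights: a blog post is treated as a stronger signal of
# engagement than a download or a tweet. These values are invented.
WEIGHTS = {
    "downloads": 0.5,
    "tweets": 1.0,
    "mendeley_readers": 2.0,
    "blog_posts": 5.0,
}

def composite_score(counts):
    """Weighted sum of per-source event counts for one article.
    Unknown sources contribute nothing."""
    return sum(WEIGHTS.get(source, 0) * n for source, n in counts.items())

print(composite_score({"downloads": 200, "tweets": 30, "blog_posts": 2}))  # 140.0
```

The open question that real services wrestle with is precisely how to choose and justify such weights, which is one of the limitations discussed below.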

What are the limitations of altmetrics?

Altmetrics are still in an experimental phase. There is not yet widespread agreement about how to choose, analyze, and combine sources of data to provide a reliable indicator of influence. Likewise, because they rely on new data sources, they cannot be used to make comparisons with the past. Finally, whereas traditional metrics rely on data pulled from the scholarly literature, altmetrics draw on sources like blogs and Twitter, whose importance is growing but whose role in scholarly communication is still changing and subject to debate.


Altmetric
According to its website, the “Altmetric score is a quantitative measure of the quality and quantity of attention that a scholarly article has received.” It offers scores for nearly 400,000 articles from 8,000 journals, based on the number of times an article is mentioned in social media.
ImpactStory (formerly Total-Impact)
ImpactStory is a web-based application that aggregates metrics from a variety of sources, including Mendeley, PLoS, Scopus, and Wikipedia, related to your research and publications.
Plum Analytics
Plum is a for-profit company that offers analytics for 20 different kinds of “artifacts,” including journal articles, book chapters, datasets, presentations and source code. It aggregates data based on a variety of sources at a variety of different levels, including artifact, author, lab, department, and journal.
Public Library of Science (PLoS)
PLoS is a nonprofit publisher of scientific and medical literature, including the open access journal PLoS ONE. They provide article-level metrics based on usage, citations, comments and ratings, social networks, and coverage in blogs and other media.
Publish or Perish
A free, downloadable program created by Anne-Wil Harzing that uses Google Scholar data to compute statistics for an author, including the h-index, g-index, total publications, total citations, and average citations per publication or per year.
Allows you to quickly see article-level metrics for individual authors; organized using a baseball card metaphor.
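The h-index that Publish or Perish reports has a simple definition: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch, using made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Like the Impact Factor, the h-index compresses a whole citation distribution into one number, which is why the article-level and altmetrics approaches above try to preserve more of the underlying data.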

Further Reading

Howard, J. (2012). Tracking scholarly influence beyond the impact factor. Chronicle of Higher Education. http://chronicle.com/blogs/wiredcampus/tracking-scholarly-influence-beyond-the-impact-factor/35565

Henning, V., & Gunn, W. (2012, September 6). Impact factor: Researchers should define the metrics that matter to them. Guardian. http://www.guardian.co.uk/higher-education-network/blog/2012/sep/06/mendeley-altmetrics-open-access-publishing

Neylon, C., & Wu, S. (2009). Article-level metrics and the evolution of scientific impact. PLoS Biol, 7(11), e1000242. doi:10.1371/journal.pbio.1000242. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000242

Fersht, A. (2009, April 28). The most influential journals: Impact Factor and Eigenfactor. Proc Natl Acad Sci, 106 (17), 6883–6884. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2678438/

Haustein, S., Peters, I., Bar-Ilan, J., Priem, J., Shema, Terliesner, J. (2013, April 6). Coverage and adoption of altmetrics sources in the bibliometric community. ArXiv. http://arxiv.org/abs/1304.7300

McFedries, P. (2012, July 30). Measuring the impact of altmetrics. IEEE Spectrum. http://spectrum.ieee.org/at-work/tech-careers/measuring-the-impact-of-altmetrics

Priem, J. Altmetrics: A Manifesto. http://altmetrics.org/manifesto

Priem, J., H. Piwowar, B. Hemminger. (2012, March 20). Altmetrics in the wild: using social media to explore scholarly impact. ArXiv.  http://arxiv.org/html/1203.4745v1

Roemer, R.C., R. Borchardt. (2012) From bibliometrics to altmetrics: a changing scholarly landscape. College and Research Libraries News, 73 (10), 596-600.

Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498–502. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126010/

Tananbaum, Greg. (2013, April 16). Article-level metrics: a SPARC primer. Scholarly Publishing and Academic Research Coalition.  http://www.sparc.arl.org/sites/default/files/sparc-alm-primer.pdf