Editorial
Can you measure the impact of your research?
Laura Feetham, BA, MSc
Assistant Editor, Veterinary Record, BMA House, Tavistock Square, London WC1H 9JR
e-mail: lfeetham{at}bmj.com

The number of research papers published each year is growing. A recent study by Bornmann and Mutz (2014) estimated that global research output has been increasing by between 8 and 9 per cent every year since the end of the Second World War. If that figure is accurate, it equates to a doubling of research output every nine years.

With so many findings each year, it is useful to be able to assess the impact of published research. Research funding is not infinite, and funders and institutions often need to make tough decisions about how to distribute money for projects. As part of their decision-making process, they are likely to take into account a range of factors, including the tangible impact that the research could have in the field.

When considering research impact, many people first think of the Journal Impact Factor (JIF). However, this was never designed as a tool to gauge the impact of individual pieces of research, but rather the impact of academic journals. It was first devised in the 1960s as a means to help research libraries differentiate between journals when deciding which ones to subscribe to.

A journal's impact factor is based on two elements: the numerator is the number of citations received in a given year by articles published in the preceding two years, and the denominator is the total number of articles and reviews published in those same two years. So it is a measure of how frequently articles published in that journal are cited, while controlling for the size of the journal's output.
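To make the calculation concrete, it can be written as a simple ratio; the year and the figures below are invented purely for illustration:

\[
\text{JIF}_{2014} = \frac{\text{citations received in 2014 by items published in 2012 and 2013}}{\text{articles and reviews published in 2012 and 2013}}
\]

For example, a journal whose 2012 and 2013 papers attracted 500 citations in 2014, and which published 200 articles and reviews over those two years, would have a 2014 impact factor of \(500/200 = 2.5\).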

A number of criticisms have been levelled at the JIF. For example, as authors are more likely to cite their own previous work, papers with more authors tend to receive more citations. Because the average number of authors per paper varies greatly by discipline (papers in the social sciences tend to have one or two authors, while papers in the fundamental life sciences tend to have far more), impact factors become skewed, and JIFs are only comparable between journals in the same academic discipline (Amin and Mabe 2000). Other criticisms concern the way JIFs are calculated and which articles are counted in the numerator or the denominator, and the fact that a paper may be cited repeatedly for negative reasons, yet those negative citations still improve the journal's impact factor.

Despite being designed as a tool for comparing journals, JIFs began to be used to measure other things, namely the impact of individual research studies and even the merits of the researchers themselves.

From the 1980s onwards, the JIFs of the journals in which researchers published their findings gradually achieved primacy for institutions, funders and researchers themselves. More and more, impact factors were seen to play a role in grant and funding decisions, promotions and other aspects of academic careers. In an editorial in Science in 2013, Bruce Alberts described how he had been sent ‘curricula vitae in which a scientist annotates each of his or her publications with its journal impact factor listed to three significant decimal places’. In some countries, research published in journals with an impact factor of less than 5 is still classed as being of zero value by institutions, funders and governments (Alberts 2013).

Eventually, a backlash formed against what many saw as the systematic misuse of a statistically fallible metric. In December 2012, a group of concerned scientists gathered at a meeting of the American Society for Cell Biology. Five months later, the San Francisco Declaration on Research Assessment (DORA) was released (http://am.ascb.org/dora). The aim of DORA was to put an end to the practice of using JIFs to judge the merit of individual researchers. At the time of writing, the declaration had been signed by 12,000 people worldwide.

So what other tools are available to measure research impact?

In recent years, the internet has changed the way that research is shared and distributed. Almost all research papers are now published online in one form or another, and many are shared and discussed online by academics and the public alike. One tool which takes online sharing into account is Altmetric, which tracks online activity around scholarly papers. It allows authors, publishers and readers to see which publications are receiving the most attention on the internet and how they are being shared.

Veterinary Record, along with all journals published by BMJ, publishes an Altmetric widget alongside all of its online research articles (Fig 1). The widget shows how frequently the article has been reported in the press, shared on Twitter and Facebook, blogged about and more.

FIG 1: Altmetrics for an article in the British Journal of Sports Medicine entitled ‘It is time to bust the myth of physical inactivity and obesity: you cannot outrun a bad diet’. The Altmetric widget appears alongside the online version of the article and shows how much the article has been discussed and shared online

A key aspect of Altmetric is that it gives a virtually instantaneous view of how research is being received. While citations may take years to accrue, a paper could be shared and read thousands of times in just a few hours.

But altmetrics are by no means a perfect measure of research impact. Certain topics attract more online attention simply because they are easier for non-academics to grasp, and some articles lend themselves more readily to sharing among the public. At the time of writing, the Altmetric Explorer (an analytical tool) showed that an article entitled ‘Pathology in the Hundred Acre Wood: a neurodevelopmental perspective on A. A. Milne’ (Shea and others 2000) had an Altmetric score of 3211, making it the paper with the 15th highest score since data started being gathered in 2009. At the same time, the article confirming the discovery of the Higgs boson (Atlas Collaboration 2012) had a score of just 644.

Despite this, Altmetric fills an important niche in terms of measuring impact. The effective and wide communication of research findings is becoming increasingly important to researchers and funders alike, and Altmetric can allow them to gauge how successfully their results have been disseminated.

Another measure of access to published research is page view and download statistics for online papers. For Veterinary Record articles, these can be accessed by clicking the ‘Article Usage Statistics’ link next to every online article. This gives a breakdown of how many times the abstract and full text versions have been viewed online, as well as the number of PDF downloads. While these statistics can indicate how users have accessed the research, they don't capture what readers did with the findings afterwards, whether they used them to inform further research or, in veterinary science for example, their own clinical practice.

None of the tools and techniques currently available for measuring research impact is perfect. All have drawbacks and give only an incomplete picture of what happens to research findings once they are published. An ideal measure of research impact would take into account not only how results are shared, cited and disseminated, but also the real-world effects of the findings, be they behavioural changes, lives saved or a better understanding of how the world works.
