"We want to make [Impact Factor] so tacky that people will be embarrassed just to mention it,” stated Stefano Bertuz (chief executive of the American Society for Microbiology) in a recent Nature news article (1). Last month, I focused on publication quality and the potential for fraud in the open access model (2) – clearly unwelcome. But the use and misuse of Impact Factors arguably has a greater negative... erm... impact on science.
Impact Factor – the origin of which dates back to 1955 (3)(4) – essentially takes the number of citations a journal receives in a given year (say, 2015) to the articles it published in the two preceding years (2013 and 2014) and divides it by the total number of articles published in those two years; the result is the journal’s 2015 Impact Factor – the average citation rate of its recent articles. For 2015, Analytical Chemistry scores 5.886 and Trends in Analytical Chemistry 7.487. But does it make sense to assign a simple number, such as Impact Factor, to something as multifaceted as “quality”? And, if so, how should it be used? Some have been clear on the answer to the latter question: not at all. Three days after Bertuzzi was quoted in the Nature article (itself a response to a preprint on bioRxiv (5)), ASM announced that it would stop supporting (and promoting) Impact Factor (6) – “to avoid contributing to a distorted value system that inappropriately emphasizes high IFs.” Apparently, ASM hopes that other high-profile journals will follow suit.

There are a couple of problems with (Journal) Impact Factors. The first: the number is very often skewed by a small number of very highly cited papers (the main argument of the bioRxiv paper). But perhaps people’s (mis)perception of the metric is the more damaging aspect; an author’s ability to publish in a “high-Impact Factor” journal can positively influence promotion and funding decisions (in cases where hirers and funders are too lazy to delve into metrics specific to the work or the individual). Using Journal Impact Factor as a surrogate for individual research (or researcher) quality is clearly flawed.
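To make the arithmetic – and the skew problem – concrete, here is a minimal sketch in Python using entirely invented citation counts for a hypothetical journal; the figures are illustrative only, not drawn from any real title:

```python
from statistics import median

# Hypothetical citation counts for 20 articles a journal published in
# 2013-2014, counting citations received in 2015 (all numbers invented).
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 8, 95, 120]

# The Impact Factor is simply the mean: total citations / total articles.
impact_factor = sum(citations) / len(citations)

print(f"2015 Impact Factor (mean): {impact_factor:.3f}")  # 13.600
print(f"Median citations:          {median(citations)}")  # 3.0
```

Two outlier papers lift the “average” to 13.6, even though the typical article here gathers about three citations – precisely the distortion that is hidden when a journal reports a single number instead of its full citation distribution.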
The bioRxiv paper – “A simple proposal for the publication of journal citation distributions” – could be a new catalyst for change, or at least discussion, and should not be taken lightly; its authors represent Nature Research, Science (AAAS), and PLOS, amongst others. Together, they suggest that greater transparency is needed. Given an inherent focus on representative sampling, quantitative data, and robust statistics, shouldn’t analytical scientists be leading the charge for change?
Rich Whitworth
Editor

References
1. http://www.nature.com/news/beat-it-impact-factor-publishing-elite-turns-against-controversial-metric-1.20224#/b1
2. https://theanalyticalscientist.com/issues/0616/open-access-fraud/
3. E Garfield, “Citation indexes to science: a new dimension in documentation through association of ideas”, Science, 122, 108–111 (1955). Available at: http://garfield.library.upenn.edu/essays/v6p468y1983.pdf
4. E Garfield, “The history and meaning of the Journal Impact Factor”, JAMA, 295(1), 90–93 (2006). DOI: 10.1001/jama.295.1.90
5. http://biorxiv.org/content/early/2016/07/05/062109.article-info
6. https://www.asm.org/index.php/asm-newsroom2/press-releases/94299-asm-media-advisory-asm-no-longer-supports-impact-factors-for-its-journals