A recent article published in Quantitative Science Studies (Antonoyiannakis, 2020) analyzing the effects of single papers on Journal Impact Factor (JIF) rankings reveals that individual articles can significantly affect a journal’s IF, including increases of upwards of 50%. At the highest level, the findings confirm what stakeholders calling for Impact Factor alternatives have long argued: the JIF is not an entirely reliable means of assessing journals or the articles in them, because outlier papers can skew it.
The implications of the findings run deeper still, exposing the positive feedback loop that the JIF-based journal ranking system perpetuates. As the QSS article argues, because the “single-paper effect on the JIF is inversely proportional to journal size,” high-IF journals can maximize their chances of staying on top by remaining small and prioritizing a few potentially groundbreaking papers. In this way, the JIF creates an uneven playing field for journals and locks scholars into a “publish or perish” race to produce the most innovative findings in order to make it into a few high-IF publications, which, intentionally or not, can lead to research spin.
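To see why the effect shrinks with journal size, a back-of-the-envelope calculation helps. This is a sketch using the simplified definition of the IF as citations per citable item (the QSS article develops the full analysis):

```latex
% A journal's IF is citations divided by citable items: f = C/N.
% Publishing one extra paper that draws c citations gives
\[
  f' = \frac{C + c}{N + 1}
  \qquad\Longrightarrow\qquad
  \Delta f = f' - f = \frac{c - f}{N + 1}.
\]
% The boost from a single highly cited paper thus shrinks as journal
% size N grows: c = 500 extra citations lift a 100-paper journal's IF
% by roughly 5, but a 10,000-paper journal's IF by only about 0.05.
```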
As the shortcomings of the JIF become more apparent, the question is whether it will ever be superseded. While no clear JIF replacement has yet emerged, new research assessment measures with the potential to challenge the primacy of the JIF are gaining ground. And shifts in funder policies, such as the UK Research Excellence Framework’s attention to broader research impacts in recent years and Plan S’ principle that its funders will “value the intrinsic merit” of scholarly works, could signal broader changes to come in research assessment models. In this blog, we overview three emerging alternative research assessment options that journal publishers should watch and ways to implement them.
Altmetrics to demonstrate article-level impacts
An emerging trend to counterbalance JIF-based research assessment in recent years has been the rise of alternative article-level impact indicators, or altmetrics. Altmetrics are gathered from mentions of research in nontraditional online outlets and can encompass anything from article views to references in social media posts to links in public policy documents. It’s worth noting that these metrics are not meant to be markers of research quality; rather, they provide a window into the wider attention a piece of research has received, demonstrating its impacts beyond citations.
Now going on ten years since the term “altmetrics” was coined, altmetrics are no longer the new kids on the block, so to speak, but many still question whether they will become a key part of research assessment. Since 2013, funding bodies have gradually begun acknowledging altmetric data, most notably the UK Research Excellence Framework (REF). While use cases for altmetric data were still being ironed out during 2014 REF preparations, it appears that altmetrics may play a much bigger role in 2021 REF analysis, with more institutions like King’s College London using them to demonstrate early research impacts. Plan S’ Criteria for Transformative Journals, which state that Transformative Journals must regularly update authors on “the usage, citations, and online attention of their published articles,” also suggest that cOAlition S could take altmetrics into consideration to some degree. Uptake of altmetrics among researchers continues to expand as well, as more institutions factor them into tenure and promotion decisions. At the same time, there has been more research into the implications of articles’ altmetric levels, including the potential of altmetrics as predictors or drivers of future citations in some fields.
There are various ways that journal publishers can support altmetrics collection and display, from including basic publishing analytics such as pageview and download counts on digital articles to using services like Altmetric and PlumX to capture broader altmetric data. For journals still working to accrue a JIF, such as newer open access titles, or for journals in disciplines with lower citation rates overall, altmetrics can be valuable markers of alternative impacts. Going a step further, journals can also use altmetric data to inform their promotion strategy.
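For publishers exploring the services route, attention data can also be pulled programmatically. Here is a minimal sketch against Altmetric’s free public v1 API; the response field names are assumptions based on the public documentation, and the example DOI is illustrative, so check current API terms and rate limits before production use:

```typescript
// Sketch: look up online attention data for a DOI via Altmetric's
// free public API (Node 18+ provides a global fetch).
async function fetchAltmetricSummary(doi: string): Promise<void> {
  // The DOI goes into the URL path as-is, slash included.
  const res = await fetch(`https://api.altmetric.com/v1/doi/${doi}`);
  if (res.status === 404) {
    // A 404 means no attention has been tracked for this DOI yet.
    console.log(`No online attention recorded yet for ${doi}`);
    return;
  }
  if (!res.ok) throw new Error(`Altmetric API returned ${res.status}`);
  const data = await res.json();
  // "score" is Altmetric's weighted attention score; the per-source
  // count fields below are assumptions based on the public docs.
  console.log(`Attention score: ${data.score}`);
  console.log(`Posts mentioning this article: ${data.cited_by_posts_count ?? 0}`);
}

fetchAltmetricSummary("10.1038/news.2011.490").catch(console.error);
```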
Article-level citation metrics
New to the realm of article-level impact metrics are efforts to display individual article citation counts and the links between citations. There’s little question that citations will remain a leading research impact indicator. However, as noted above, an article’s citation impact can be exaggerated or muted in JIF-based assessment depending on the size and scope of the journal it is published in. Individual article citation counts have the potential to provide a more accurate picture of impact.
Perhaps the most widely recognized new research index with an article citation count tool to enter the impact arena is Digital Science’s Dimensions, launched in 2018. Publishers can now adopt Dimensions Badges to display total citation counts for each of their articles, as well as relative and field citation ratios. Other tools to collect and display article citation data include Crossref Cited-by, which enables Crossref members to discover who is citing their content and to show article citation counts, and Scite, a machine-learning platform that analyzes citations to determine whether they support or contradict the cited claims and offers badges to display that data.
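For publishers wanting to surface these counts on their own platforms, the citation totals that back Cited-by matching are also exposed through Crossref’s public REST API via the “is-referenced-by-count” field. A minimal sketch (the usage DOI is a hypothetical placeholder):

```typescript
// Sketch: retrieve an article-level citation count from the public
// Crossref REST API. The count reflects citing works Crossref has
// matched to this DOI, which can differ from Dimensions or Scite totals.
async function fetchCrossrefCitationCount(doi: string): Promise<number> {
  // DOIs go into the path as-is.
  const res = await fetch(`https://api.crossref.org/works/${doi}`);
  if (!res.ok) throw new Error(`Crossref API returned ${res.status}`);
  const body = await res.json();
  return body.message["is-referenced-by-count"] ?? 0;
}

fetchCrossrefCitationCount("10.1234/example-doi")
  .then((count) => console.log(`Cited by ${count} works`))
  .catch(console.error);
```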
For journal publishers, the potential of citation tracking tools like Dimensions and Scite goes far beyond displaying article-level impacts, and that is certainly not the primary purpose of either tool. Rather, Scite aims to provide greater context around article citations, and Dimensions is positioned as a research information database with the potential to support article discovery and facilitate a more open and connected research data infrastructure. Should Scite develop a public-facing searchable database, it could soon serve a similar purpose.
The new TOP Factor
In addition to article-level JIF supplements, there is a new journal rating system to watch that is among the most direct potential JIF competitors to hit the scholarly publishing scene. In February 2020, the Center for Open Science launched TOP Factor, a journal assessment system based on the Transparency and Openness Promotion (TOP) Guidelines, which consist of eight publishing standards to improve research transparency and reproducibility. TOP Factor reflects the degree to which journals comply with the TOP Guidelines: journals are scored in each of the TOP standard areas on a scale of 0-3 (Level 0 indicating no implementation and Level 3 indicating the most stringent compliance).
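As a rough illustration of how such a rubric translates into a single journal score, here is a sketch that sums one 0-3 level per standard into a total. The standard names and the sum-of-levels aggregation are our assumptions for illustration only; consult the Center for Open Science’s published methodology for the authoritative rubric:

```typescript
// Sketch of a TOP Factor-style tally: one 0-3 compliance level per
// standard, summed into a journal total. Names and aggregation are
// illustrative assumptions, not COS's official scoring code.
type TopScores = Record<string, 0 | 1 | 2 | 3>;

const exampleJournal: TopScores = {
  dataCitation: 2,
  dataTransparency: 3,
  codeTransparency: 1,
  materialsTransparency: 1,
  designAndAnalysisReporting: 2,
  preregistrationOfStudies: 0,
  preregistrationOfAnalysisPlans: 0,
  replication: 3,
};

const topFactor = Object.values(exampleJournal).reduce(
  (sum, level) => sum + level,
  0,
);
console.log(`TOP Factor: ${topFactor}`); // 12 for this hypothetical journal
```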
The TOP Factor is certainly very young, but there are early indications that it may see wide uptake. Currently, over 1,100 journals from publishers including Nature Research and Cambridge University Press are implementing TOP, and the TOP Guidelines have over 5,000 signatories. As a scalable, qualitative journal assessment measure, TOP Factor has the potential to help level the JIF playing field, and it could become a resource for funders working to determine whether journals adhere to their policies. Journal publishers can learn more about the TOP Factor and how to start implementing it here.
Is now a turning point in research assessment?
Whether the JIF will ever be supplanted remains to be seen, but it’s clear that alternative research assessment options are emerging. As many have argued, the ongoing reign of the JIF, despite so many calling out its flaws, is a classic collective action problem. Without sufficient acknowledgment of and incentives to use alternative impact indicators, sweeping change in research culture is unlikely. However, recent funder policy changes, like those in the latest installment of the REF and the new Plan S initiative, could provide the push needed to set the slowly turning wheel of research assessment reform into more serious motion. The rise of open access and concerns about the equity of the future OA publishing landscape have also brought the need to overhaul the current research incentive structure to the fore, and that pressure will likely only continue to grow.
Are you implementing any of these JIF alternatives, or are there others that you’re focused on? We’d love to hear your thoughts in the comments section!