The Costs (and Benefits?) of Constant Counting

I’m a 24; well, only on Google Scholar (the more inclusive research “platform”). Otherwise, on Thomson Reuters’ Web of Science, I’m a 12. Those fluent in the language of academic metrics will know immediately what I am referring to: my h-index, a number calculated from my publications such that h of them have each been cited at least h times. In other words, according to Google Scholar, I have 24 publications that have each been cited at least 24 times. The h-index, therefore, provides a shorthand metric of academic “success,” a way of combining an assessment of productivity and purported impact in one number. Given that companies like Google and Thomson Reuters make these h-indices very easy to calculate online (in fact, they calculate them for you), and therefore very easy to compare with those of your friends and colleagues, they are particularly seductive. But like all forms of seduction, our involvement in these comparisons brings both pleasure and pain, and can disguise truths as well as reveal them. In this column I reflect on the growing trend of using metrics to evaluate academic success, summarize what I learned from asking department chairs about the role of metrics within their departments, think through the costs of constant counting, and discuss ways of keeping metrics “in their place.”
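
For readers less familiar with the definition, it can be made concrete with a short sketch of my own (the column itself contains no code, and the citation counts below are hypothetical): given a list of per-publication citation counts, the h-index is the largest h such that h of those publications have at least h citations each.

```python
def h_index(citations):
    """Return the largest h such that h publications have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this paper still "supports" an h of at least `rank`
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4, and 3 times give h = 4,
# because four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

The same list of papers can therefore yield different h-indices on different platforms simply because each platform counts a different set of citations, which is the point the comparison between Google Scholar and Web of Science above is making.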

Certainly using quantitative tools to measure research productivity and impact is nothing new. A recent Professional Geographer[1] article traces the first attempts to quantify rankings of departments and the influence of individual faculty members to the 1960s; before that, comparisons were based on reputations and the judgments of scholars. Tracking citations of articles was originally a tool for understanding the history of ideas; it was only with the development of commercial online databases such as Thomson Reuters’ Web of Science in the late 20th century (and later Elsevier’s Scopus and Google Scholar) that these measures took on a salient currency within the global, neoliberal, academic marketplace.[2] Much has been written about the ways these “platforms” differ (e.g., the Web of Science has only recently and very selectively started including books, which is why my h-index there is considerably lower than on Google Scholar, a database that includes books and book chapters), their biases (e.g., limited if any inclusion of non-English-language journals), and their inherent limitations (e.g., counting citations is only a measure of popularity, not necessarily impact). As geographers it is particularly important to understand these differences, biases, and limitations, given that we inhabit a world that includes different forms of knowledge production and dissemination and different cultures of citation.

To begin to understand the impact of these metrics on Geography I sent out a message on the department chairs’ listserv asking whether they had noticed an increased attention to metrics in their department and university, and if so, in what ways those metrics were affecting their department. I heard from 25 department chairs, and overwhelmingly (70%) the response to the first question was yes (a noticeable increase in attention to metrics); some smaller and/or more teaching-oriented universities were the only ones to respond no. For those who answered affirmatively, the impact of using these metrics varied considerably; most interesting were the ways in which they could be manipulated to support particular goals. Not surprisingly, some strong geography departments were happy to use measures like the h-index to promote their department vis-à-vis weaker departments within their universities, while departments hoping to move into the top tier were using metrics to gain national attention by comparing their faculty and department to nationally ranked departments. So it was the level at which these metrics were being used as forms of comparison and competition that mattered. Many department chairs were vehement that the most detrimental form of comparison was intra-departmental, that is, using metrics to compare faculty with each other. The seduction of metrics like the h-index is the ease with which differences in citation cultures and forms of knowledge production can be flattened within a second, creating false comparisons (what one department chair referred to as “caustic” problems) and thereby undermining a key strength of geography, its intra-disciplinarity.

All department chairs mentioned the ways in which such metrics need to be “put in their place” and considered against and within other, more holistic and qualitative forms of judgment and comparison. Some questioned their use altogether. Are there alternatives? Well, don’t be fooled by the interestingly named altmetrics, a fairly new term that refers to metrics based on, for example, the number of times an article has been viewed online, downloaded, tweeted about, or mentioned in blogs or Google+ (for more see: http://chronicle.com/article/Rise-of-Altmetrics-Revives/139557/). These metrics certainly provide data much faster than citation indices, but what they really measure is even harder to discern.

Some have questioned what living within an audit culture and the fast-paced neoliberal academy is doing to us as scholars and people. Sociologist Roger Burrows suggests that at the root of many feelings of discomfort about this constant counting and comparison is the fact that we are all implicated in it. Even when we “attempt to resist,” Burrows argues, “we know that not playing the ‘numbers game’ will have implications for us and our colleagues: ‘play’ or ‘be played.’”[3] Geographers too have questioned the costs of counting: the quality of scholarship it produces, and the potential harm it can cause to ourselves and our communities as metrics push us to work faster, toward instrumental goals (publishing a lot in high-impact journals), and to compare ourselves constantly to our peers. Working and writing collectives have been formed to challenge the individualism and competitiveness built into metrics like the h-index by emphasizing how all scholarship is ultimately collaborative (see for example the SIGJ2 Writing Collective: http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8330.2012.01011.x/full). And members of The Great Lakes Feminist Collective (http://gpow.org/collective/) have analyzed the sources and effects of living with constant counting, and put forward 11 strategies for slow scholarship as collective action in order to, in their words, “recalibrate and change academic culture.”[4]

As discussed in previous columns, we are a discipline that must always act strategically to be in the ‘game’ (and therefore we are compelled to keep counting), while understanding that our strengths lie in our differences. Some of us work individually, some in teams; some work on books that have relatively lengthy gestation periods, while others can produce articles within weeks. Some will compete for the highest impact, while others will challenge the system that creates the competition. As long as we avoid the “caustic” and invidious interpersonal and intra-departmental comparisons and as long as we understand and carefully weigh the costs and benefits of constant counting and comparing, we should be able to live and care for each other in this academic community we call geography.

In the course of writing this column I found errors in my Google Scholar list of publications; some articles were included that I did not author (my name was associated with those pieces and was somehow re-formatted as that of an author). When I figure out how to edit the list, I believe I will no longer be a 24. I can live with that, but then I’m an established scholar and I don’t have the weight of tenure or promotion hanging around my neck. For many others, the costs of constant counting are indeed very high. I’m interested in hearing your stories and your thoughts on this important matter.

DOI: 10.14433/2015.0009


 
