I’ve been reading through bits of Measure and Value, edited by Lisa Adkins and Celia Lury. The collection can be purchased as a book and is also available online here. As well as having a great cover, it contains some really interesting articles. In fact, it’s a high-quality collection built around important questions about how social scientists measure things and understand the relative value of the things they measure. These might seem relatively esoteric questions, but the collection shows how they sit at the heart of how social science is done – in the book’s introduction the editors make a compelling case for returning to the issues of value and measure, noting the particular need to do so as new types of data emerge. I wanted to write a short post, though, on one of the papers, which is concerned with research assessment exercises – or the research excellence framework, as it is now called. In this paper the focus on systems of measurement is turned back upon sociology itself, to understand how value is distributed through the assessment of research outputs.
In this piece, titled ‘Measuring the value of sociology? Some notes on performative metricization in the contemporary academy’, Aidan Kelly and Roger Burrows analyse the 2008 RAE outcomes. These, they argue, represent ‘one of the most important metrics’ of the many that now shape higher education. Following a history of research assessment, which includes a table showing all the sociology entries across the four previous research assessment exercises, the article argues that there are ‘shadow metrics’ that can be located in the 2008 research assessment outcomes. These shadow metrics can be shown to relate closely to the final outcomes of the exercise, and a complex argument is made about how they might be both product and part of the metricisation and performance of higher education.
These shadow metrics include the number of people included in a submission, the level of research income and, finally, citation metrics. Each has a different apparent level of influence on the final outcome and grade of the research assessment exercise; it’s probably best to read the piece itself to see the details of the analysis. One of the many interesting observations it offers is a list of the journals where sociologists should publish if they want to increase their chances of doing well:
The piece is about sociology, but anyone with an interest in developments in higher education will be able to relate to its findings and observations.
I might post some short reflections on other papers in the collection if I get the chance.