1. Impact from the Evaluator's Eye
If you read nothing else about the UK's Research Excellence Framework (REF2014) and the Impact criterion, then let it be this book.
Not because it is a critique of the REF2014 (it isn't), but because this book is about the most important mechanic of the REF2014, and one that has been largely overlooked: the evaluators. The evaluators had a mammoth task. With no precedent, little experience and monstrous professional and political pressure, they embarked on evaluating an object that represented a new form of influence, using the very traditional evaluation tool of peer review. Specifically, this book examines how evaluators navigated this object together, and the importance of group dynamics in attributing value to ambiguous evaluation objects such as the Impact criterion.
I do not question the evaluation outcomes, but by examining how these outcomes were reached, I do question how these evaluators worked. To clarify, the question is not simply whether these evaluators came up with the right answer, but how they worked and how, in the future, they can work smarter.
So while this is not a book about the REF2014 per se, it is a book about what goes on behind the REF2014 and its evaluation processes. This is the evaluators' story.
This book is also about Impact.
When the UK government first announced its plans to not only recognise the importance of the societal impact (Impact) of research, but to award funding on the basis of its evaluation as part of the 2014 Research Excellence Framework (REF2014), there was an explosion of dissent from the academic community. Part of this discontent was based on a fear that a focus on Impact would steer research in undesirable directions, and another part stemmed from misgivings surrounding the nature of Impact, its assessment and how value can be attributed to such a broad concept. Despite numerous studies on aspects of Impact and its evaluation, understandings of the concept and models of Impact evaluation remain merely theoretical. This book turns the focus away from Impact as the subject of an evaluation, and towards Impact as a process of valuation through the eyes and actions of REF2014 evaluators. This is because, for me, the value of Impact cannot be made independent of the process used to assess it.
So, finally, we have peer review: the domain where a value was assigned to Impact. Peer review, as with most evaluations, is a construct. It is not a naturally occurring process, but is instead constructed from the public's need for accountability and transparency, the academic community's desire for autonomy, and a political need for desirable outcomes achieved through a fair process (Chubin and Hackett). An ingrained pillar of academic life and governance, group peer review works by allowing contesting and conflicting opinions about a concept to be played out and negotiated in practice. All academics are conditioned towards the importance of peer review; we question its outcomes ("How could I not get that grant!") but accept them because we believe that our peers and experts have valued our proposals as worthy (or not) based on a shared understanding of what is considered excellent in research. This shared understanding is less clear for Impact, which is a new, uncertain and ambiguous evaluation object, one that as a concept is forever in flux, and one where our regular peers are not necessarily Impact experts. In theory, peer review appears the perfect tool for evaluating Impact as, during this flux, it provides an excellent forum where competing ideas can be aired in practice. However, as a construct, the practical necessities of the evaluation, where mechanics are used to frame and potentially infiltrate debate, call into question the purity of the process as expert-driven, as well as the suitability of peer review as a tool for valuing ambiguous objects.
Marrying these three concepts is difficult, but by considering them together I bring the field out of the theoretical and hypothetical, and into an empirical world. For this book, all previous (and current) debates about Impact, including what it is, how to measure it and how to capture it, are put to the test within a peer review evaluation panel. Within these groups, panellists interpret and define these conceptual debates and meanings of Impact among themselves before producing evaluation outcomes. For this study, I was motivated by an overarching objective: to explore the suitability of peer review as a tool for assessing notions of Impact. Specifically, I focused on how the group's dominant definition influenced the strategies developed to value Impact (the evaluator's eye); the extent to which peer review as a constructed exercise helps or hinders the evaluation of ambiguous objects; and the extent to which the Impact evaluation process was at risk of the drawbacks associated with group behaviours. By considering the attribution of value to Impact as a dynamic process, rather than one that is static and dependent on the characteristics of the submissions alone, this book shifts the focus beyond sedentary debates about the definition, nature and pathways to impact, and instead looks at how notions of research excellence beyond academia are played out within groups of experts. What emerges is an entirely different way of understanding Impact, one that considers that the real value of Impact cannot be divorced from how evaluators play out their evaluation in practice, within their groups. Viewing the challenges facing Impact evaluation at the group level, rather than solely at the level of the individual evaluator or individual case study, changes (for the better) the types of recommendations available for future assessments.
Why Study Peer Review and Impact Together?
Plenty is already known about how experts straddle the concept of excellence or scientific impact in peer review panels. Likewise, there has been a large amount of new research concerned with models of research impact assessment; however, few pieces of research bring these concepts together in order to study them empirically. As two difficult and, until now, independently considered areas of study, this book has its work cut out for it in bringing them together. However, this book also testifies that there is no way of understanding Impact that can be separated from the practice of its evaluation and valuation by peer review panels. Within panels, concepts and meanings are assigned to submissions that demonstrate Impact, and the result of these evaluations is as much to do with the social interplay of evaluators as it is with the attributes of the submissions themselves.
Too many studies have focused on the attributes of the submissions (REF2014 Impact Case Studies) and cross-referenced these with the results of the evaluation, labelling them as examples of Impact without understanding how such assessments were formed. This rather simplistic approach, in which too much attention is paid to a submission's attributes, overlooks the importance of the group-based dynamics behind the outcomes. It is somewhat foolish, and perhaps naïve, to assume that the relative value of different Impacts can be determined without considering how this value is deliberated by the peer review panel. In this way, the book takes you on a journey with the REF2014 Impact evaluators as they reason among themselves about what constitutes excellent and, by proxy, valuable Impact.