The REF – a digital scholarship perspective
Having given an overview of the REF in my last post, in this one I will provide a commentary on it from a digital scholarship perspective.
As readers will probably know, I’m not a fan of such exercises in general – the inevitable experimenter’s effect comes into play, particularly when considerable money is involved, with the result that we don’t encourage new, exploratory types of behaviour. But let’s put aside these more general reservations and look at the REF proposal itself. My particular question is this: to what extent does it reward, recognise and encourage activities which we might broadly term digital scholarship?
From this perspective the REF starts well with the aim to “support and encourage innovative and curiosity-driven research, including new approaches, new fields and interdisciplinary work.” So there is a very explicit aim to ‘support and encourage’ new approaches, new fields, interdisciplinary work. I was rubbing my digital scholarship hands at this point.
Similarly, their definition of research is sufficiently broad to encompass digital scholarship: “a process of investigation leading to new insights effectively shared”. That phrase ‘effectively shared’ surely hints at blogs, social networks, data visualisation, wikis, etc.
So feeling encouraged, I read on. Sadly the rest of the document does not live up to these opening intentions, indeed it seems to actively undermine them in places.
Making digital activity explicit
Given that part of the aim of the REF is to place UK research at the fore and to demonstrate both its excellence and relevance to society, I would have expected encouraging new forms of scholarly activity to be an explicit aim, particularly as this would tie in with the aim set out in the Digital Britain report of establishing the UK as an economic hub in the digital age. Yet anything resembling digital scholarship is conspicuous by its absence from the document.
It is noticeable, for instance, that the two drivers for change to the RAE were to make it less cumbersome and to have a unified approach. These seem rather unambitious – I would have liked to see something like ‘to recognise the changing nature of research and dissemination in a digital age and to both reward and encourage this’.
For example, it does make a gesture towards moving away from the traditional article as the sole output when it states:

“All types of outputs from research that meets the Frascati principles (involving original investigation leading to new insights) will be eligible for submission. This includes ‘grey literature’ and outputs that are not in conventional published form, such as confidential reports to government or business, software, designs, performances and artefacts”
But this would have been a prime opportunity to explicitly recognise new forms of output – blogs, video, podcasts, etc.
The wrong metrics
Having looked at the possible use of metrics to inform the panels, they conclude that metrics are not robust enough, but that some citation metrics will be used. These, however, are limited to three or four pre-approved databases. This is not even a forward step for digital scholarship – by limiting citation data to these databases and then suggesting that staff are selected on this basis, they are effectively limiting outputs to journal articles.
Although they do argue that “This approach to the assessment of outputs retains scope for the assessment of grey literature and work published in non-standard forms (for which citation data are unlikely to be available)”, my suspicion is that the presence of metrics in the sciences will effectively become a selection filter.
There is no suggestion here of embracing the broader world of data and metrics in which new forms of activity would be well represented.
Despite it being mentioned in the high level aim of the REF, interdisciplinary research will not be well served. They are reducing the number of units of assessment, which will mean more researchers being forced into inappropriate categories. In addition they state that fluidity between the units should be reduced:
“For the REF we propose to have substantially fewer UOAs with fewer fluid boundaries between them than in previous assessment exercises”
The REF must be the only body that feels the way to embrace the digital age is to have less fluid boundaries.
As with the use of citation metrics restricting outputs to journal articles, they anticipate the criticism regarding interdisciplinary work:

“we aim to ensure that whichever panel interdisciplinary research is submitted to, there will be effective mechanisms for ensuring it is reviewed fairly by people with…”

But as with the outputs, the actual proposal seems to undermine this, and merely stating it as an objective will not ensure that it happens.
I think the inclusion of impact could be an area in which digital scholars excel – we have data about blog readership, numbers of views and embeds on videos, podcast downloads, etc. In Appendix D they list some of the evidence of impact that will be admissible. These are disappointingly conservative, however – e.g. staff movement between academia and industry, research contracts and income from industry, research income from government organisations, etc.
Openness doesn’t get a mention in the REF, and one would have liked to see it as a key theme. Indeed, an element of a closed world of researchers pervades much of the document, despite the talk of impact and relevance. There is no mention, for instance, of recognising data as an output that should be released. The use of citations is limited to pre-defined databases that are purchased. There is no encouragement to publish openly, or to use open APIs to explore different metrics.
Overall I found it a highly frustrating, deeply conservative and mildly schizophrenic document.
I wonder if part of this schizophrenia arises from not wanting to be seen to be directing research too much, or from a denial that such exercises do exactly that. I think they should acknowledge the experimenter effect (it is ironic that a research exercise ignores it) and embrace it. If we are to have exercises such as the REF, which effectively control the direction of UK research, then would it not be better for them to have some worthy goals and a vision that would see UK research become competitive, highly regarded and relevant?
So an opportunity to have a vision that encourages new forms of scholarship, embraces the potential of digital technologies and makes openness a central theme in UK research is missed. What we have is a curious mix of strategy document and justification of the difficulties of the process. Having decided to measure something, they now come up against all the difficulties that entails, such as imposing categories, regulating workload, etc. The process becomes the artefact, and thus instead of promoting interdisciplinary work, fluid boundaries and new forms of output – as a strategic proposal surely would – all of these are curtailed because that makes the process more manageable.
As a document it seems to consistently undermine its own ambition, and whenever it approaches new forms of scholarship it veers away at the last moment and reverts to the comfort of what it knows best.
Pretty much agree with all of that, Martin. I was completely unsurprised (as were you, I guess) that the REF was so conservative. I do see, however, why they base their evaluations so heavily on journal articles as the peer review process provides appropriate checks and balances for academic rigour, novelty and so on. This much I’m sure is obvious.
My feeling is that the whole business of academic ‘publishing’ has to be revised. The standard peer-review process is glacially slow and cannot keep up with the pace of understanding new media. It is likely to take at least 2–3 years, often longer, between deciding on a research question and the final paper being published. So we should expect to find papers on Twitter published in 2011, when we will all have moved on to some other medium. If wiki-based peer review works for high-energy physicists, why can’t it work for other disciplines?
Maybe it is because high-energy physics is more of a community. The goal of the community is to get good ideas out fast to progress the field. I get the feeling that other disciplines such as psychology (the discipline with which I’m most familiar, though the same probably applies to other social sciences) have goals that are more focused on individual research groups. Whether this is due to psychology’s lack of a weltanschauung (there, worked that one in) or due to other reasons I’m not sure. But I do know that if we want change, it is us who have to make it.
Hi Will, yes, I have a ‘reviewing peer review’ post brewing. The delay is not the only problem – there is also the issue of the peer review process becoming something of a game, I feel – we know what we are supposed to say as reviewers, and as authors we make some perfunctory changes and then it goes through. Having open comments (as in pre-publication on blogs) is a lot better, I think, because people can see the objections and your responses. The peer-review process also limits the types of papers we get – the process seems designed to strip out anything resembling interest in a paper. While this is suitable for some work (e.g. medical research), in other places it’s ok to have an opinion.
And while it is up to us to change it, if the REF is the means by which you get funding and promotion then it actively discourages this kind of experimentation and change.
I’m kinda ranty about it aren’t I?
Thanks for your thorough – if depressing – post.
My question is, what can we *do* about it?
Lobbying via social media is preaching to the converted, but it might give an indication of the strength of feeling and the sheer number of people who see the need for change.
Are there more modern and appropriate systems operating elsewhere in the world that can be used to showcase best practice, or perhaps we can link the debate more pro-actively to related high profile issues such as tuition fees/quality of university services?
Well, commenting on WriteToreply is a start, I guess. Also, I think we can start making our case within institutions. If we have highly regarded people (e.g. Michael Wesch) whom we can cite as examples of people who would be completely overlooked by REF metrics, it begins to show the flaws. And if we can work at developing our own models (open publishing, metrics) then we have some models to offer as alternatives.