A quick and dirty REF

I am currently working on the REF submission for my Institute. I fully understand why we do this: in cash-strapped times in particular, research money makes a big difference. It is also almost impossible not to play the game if you want to be recognised as a research institution and attract staff.

But it is a horrible, pernicious process that seems deliberately designed to kill innovation, perpetuate existing models, waste time, focus energy on gaming the system and keep publishers happy. If you wanted to design a process with the sole aim of restricting the adoption of new methods and technology, then you could do a lot worse than the REF.

Not every country has an equivalent of the REF, so we shouldn't view it as a necessary evil. But, let's say that for whatever reason, something like it is going to persist. Two questions then arise: can it be better and can it be easier?

I'm not going to answer the first of these questions. It would take too long, and I've covered it lots of times before. But in a conversation with Tony the other day we pondered how the REF might be made easier and a hell of a lot cheaper. The total estimated cost of the 2008 RAE in England was £56.7 million, which compares with the £1.5 billion of QR research funding it allocated. Estimated costs for the REF are expected to be similar. My guess is that this figure is low: it may not take into account all of the meeting and individual time associated with the process, and probably not the opportunity costs either. A review of the 2001 RAE found that, for more teaching-oriented institutions, the "RAE is over 16 times less efficient than the norm". I don't think the REF can be argued to be more efficient, as it largely follows the same methodology.

So here is my (well, mainly Tony's) proposal for a quick and dirty REF:

  • Research councils monitor where money goes and evaluate research projects
  • Publishers have databases of authors and citations

The REF then centrally data-warehouses these sources and adds in various weighting factors, creating a score for each individual, which is aggregated for the institution to give its REF evaluation (maybe by subject or maybe across a whole institution).
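To make the idea concrete, here is a minimal sketch of that scoring-and-aggregation step. Everything in it is an illustrative assumption: the field names, the weighting factors and the sample figures are all made up, and a real exercise would pull its records from research-council and publisher databases rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical merged records from research-council and publisher sources.
records = [
    {"author": "A", "institution": "X", "funding": 200_000, "citations": 150},
    {"author": "B", "institution": "X", "funding": 50_000,  "citations": 40},
    {"author": "C", "institution": "Y", "funding": 120_000, "citations": 90},
]

FUNDING_WEIGHT = 1e-5   # assumed: score points per pound of funding
CITATION_WEIGHT = 0.1   # assumed: score points per citation

def individual_score(rec):
    """Weighted combination of funding won and citations received."""
    return rec["funding"] * FUNDING_WEIGHT + rec["citations"] * CITATION_WEIGHT

def institutional_scores(records):
    """Sum individual scores per institution to give its evaluation."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["institution"]] += individual_score(rec)
    return dict(totals)

print(institutional_scores(records))  # → {'X': 21.5, 'Y': 10.2}
```

The interesting policy questions all live in the weights, of course; the mechanics of the aggregation itself are trivial, which is rather the point.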

It uses proxies for research quality, and is focused on research funding and formal publications, but hey, so is the current REF. If we're going to be forced to play that game, then couldn't we at least do it in an efficient manner? It would at least free up a lot of time to, you know, actually do research.

It would at least be worth doing a pilot based on this to see how far it differs from the very labour intensive system we have currently, don't you think?


2 Comments

  1. Colin Smith says:

    But how would the research councils go about evaluating research projects, and would it cost any less money?
    Publishers have the databases of authors and citations, but there was outcry from many academics at the start of the REF period when HEFCE wanted to place more emphasis on bibliometrics. Many didn’t want it, and HEFCE themselves dampened this down after the pilot exercise when it was found to be very hard to implement in a fair way across all disciplines.

  2. RebeccaF says:

    Another take on this idea, this time from the LSE’s ‘Impact of Social Sciences’ blog:
    http://blogs.lse.ac.uk/impactofsocialsciences/2011/06/10/ref-alternative-harzing-google-scholar/
