The Higher Education Policy Institute released a study today that ranked universities by their widening participation (WP) stats. You’d expect Russell Group universities to do poorly in this, but surely the Open University, a provider set up specifically to address WP, would do well, right? Except they didn’t include it. I got into an exchange on this (HEPI Twitter is feisty!), where they defended their methodology. But the defence was itself revealing; they replied to my criticism of the OU’s exclusion saying:
“To be clear, there is not a valid way of including it in this study as Polar focuses on young people, the data was sourced from UCAS etc etc etc. Much dodgier to wrench an institution in just because we think it might do well. You’ll find lots about the OU on our site elsewhere.”
POLAR is a flawed measure of WP, particularly if you want to capture mature students. The TEF recognised this by including IMD data this year (IMD has issues too, particularly in inner-city areas where a single postcode can cover widely varying incomes, hence they include both). The message from HEPI seems to be that it’s your fault if you don’t fit their methodology. But inherent in that methodology are assumptions that, I think, undermine the very point of the study (I should note that HEPI strongly disagree with me, saying that was not the intention of this study).
The report focuses on traditional universities (Birkbeck is similarly noticeable by its absence) and traditional students (young, campus-based). If your aim is to argue that widening participation is an important metric (they are, in effect, promoting a WP league table), then that message is entirely undermined if your definition of WP is, ironically, too narrow. A study showing how providers that focus on WP perform would be more powerful. This one seems designed to get headlines (it succeeded in that) rather than to make a valuable contribution to the WP agenda. If your methodology excludes institutions that everyone thinks should be included, maybe it’s worth looking at that methodology? That’s what I’d tell a PhD student embarking on this study. The report is titled “Benchmarking Widening Participation”, so the intention is for it to become a useful metric, and if so, the exclusion of widening participation institutions from the outset is not just annoying, it’s potentially damaging.
[UPDATE – I extended this post for an article in Wonkhe]