First up, exciting news! GO-GN have published their annual research review. Led once again by Rob Farrow, this contains reviews of a number of papers in the open education space. It’s not intended to be an exhaustive literature analysis, but rather a selection of articles that we think cover some of the main areas. They are reviewed by members of the network and it’s an excellent example of the many hands make light work principle of co-production. It’s worth a read all the way through for anyone interested in OER or OEP.
For the review I took on three MOOC papers. Individually they were all fine papers: well written and well researched. But overall, I was left with the feeling of “is that it?” Next year marks a decade since “The Year of the MOOC” and after all that disruption (sooo much disruption), and the predicted death of universities, what we actually have is less “Massive” and more “meh”. Allow me to explain…
The first paper was Castaño-Muñoz, J. & Rodrigues, M. (2021). Open to MOOCs? Evidence of their impact on labour market outcomes. This looked at whether participation in MOOCs has an impact on employability for the participants, which you’ll recall was a big claim of MOOCs. Their method was to focus on two very employment-oriented MOOCs in Spanish, using two surveys: one in 2015 prior to the MOOC and one again in 2017. Longitudinal studies like this are rare, so it’s a very useful piece of research to undertake. Their main findings are that MOOC participation had no impact on wages but did increase the likelihood of workers continuing to work at the same firm and performing the same job.
The second paper was de Souza, N. S. & Perry, G. T. (2021). Women’s participation in MOOCs in the IT area. The authors examined data from over 4,000 learners across four MOOCs on a Brazilian platform, to examine whether women’s student profiles, persistence and grades differed from those of men studying the same MOOCs. In general they found that gender was not a factor in motivation, performance and persistence. This is an example of where a no-difference finding is actually interesting (although the authors note that the common trait across MOOCs of participants being relatively privileged was seen here also).
The third paper was Li, H., Zhao, C., Long, T., Huang, Y., & Shu, F. (2021). Exploring the reliability and its influencing factors of peer assessment in massive open online courses. This paper examined the reliability of peer assessment in MOOCs. You may remember that the large scale and lack of formal support in many MOOCs led many people to propose peer assessment as a solution to the scale issue. By examining over 5,700 submissions, across 18 assignments in three different MOOCs on a Chinese platform, the authors investigated the reliability of peer assessment in the MOOC context. They report that peer reviewers tended to give scores at the extremes and that peer assessment was not particularly reliable. They conclude that peer assessment should not be used as a summative assessment method in MOOCs. They point out that this is not the case in formal education, which is perhaps not a surprise, as peer assessment requires advice and support to get right.
I admired all three of these papers; they were meticulous and had interesting findings. They’re my kind of papers. But given the hype we had all that time ago, collectively they raise the question: was it all worth it? We have online courses that don’t revolutionise employment, don’t democratise education and whose pedagogy is flawed. Of course, you could find examples to counter these; they weren’t selected to represent all MOOC findings. But they do feel typical of the sort of results we get now. And that should be a lesson for the next ed tech revolution – when it all washes out you’ll have some quite interesting findings, but hardly any of the initial claims will still be left standing. Vive la revolution.