Well, my previous post on data for MOOC completion rates caused a bit of a kerfuffle on Twitter. It was interpreted by some as saying "ONLY completion rates matter", and as failing to take into account other factors, such as what learners who don't complete get from a MOOC. That seems rather like criticising Alien for not being a rom-com, to my mind – they're doing different things. This research was showing one aspect with the quantitative data available. It is part of a bigger picture which ethnographic studies, surveys and more data analysis will complete. It wasn't attempting to be the full stop on MOOC research.
Anyway, here is another graph that Katy created, showing attrition rates of active users (those that come into the course and do something, not just those who complete assessments) across disciplines:
That's a pretty consistent pattern. If we saw it in nature we'd give it some name like "The MOOC attrition law". My interest is as a course designer, so given that the drop-off pattern seems fairly robust, what does it mean for design? (Doug will have issues about the power-lawness of this.)
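To make the "power-lawness" concrete: a curve like those in Katy's graph can be summarised by fitting active users ≈ a · week^(−k) and comparing the exponent k across courses. The sketch below uses entirely made-up weekly counts (not the actual data from the graph) and fits the exponent via a least-squares line in log-log space:

```python
import math

# Hypothetical weekly active-user counts for a 7-week MOOC.
# Illustrative numbers only, NOT the data behind Katy's graph.
weeks = [1, 2, 3, 4, 5, 6, 7]
active = [10000, 4800, 3100, 2300, 1850, 1550, 1350]

# A power law active = a * week**(-k) is a straight line in log-log
# space, so fit it with ordinary least squares on the logs.
xs = [math.log(w) for w in weeks]
ys = [math.log(u) for u in active]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - slope * mean_x)  # scale: roughly week-1 actives
k = -slope                             # steepness of the drop-off

print(f"fitted exponent k = {k:.2f}")
print(f"fitted week-1 actives = {a:.0f}")
```

If curves from different disciplines all yield similar k, that is one quantitative sense in which the pattern is "consistent"; a course redesign aimed at retention would be trying to flatten k.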
I think there are two responses (but maybe you can think of more).
Design for retention
The first is to say that completion is a desired metric. There may be courses where you really do want as many people as possible to complete. Imagine you were running a remedial maths course, for instance: it won't help your learners much if they only cover a third of the subject matter they need for whatever purpose. (Bridge2 Success got learners through maths so they could get onto an employment programme, so completion was very important there.)
In this case you need to address the 'problem' of drop-out, because it is a problem for you. There might be a number of ways you do this: by adding in more feedback, using badges to motivate people, creating support structures, supplementing with face to face study groups, breaking your longer course into shorter ones, etc. The point is that you design in features that aim to improve completion.
Design for selection
The second design approach is to say that completion isn't an important metric. Here you accept the MOOC attrition law and design the experience with that in mind. I have some sympathy with Stephen Downes when he says no-one finishes a newspaper, but we don't talk about people 'dropping out' of a newspaper (I've heard him say this but can't find a link – anyone?). So even to talk about drop-out is to map the wrong metaphor onto MOOCs. His analogy breaks down a bit, however, because newspaper readers don't all stop at page 7; they dip into different sections. People tend to drop out of MOOCs by week 3. It's not as if they're coming in and doing a bit from week 7 and a bit from week 5, and then leaving. They're simply not getting to those later weeks. And even if you are of the 'completion doesn't matter' camp, I'm sure most course designers don't think the content in week 5 is worth half that in week 1.
So, in this design approach you might break away from the linear course model to allow people to do the 'newspaper' type of selection. A course might be structured around themes, for instance, with each theme built from largely independent activities (I tried to design H817open a bit like this). In this case completion really doesn't matter; learners take the bits they want.
In both cases I would suggest that the completion rate data is useful for you. In the first case you know what type of completion rate to expect, and in the second one it drives you to be more innovative in design approach. And that's the point about the research – it helps inform decisions.
By the way – this is my fifth blog post in 5 days. Just in case Jim Groom berates me for not blogging often enough…