5 things I think about Learning Analytics
<Ed Techs go in search of data in the wild>
I am at the 1st Learning Analytics conference in Banff, which has been interesting. I came not sure of what it was, or what my take on it was. The conference has been good, very interdisciplinary in nature (for which you can read 'I didn't understand some of it'). I'm still not sure about a lot of it, but here are five things that have occurred to me over the course of the past few days. I don't believe they're strong enough to say I've learnt them; rather, they are things I have now come to a tentative viewpoint on.
1) Analytics for learners is distinct from analytics of learners. Erik Duval gave a good talk which listed some approaches to empowering learners by providing data and visualisations to them. I think this is potentially very powerful, in both formal and informal settings. Showing learners where they are with respect to others, giving them recommendations in terms of resources or people, or providing an overview of their activity that helps identify aspects they need to work on: all of these seem like useful things to do. What I am less sure about is the analysis of learners by institutional systems, done so that institutions can make efficiency savings.
2) There is a range of intentionality and effort. Tony was talking about the types of things you can build by simply taking one piece of data and then plugging it into various tools. The example that stuck for me was the way that, simply by picking up on a hashtag, he could create a representation of a community, construct a blog roll, mine the delicious links of the people in that community to find resources, etc. So from one simple action we can at least infer a good representation of a community. And to join, all the individual needs to do is tweet with that hashtag. This is distinct from the development of very specific tools, such as a virtual machine that sits on a student's computer, within which they do their coursework, or a tool like Cohere. Here the proposition is: we will give you a tool which is useful, and from it we will be able to gain richer data. But the effort required is much greater. I'm not suggesting one approach is better than the other, merely pointing out that they are quite distinct. I think our instinct is usually to go for the latter because of the rich data, but the former gets at a wider network.
3) Analytics may reveal some uncomfortable truths. A couple of presentations suggested that when you look at the data, you see that students either don't spend very long in your carefully created content and hop off to Google to find their own, or that doing some activities had no impact on exam performance. This prompted David Wiley to ask 'why do we do them then?', and I suspect analytics may reveal many such uncomfortable moments for course designers, in a way that end-of-course surveys don't currently. You know that resource you spent lots of money on and thought was great? Students spend 15 seconds on it. That big activity we devised? It has no impact on performance. The real question then becomes: what do you do with this knowledge? It may be that we still wish to persist with some elements because we believe in them pedagogically, or because they develop certain skills. But it may also mean we find that the way we think students study our courses and the way they actually do are quite separate.
4) A little knowledge may be a dangerous thing. Related to the previous point, I also worry that interpreting this data into appropriate action will be a difficult task. And at some point the data may fall into the hands of a senior manager or accountant who will look at it and draw a simplistic conclusion. One can imagine decisions such as: halving the course staff only drops average scores by five percentage points, or cutting course content by one third results in only 10 more students dropping out, so let's implement the cuts. But there is a bigger picture beyond the pure data, and so viewing it in a holistic and contextual manner is essential.
5) We shouldn't put off ethics issues. There are obviously some rather Orwellian overtones to learning analytics. I don't think this need be the case, but there is a strong commercial interest in this, and there is also an element of 'let's record everything because we can'. But I think there are also powerful benefits for learners, and whenever we are online we are casting off data anyway. As academics and researchers, we should at least draw up, from the start, a set of principles that we adhere to. If others want to use data for nefarious means, that's up to them, but here are the things we believe in. For example, these could include the premise that analytics is performed with the benefit of learners as a priority, that we are always transparent, that the learner owns their data, etc. (Erik pointed to the 4 principles of the now defunct Attention Trust, which are a good starting point).
So that's the thoughts I have for now. I also learnt that Banff is a) beautiful and b) flipping cold.
I think that in two years, ‘analytics’ will no longer be a buzzword in common use. The funding agencies will realize that it was not as much of a solution as they had hoped, and they will stop talking about it, and then shortly thereafter all the ‘experts’ will stop talking about analytics and there will be no more conferences on analytics.
The conversation will then move on to some other magic, futuristic technical solution buzzword. And there will be new conferences about that new topic that have zero real teachers as attendees because the travel costs are so expensive. And then three years after that the pattern will repeat itself again.
I wish that we could find a way to include more teachers in discussing the future of education. But alas, they might bring up some inconvenient truths.
Ooh Chuck, I think this has brought out your cynical side 🙂
On the travel thing I disagree – to be fair, George ran a free, open, online course too, and streamed the conference live, so I think accusations of elitism are unfair – at least compared to any other subject that has conferences.
But the buzzword thing may well be true. I think we’ll see a typical pattern of over-hype (it will solve all your problems), disillusionment, corporatisation, and then acknowledgement that although it didn’t do everything people once planned, it does do some useful things.
I still think the analytics for learners route will be potentially powerful, particularly for informal learners.
Not sure if it’s even more cynical, but it seems to me that the prime audience for this isn’t teachers. Not sure we should be bothering them with this at all.
The magic words “student retention”, however, will get you the undivided attention of quite a few Vice Chancellors in the UK. Losing students is a phenomenally expensive hobby here. Any tool or technique that gives a handle on that issue is worth pursuing. Or does that not fit under ‘learning analytics’?
It’s definitely learning analytics, and I agree. It is also part of my anxiety around point 4). But if we take an uncynical view, then if data patterns help spot students who are struggling before it gets too bad, and where a little intervention may be the thing that helps them and keeps them studying, then it is probably a win-win. Isn’t it?
@Charles The word might be gone in two years, but from some of the things we saw over those three days, those ideas are going to be here for a very long time, and for just the reason that Wilm suggests. Retention.
The best of the work that I saw, and the most interesting to me, was where the data was sent directly to the student. If you can take connections data and present it transparently to the student (or as transparently as you can), you give them a tangible tool to help them understand what they are doing.