Bruno’s Last Walk

In the autobiography I will never write, a chapter will surely be devoted to the dogs of my life. From the Jack Russell who used to push me around in a trolley when I was a baby, to the cross-breed rescue dog my wife and I got as an obvious, but unbeknown to us, trial run for having a child of our own. Amongst these, standing as the most loyal and singularly devoted, will be Bruno, the Staffordshire Bull Terrier I adopted from Cardiff Dogs Home three years ago.

As some of you may know, this was just after my divorce. I wasn’t sure I could manage a dog on my own, but the inevitable pain of marriage breakup had been exacerbated by losing a previous dog to cancer and sharing custody of Pip (who peeks out in the banner of this blog). So, I opted for an older dog that might not need as much attention as a pup. Bruno was 11 years old, and had been in the home for over six months. He was a bit deaf, and being a Staffie, not a very popular breed for people with children. They didn’t know anything about his past, but there were some signs it hadn’t been a good one. A couple had adopted him previously, only to be approached by two menacing blokes who insisted he was their dog and they should return him immediately. Feeling intimidated, they took him back to the dogs home, only for the declared owners to leave him there unclaimed. So he languished in the kennels for months. An old fella who found himself unwanted – in my self-pitying state he felt like a kindred spirit. And indeed he was; we enjoyed doing the same things – eating, napping on the sofa, snoring, going to the pub.

From the day I brought him home, he never wanted anything other than to be by my side. I would work and he was there by my feet; I watched TV and he curled in tightly next to me; in the morning I was awoken by him bashing the door open and jumping on the bed. I had never considered owning a Staffie before, put off by their reputation as the drug dealer’s dog of choice. But like most Staffies he loved people, wasn’t too bothered about other dogs and was an ideal pet. I made a deal with him on the day I adopted him: I was at home most of the time, but he would have occasional longish days on his own (or with a dog walker) and he would go into kennels when I went away. It may not have been perfect, but it was better than being in the dogs home. He assented to the terms readily, and was never a problem (apart from that time he ate a tub of Quality Street, but we can overlook that as it was Christmas). My dog walker regularly left me notes about how much she adored him.

Last year he suffered two strokes, losing the use of his back legs and his remaining sight. He recovered his ability to walk (if prone to the occasional collapse), but was now completely blind and deaf. Still, as long as he had a slow walk every day, and could find me in the house, he was happy enough. Unfortunately his condition deteriorated, with a form of dementia setting in, occasional vomiting, and passing blood. He became increasingly anxious without me, even if my daughter was with him; he would become lost in the house, wake up in the night in distress, and forget his toilet training. I spent the last six months working downstairs on the sofa so he could be next to me and settle down. At ALT-C I got a call from the dog sitter that he was very distressed, and I ended up having to commute to Manchester daily so I could be with him at night. Any one of these complaints in isolation was manageable, but ultimately it was the burden of their relentless accumulation that sapped his spirit.

So, at the end of last week I made the decision to have him put to sleep today. A definite date is a terrible knowledge to possess, ticking off his last night on the sofa, his last meal, like a macabre advent calendar. And today I took him on his last walk and said goodbye. It was time I think, as Howard Jacobson says in his piece The dog’s last walk, he had the sense of “something once strong ebbing from him, the dignity of his defeatedness.” It was an exclusive relationship – he was the first dog that was truly mine alone, and I was the only human he was remotely interested in. He fulfilled an important role for me, and helped me transition to a new, contented phase in my life. In return, I like to think I gave him three years at the end which removed the stain of mistreatment and rejection that had preceded it. I hope that was a fair exchange.

The mathematics of cruelty


Warning: this is a naive politics post, best avoided by those who really know their stuff. I expect someone has written all of this much better than I have and it’s been dismissed already. But, hey, it’s my blog.

A while ago I read Laurence Rees’s comprehensive The Holocaust: A New History. It is no longer a cliche, or an invocation of Godwin’s Law, to say that the direct parallels to today’s climate in the US, UK, much of Europe, Australia, Russia and elsewhere are glaringly obvious – particularly in the rise of Nazism and its route to power within a democratic framework. We always used to ask “how could that happen in Germany?”, but what we’ve seen, especially with Trump and Brexit, is that the answer lies in the alignment of different segments of society. The fundamental driver for these right wing resurgences seems to be a deliberate, performative cruelty. A pretty reliable compass for the Trump White House or the Brexit negotiations is “which is the cruellest option?” To help me get my head around where we are (and to offer myself some hope for the future), I’ve begun to think of it in terms of portions of cruel behaviour that need to align, and from this, what we need to do to prevent it happening.

Let’s break down the mathematics of cruelty then. The exact proportions of each group may vary, and there will be some overlap. It is an overly simplistic view, but it has the merit of not being tied to one context, so it is applicable in different settings. As a rough guide, I think we have:

  • The naturally cruel – these people just enjoy being cruel, it’s their main driver in life. Think Katie Hopkins. Let’s say this is about 10% of the population usually. Most people aren’t cruel most of the time. Usually this group is dispersed, and may not even bother to vote, since feeling rejected by the political system is part of their identity. Maybe one day we’ll find a way to reach this group and make them nicer, but generally they are not our target.
  • The latently cruel – these are people who aren’t always vicious, but it can be brought to the surface readily. It can be couched in terms of they’re the victims, so need to be cruel to survive, or playing to fears of culture, immigration, safety. Once aroused they become indistinguishable from the first group, but often other interests will have priority over cruelty. This could be 10-15% of the population. You’ve probably got a family member like this.
  • The cruel for self interest – this group are primarily worried about their own jobs, welfare, housing, healthcare, etc. They can be tricked into thinking cruelty is the best (even the only) route to safeguarding these: you just need to be cruel this once and it’ll all be fine. Good centrist politicians are very effective at appealing to this group and steering them away from overt cruelty by playing to discernible benefits. Again, let’s assume 10-15% of the population.
  • The intellectually cruel – these are the people who vote as a form of protest. They aren’t really cruel, but they are more interested in making an intellectual point (eg those on the left who voted for Brexit as an anti-neoliberal protest), which allows them to ignore the more blatant cruelty on display. They probably don’t expect to win either, but want a good debating point afterwards. Let’s say 5% in this group.

If you get all of these to align around a single issue or person, then you’re up around 40-50% of the vote, enough to form a government or win a referendum when you get people who don’t vote, or a divided opposition. It is a rare set of circumstances that brings these together, which is why the past few years have come as a surprise. But it can happen. Boy, can it happen. If we accept this broad classification then there are a number of conclusions we can draw from it.
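As a rough sanity check on those numbers, here’s a small sketch of how a minority of the population becomes a much bigger share of votes cast. The group sizes and turnout figure below are illustrative assumptions only, not data; the key mechanism is that the aligned groups are motivated and turn out, while the rest of the electorate doesn’t.

```python
def cruel_vote_share(aligned_population, other_turnout):
    """Estimate the aligned bloc's share of votes cast.

    Assumes the aligned groups all turn out to vote, while the rest of
    the electorate votes at the rate given by other_turnout.
    """
    votes_cast = aligned_population + (1 - aligned_population) * other_turnout
    return aligned_population / votes_cast

# Low and high ends of the group estimates above:
# 10% + 10% + 10% + 5% = 35%, and 10% + 15% + 15% + 5% = 45%.
low = cruel_vote_share(0.35, other_turnout=0.6)   # roughly 0.47
high = cruel_vote_share(0.45, other_turnout=0.6)  # roughly 0.58
```

So even at the low end, 35% of the population can approach half the votes cast once differential turnout is factored in, which is the point about non-voters and a divided opposition.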

Firstly, we shouldn’t trick ourselves that it can’t happen here, that we are beyond this now. The recent success of the far right can in part be attributed to complacency and smugness. We thought that wasn’t who we were now, and allowed these factions to coalesce by not being vigilant enough.

Secondly, it means you don’t have to convert half the population to your side. You simply have to find ways that prevent a lightning rod forming that attracts all these groups to one focus. You focus on converting those middle two groups primarily to a more reasonable, less cruel course of action.

Thirdly, structures and policy in society and political systems can be implemented to prevent their alignment. This rests on the assumption that a political system based primarily on cruelty is a bad thing. From an ethical and humanitarian perspective that is obviously the case, but even from a more mercenary perspective it is true – such a culture forces people into extremes, and a fundamentally fractured country or society does not function well in the long term.

So, what can we do about it? I think there are two courses of action that follow from this. The first is to prevent the normalisation of cruelty. This particularly influences the latently cruel group. You don’t invite them on TV and if you do, you challenge them so utterly that they look foolish. The BBC’s obsession with Farage because it was scared of looking like it didn’t understand the working class and its abject failure to hold Brexiteers to account is a prime example of how not to do this. Similarly, without Fox News, Trump doesn’t exist. Our media culture needs to be more accountable, and responsible.

The second route is around political structures. The system should be constructed in such a way as to prevent the alignment of these factions. For example, the Brexit referendum was a clear focal point, but if you were going to have it, then it should have been held around a clear Brexit proposal. For a start, as we’ve seen, the Brexiteers cannot create one, but had they done so, it would have focused the argument on specifics. The self interest group would then be moved away from a vague promise of better things to very specific and factual arguments. Similarly, the biggest flaw in the US system is that Trump could become the Republican nominee in the first place. For the most important job in the world, some experience should be necessary, and any junior political role would have revealed his inadequacies.

These are boring, policy, governance and selection points but through these you construct a system that prevents the mathematics of cruelty adding up. If we survive the next few years, this needs to be our focus going forward.

Lecture capture – don’t fight it, feel it

Lecture capture would be a strange choice of hill for me to die on since I work at a university that doesn’t do lectures, and have no experience of it. But Sheila MacNeill started a twitter conversation about it, and I think it captures some broader ed tech issues, so here I am, weighing in with my ill-formed opinions.

  • Does it benefit students? – students seem to like lecture capture. The evidence on whether it impacts attendance is mixed, but it is useful for revision, for those who struggle to take notes for whatever reason, and for going over complicated topics. So in general, learners who are in a lecture-based environment like it as an additional service. This should be the main factor.
  • Understand the relationship with employment – Melissa Highton has talked about how recent political issues have brought ed tech like lecture capture to the fore. Unions have a role to play here: not in resisting its implementation, but in ensuring that when implemented it is not used to undermine strike action or impact upon employment.
  • It reinforces lectures – yes, possibly and I’ve seen this as an argument against it. But unless your institution is implementing a direct strategy to move away from lectures, then lecture capture is the least of the factors. Unless estates are converting lecture theatres to different types of learning spaces and there is extensive staff development under way to move away from it, then assume lectures are around for a while yet, and lecture capture makes no difference either way.
  • It’s not as good as bespoke content – producing specific online learning material is probably better, but at scale this becomes difficult. Some staff will do it, but it is a considerable extra burden, which would require significant resource to realise across an institution. Also, there may be some value in the lecture having an experiential element – students will recall that the lecturer moved over here when they said this, and this was the point where they were distracted in the actual lecture, etc.
  • Use it as a stepping stone – like the VLE, it is likely that lecture capture will end up being an end point in itself, rather than a step on the path to more interesting practice. With this in mind, ed tech people should work hard at the start to make it part of a bigger programme, for instance running development alongside it for the type of bespoke content mentioned above, or thinking about flipped learning, or making the lecture a collaborative resource, etc.
  • It’s boring – we should be doing more exciting things in education! This is true, but it’s not an either/or, lecture capture is useful to students here and now. Boring but useful is ok too.
  • Evaluation is key – people have lots of views about lecture capture, often based on beliefs or anecdote. So it’s important to evaluate it in your context, particularly for questions such as “does it impact attendance?”, “do students use it?”, “how do students use it?”, “do students who use it perform better?”, etc.
  • It’s not a replacement – some of the objections are that it is a misuse of technology, that if you are producing online learning content then pointing a camera at it is like filming a play to produce a movie. But this is to misunderstand how students use it I think. For them it is an additional resource which complements the lecture, they may miss the occasional one knowing they can catch up, but in general they still value lectures. It’s like students who record the audio of a lecture, they now have a back up.
  • We like it – after being at ALT-C last week, lots of people commented how much they appreciated the keynotes and other talks being live streamed. We didn’t say that live streaming prevented a move away from the conventional keynote, or that it reduced attendance, or undermined labour. So, if we value it, why shouldn’t students?

I appreciate that it’s a complex issue, and no technology should be seen as inevitable, but there is a certain logic to lecture capture. As we found in the OOFAT report, technologies that link closely to university core functions tend to get adopted. Lecture capture is about as close as you can get. In this case educational technologists can help it be implemented in a more sustainable, interesting and sensitive manner. So, in the words of Primal Scream, don’t fight it, feel it.

[UPDATE: Here’s a few references people pointed me to on Twitter, giving a mixed picture of effectiveness:

Witton, G. (2017), The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement. British Journal of Educational Technology, 48: 1010–1019. doi: 10.1111/bjet.12470

Edwards, M.R. & Clinton, M.E. A study exploring the impact of lecture capture availability and lecture capture usage on student attendance and attainment. High Educ (2018).

Nordmann, E., & McGeorge, P. (2018, May 1). Lecture capture in higher education: time to learn from the learners.]

25 Years of Ed Tech: Themes & Conclusions


Now that I have completed the 25 Years of Ed Tech series (which was actually 26 years, because maths), I thought I’d have an attempt at some synthesis of it and try to extract some themes. In truth, each of these probably merits a post of its own, but I wanted to wrap this series up before the 25 Year anniversary of ALT-C next week. Plus, tired.

No country for rapidity – one of the complaints, particularly from outsiders, is that higher ed is resistant, and slow, to change. This is true, but we should also frame it as a strength. Universities have been around longer than Google, after all, and part of their appeal is their immutability. This means they don’t abandon everything for the latest technology (see later for what tech tends to get adopted). If you’re planning on being around for another 1000 years then you need to be cautious. We didn’t close all our libraries and replace them with LaserDiscs in the 90s. As the conclusion of the Educause piece I wrote stated, “it’s no game for the impatient”.

Historical amnesia – I’ve covered this before, but one of the characteristics of ed tech is that people wander into it from other disciplines. Often they wouldn’t even know they’re now in ed tech; they’re doing MOOCs, or messing about with assessment on their psychology course, and they may spend a bit of time doing it and then return to their main focus. Ed tech can be like a holiday resort, people passing through from many destinations, with only a few regulars remaining. What this means is that there is a tendency, seen repeatedly over the 25 years, for ideas to be rediscovered. A consequence of this is that every development is seen as operating in isolation instead of building on the theoretical, financial, administrative and research foundations of previous work. For example, you probably don’t get OER without open source, and you don’t get MOOCs without OER, and so on.

Cycles of interest – there are some ideas that keep recurring in ed tech: the intelligent tutor, personalised learning, the end of universities. Audrey Watters refers to these as zombie ideas, which just won’t die. Partly this is a result of the aforementioned historical amnesia, and partly it is a result of techno-optimism (“This time it really will work”). It is also a consequence of over-enthusiastic initial claims, which the technology takes 10 years or so to catch up with. So while intelligent tutoring systems were woefully inadequate for the claims made in the 90s, some of those claims are justifiable in 2018. Also, just conceptually, you sometimes need a few cycles at an idea to get it accepted.

Disruption isn’t for education – given its dominance in much of ed tech discourse, what the previous trends highlight is that disruption is simply not a very good theory to apply to the education sector. One of the main attractions of higher ed is its longevity, and disruption theory seeks to destroy a sector. That it has failed to do this to higher ed, despite numerous claims that this is the death of universities, would suggest it won’t happen soon. Disruption also plays strongly to the benefits of historical amnesia, which is a weakness here, and the cycles of interest argue that what you want to do is build iteratively, rather than sweep away and start anew. There are lots of other reasons to distrust the idea of disruption, but in higher ed at least, it’s just not a very productive way to innovate.

The role of humans – ed tech seems to come in two guises: helping the educator or replacing them. If we look at developments such as wikis, OER, CMC, blogs, even Second Life, then their primary aim is to find tech that can help enhance education, whether for a new set of learners, to realise new approaches, or sometimes just to try some stuff out. Other approaches are framed in terms of removing human educators: AI, learning analytics, MOOCs. They needn’t be used that way – learning analytics, for example, can be used to help human educators better support learners. But often the hype (and financial interest) is around the large-scale implementation of automatic learning. As I mentioned in a previous post, education is fundamentally a human enterprise, and my sense is that we (at least those of us in ed tech in higher ed) should prioritise the former type of ed tech.

Innovation happens – for all the above: change happens slowly, people forget the past, disruption is a bust, focus on people – the survey of the last 25 years in ed tech also reveals a rich history of innovation. Web 2.0, bulletin board systems, PLEs, connectivism – these all saw exciting innovation, and also questioned what education is for and how best to realise it.

Distance from the core – the technologies that get adopted and embedded into higher ed tend to correlate closely with core university functions, which we categorised as content, delivery and recognition in our recent OOFAT report. So, VLEs, eportfolios, elearning – these kinds of technology relate very closely to these core functions. The further you get from these then the more difficult it becomes to make the technology relevant, and embedded in everyday practice.

So, that’s really the end of the series.

25 Years of EdTech: 2018 – Critical Ed Tech

Farväl - Goodbye

[The last in the 25 Years of Ed Tech series]

I’ll do a conclusion and themes post (if I can be arsed) for the 25 Years series, but now we reach the end. For this year, I’ve chosen not a technology, but rather a trend. I see in much of ed tech a divide, particularly at conferences. On one side are the gung-ho, Silicon Valley, technology utopian evangelists. This is a critical-reflection-free zone, because the whole basis of this industry is selling perfect solutions (to problems they have concocted). This is where the money is.

In contrast to this is a developing strand of criticality around the role of technology in society and in education in particular. With the impact of social media on politics, Russian bots, (actual) fake news, Cambridge Analytica, and numerous privacy scares, the need for a critical approach is apparent. Being sceptical about tech is no longer a specialist interest. Chris Gilliard has an excellent thread on all the invasive uses of tech, not all educational, but there are educational implementations for most of them.

One prominent strand of such criticality is suspicion about the claims of educational technology in general, and the role of software companies in particular. One of the consequences of ed tech entering the mainstream of education is that it becomes increasingly attractive to companies who wish to enter the education market. Much of the narrative around ed tech is associated with change, which quickly becomes co-opted into broader agendas around commercialisation, commodification and massification of education.

For instance, the Avalanche report argued that systemic change is inevitable because “elements of the traditional university are threatened by the coming avalanche. In Clayton Christensen’s terms, universities are ripe for disruption”. In this view, education, perceived as slow, resistant to change and old-fashioned is seen as ripe for disruption. Increasingly then academic ed tech is reacting against these claims about the role of technology and is questioning the impact on learners, scholarly practice, and its implications. For example, while learning analytics have gained a good deal of positive coverage regarding their ability to aid learners and educators, others have questioned their role in learner agency and monitoring and their ethics.

One of the key voices in ed tech criticality is Neil Selwyn, who argues that engaging with the digital impact on education in a critical manner is a key role of educators, stating “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique”. This includes being self-critical, and analysing the assumptions and progress of movements within ed tech. It’s important to distinguish between critique as Selwyn sees it and simply being anti-technology. These are not pro- and anti-technology camps – you can still be enthusiastic about the application of technology in particular contexts. What it does mean is being aware of the broader implications, questioning claims, and advocating (or conducting) research about real impacts.

Ed tech research then has begun to witness a shift from advocacy, which tended to promote the use of new technologies, to a more critical perspective. This is not to say that there is enough critical thought around, and the drive for venture capital still seeks to eradicate it, but this series is about the moments when things became significant in ed tech, and 2018 marks the year when critical perspectives found a more receptive audience. If the evangelist and critical approaches represent two distinct groups, then sitting in between them is the group we should all be focused on in ed tech – the practitioners in universities, schools and colleges who want to do the best for their learners. And that seems a fitting place to end the series.

25 Years of Ed Tech: 2017 – blockchain


Of all the technologies listed in this series, blockchain is perhaps the most perplexing, both in how it works and in why it is even in this list. In 2016 several people independently approached me about blockchain — the distributed, secure ledger for keeping the records that underpin Bitcoin. The question was always the same: “Could we apply this in education somehow?” The imperative seemed to be that blockchain was a cool technology, and therefore there must be an educational application. It could provide a means of recording achievements and bringing together large and small, formal and informal, outputs and recognition.

Viewed in this way, blockchain is attempting to bring together several issues and technologies: e-portfolios, with the aim to provide an individual, portable record of educational achievement; digital badges, with the intention to recognize informal learning; MOOCs and OER, with the desire to offer varied informal learning opportunities; PLEs and personalized learning, with the idea to focus more on the individual than on an institution. A personal, secure, permanent, and portable ledger may well be the ring to bind all these together. However, the history of these technologies should also be a warning for blockchain enthusiasts. With e-portfolios, for instance, even when there is a clear connection to educational practice, adoption can be slow, requiring many other components to fall into place. In 2018 even the relatively conservative and familiar edtech of open textbooks is far from being broadly accepted. Attempting to convince educators that a complex technology might solve a problem they don’t think they have is therefore unlikely to meet with widespread support.

If blockchain is to realize any success, it will need to work almost unnoticed; it will succeed only if people don’t know they’re using blockchain. Nevertheless, many who propose blockchain display a definite evangelist’s zeal. It is perhaps the nirvana of technological solutionism – a technology that no-one really understands, doing something they can’t quite unpack, for reasons they don’t fathom. They desire its adoption as an end goal in itself, rather than as an appropriate solution to a specific problem. If I had the time I’d keep a blockchain Tumblr for ridiculous proposals for blockchain solutions. I think the most important aspect of blockchain might be to keep an eye on how it gets used without our knowledge, and what the implications of that are. If anyone tries to sell your VC/Principal a blockchain solution, make sure someone tech-smart is in that room.

In the meantime, if you need to blag blockchain knowledge in a meeting, just take a leaf out of this advert’s book and end every sentence with “thanks to the blockchain solution.”

25 Years of Ed Tech: 2016 – The return of AI


[Continuing the 25 Years of Ed Tech series]

As I covered in 1993’s entry, artificial intelligence was the focus of attention in education in the 1980s and 1990s, with the possible development of intelligent tutoring systems. The initial enthusiasm for these systems waned somewhat when they failed to deliver on their promise. For example, in their influential 1985 paper, Anderson, Boyle and Reiser detailed intelligent tutoring systems for geometry and the programming language LISP. They confidently predicted that “Cognitive psychology, artificial intelligence, and computer technology have advanced to the point where it is feasible to build computer systems that are as effective as intelligent human tutors”. Yet by 1997 Anderson was amongst authors lamenting that “intelligent tutoring has had relatively little impact on education and training in the world.” In their analysis they hit upon something which seems obvious, and yet continues to echo through educational technology: namely, that the ed tech (in this case intelligent tutoring systems, but it might equally apply to MOOCs, say) is not developed according to educational perspectives. They stated that “the creative vision of intelligent computer tutors has largely arisen among artificial intelligence researchers rather than education specialists. Researchers recognized that intelligent tutoring systems are a rich and important natural environment in which to deploy and improve AI algorithms… the bottom line is that intelligent tutoring systems are generally evaluated according to artificial intelligence criteria …rather than with respect to a cost/benefit analysis of educational effectiveness.” In short, they are developed and evaluated by people who like the technology, but don’t really know or appreciate the educational context. In this snapshot we have much of the history of ed tech.

Interest in AI faded as interest in the web and related technologies increased, but it has resurfaced in the past five years or so. What has changed over the intervening period is the power of computation. Much of what we classify as AI now is just massive number crunching. That’s not cheating – maybe that’s what we do as humans anyway.

But perhaps more significant than the technological issues are the ethical ones we now face. As Audrey Watters contends, AI is ideological. Neil Selwyn makes a pretty good case for why AI won’t succeed in education, which can be summarised as “education is fundamentally a human enterprise”. However, the concern about AI is not that it won’t deliver on the promise held forth by its advocates but, rather, that someday it will. Or rather that it will in the eyes of policy makers. And then the assumptions embedded in code will shape how education is realized, and if learners don’t fit that conceptual model, they will find themselves outside of the area in which compassion will allow a human to alter or intervene. Perhaps the greatest contribution of AI will be to make us realize how important people truly are in the education system.

25 Years of EdTech: 2015 – Digital Badges

[Continuing the 25 Years of Ed Tech series]

Providing digital badges for achievements that can be verified and linked to evidence started with Mozilla’s open badge infrastructure in 2011. They were an idea that had been floating around for a while – that you could earn informal accreditation for online activity. What the Mozilla work provided was a technical infrastructure, so badges could be linked through to evidence and verified. Badges could be awarded for assessment (passing a quiz), but more interestingly for community action, such as contributing to an online forum.

Like many other edtech developments, digital badges had an initial flurry of interest from devotees but then settled into a pattern of more laborious long-term acceptance. They represent a combination of key challenges for educational technology:

  • realizing easy-to-use, scalable technology – the Mozilla specification provides this, and tools such as Credly make creating and distributing badges reasonably straightforward;
  • developing social awareness that gives them currency – badges may be fun, but for them to gain value they need to be recognised by employers and society more widely;
  • and providing the policy and support structures that make them valuable – we have complicated systems and quality control processes around formal recognition. If employers are to start valuing badges then similar structures may need to be in place to give confidence in their value. And then the question may become, what is the point of them if they’re indistinguishable from formal credit?

Of these challenges, only the first relates directly to technology; the more substantial ones relate to awareness and legitimacy. For example, if employers or institutions come to widely accept and value digital badges, then they will gain credence with learners, creating a virtuous circle. There is some movement in this area, particularly with regard to staff development within organizations and often linked with MOOCs. Perhaps more interesting is what happens when educators design for badges, breaking courses down into smaller chunks with associated recognition, and when communities of practice give badges value. Linked with eportfolios and transferable credit, badges can provide a way of surfacing the generic skills inherent in much of formal education. Currently, their use is at an indeterminate stage — neither a failed enterprise nor the mainstream adoption once envisaged, but I suspect we'll see steady growth around specific enterprises.

25 Years of EdTech: 2014 – Learning Analytics

[Continuing the 25 Years of Ed Tech series]

Data, data, data. It's the new oil and the new driver of capitalism, war, politics. So inevitably its role in education would come to the fore. Interest in analytics is driven by the increased amount of time that students spend in online learning environments, particularly LMSs and MOOCs. Although not a direct consequence, there is a definite synergy and similarity between MOOCs and analytics. Both brought new people into education technology, particularly from the computer science field. I think we can be a bit snooty about this: what are all these hard-core empiricists suddenly doing in our touchy-feely domain? But if the knowledge exchange is reciprocal, then this evolving nature of ed tech can be one of its strengths. The reservations arise when it is less a mutual knowledge sharing and more an aggressive takeover.

The positive side of learning analytics is that for distance education in particular, it provides the equivalent of responding to subtle signals in the face-to-face environment: the puzzled expression, the yawn, or the whispering between students looking for clarity. Every good face-to-face educator will respond to these signals and adjust their behaviour. In an online environment, these cues are absent, and analytics provides some proxy for them. If an educator sees that students are repeatedly going back to a resource, that might indicate a need to adapt that resource, offer further guidance, and so on.
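The repeat-visit proxy above can be sketched very simply. This is a hypothetical illustration, not any particular analytics product: given a clickstream of view events, flag resources that individual students keep returning to, on the assumption that unusually frequent revisits may signal confusion.

```python
from collections import Counter

def flag_revisited(clickstream, threshold=3):
    """clickstream: list of (student_id, resource_id) view events.
    Returns resources any student viewed at least `threshold` times."""
    visits = Counter(clickstream)  # views per (student, resource) pair
    flagged = {resource for (_, resource), n in visits.items() if n >= threshold}
    return sorted(flagged)

# Invented sample data: three students, two resources.
events = [
    ("s1", "week2-notes"), ("s1", "week2-notes"), ("s1", "week2-notes"),
    ("s2", "week2-notes"), ("s2", "week1-intro"),
    ("s3", "week2-notes"), ("s3", "week2-notes"), ("s3", "week2-notes"),
]
print(flag_revisited(events))  # ['week2-notes']
```

The educator's judgement still does the real work here: the flag only says "look at week2-notes", not whether the fix is a rewrite, a forum post, or extra examples.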

The downsides are that learning analytics can reduce students to data and that ownership over the data becomes a commodity in itself. Let’s face it, the use of analytics has only just begun, and the danger is that instead of analytics supporting education, analytics becomes education. The edtech field needs to avoid the mistakes of data capitalism; it should embed learner agency and ethics in the use of data, and it should deploy that data sparingly.

One of the benefits of thinking about analytics might simply be better communication with students. Navigating the peculiar, often idiosyncratic world of higher education, with its rules and regulations, can be daunting and confusing. Considering what a useful dashboard would show surfaces some of this complexity. In this study, simply telling students what degree they were on course for was deemed remarkably useful. It transpires that calculating this for yourself is surprisingly difficult, which itself highlights how much HEIs could do to simplify and expose their workings for students.
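Even a deliberately simplified version of that calculation shows why students struggle with it. The sketch below assumes a credit-weighted average against common UK classification boundaries; real institutional rules (level weightings, borderline policies, condonement) are considerably more involved, which is rather the point.

```python
def classification(modules):
    """modules: list of (mark_out_of_100, credits).
    Returns a simplified UK-style degree classification from the
    credit-weighted average mark. Illustrative only: real rules
    add level weightings and borderline adjustments."""
    total_credits = sum(credits for _, credits in modules)
    avg = sum(mark * credits for mark, credits in modules) / total_credits
    if avg >= 70:
        return "First"
    if avg >= 60:
        return "Upper second (2:1)"
    if avg >= 50:
        return "Lower second (2:2)"
    if avg >= 40:
        return "Third"
    return "Fail"

# Invented transcript: 120 credits, weighted average 63.25.
print(classification([(72, 30), (65, 30), (58, 60)]))  # Upper second (2:1)
```

A dashboard that runs even this crude calculation on a student's behalf is doing work most students cannot easily do from a transcript and a regulations PDF.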

25 Years of EdTech: 2013 – Open Textbooks

[Continuing the 25 Years of Ed Tech series]

If MOOCs were the glamorous side of open education, all breathless headlines and predictions, open textbooks were the practical, even dowdy, application. An extension of the OER movement, and particularly pertinent in the United States and Canada, open textbooks provided openly licensed versions of bespoke written textbooks, free in their digital version. The cost of textbooks provided an initial motivation for adoption, but it is the potential for adaptation that makes them interesting. Open textbooks are sometimes criticised for being an unimaginative application of the possibilities of open. But they also offer a clear example of several aspects that need to align for ed tech adoption in higher ed.

Firstly, they set out to establish a solid evidence base. They did not just rely on altruism and statements of belief. The Open Ed group in particular demonstrated that open textbooks were of high quality and had a positive impact on students. This evidence base makes it difficult for them to be dismissed by commercial interests.

Secondly, through funding from the likes of Hewlett, some professional, long-term providers were established who could produce reliable quality. These books looked as good as anything you could buy; they weren't some quirky DIY effort.

Thirdly, the switching of costs from purchase to production established a viable economic model that is applicable for other open approaches. They could not be dismissed as unsustainable.

These three elements lay the foundation for their adoption and overcome many of the reservations or objections raised. Now the challenge is to build from this base and start doing the really interesting stuff. As with LMSs, open textbooks offer an easy route to adoption. Exploration around open pedagogy, co-creation with students, and diversification of the curriculum all point to a potentially rich, open, edtech ecosystem, with open textbooks at the centre. However, the possible drawback is that, like LMSs, open textbooks may not become a stepping-stone on the way to a more innovative, varied teaching approach but, rather, may become an end point in themselves.

What I like about open textbooks is that they don't seek to remove the human element from education. They make education more affordable, flexible, and accessible, but still essentially human. Maybe that's why they don't attract the attention of venture capitalists.