AI in education – reality, uses, risks & ethics

Following on from previous talks in the seminar series I’m convening at the Open University, my colleague Wayne Holmes gave the last one, on artificial intelligence. I find AI perhaps the thorniest topic in educational technology. Firstly, how much of it is hype? Secondly, what can it do usefully and realistically in education? And thirdly, perhaps most significantly, what are the potential negatives?

These three questions are at the core of ed tech in 2018, and nowhere are they more prevalent than in AI. So I was particularly grateful to have someone who knows what they’re talking about guide us through them. On the first, Wayne pointed out how prevalent AI already is in so much of everyday life. It’s not a future technology, he argues, but one that has arrived over the past decade, from Siri to games, although we often don’t think of these as AI. He highlighted the distinction between ‘general AI’ and ‘narrow AI’ – the former is being human-like across tasks and is a long way off, whereas the latter is targeted at a specific function and is already prevalent. I think this distinction is important for discerning hype from reality.

Wayne gave a good overview of AI systems currently in use in education, from intelligent tutoring systems to chatbots. He ended by addressing the risks, including embedded racism in ‘intelligent’ law enforcement tools and Facebook’s move into personalised learning. The role of ethics in AI is now essential, and he highlighted the intersection of education, big data and algorithms, each of which has its own set of ethical considerations, plus a particular set when we combine them. It is this kind of analysis that is missing from so much of computing education, tech start-up coverage and ed tech discussion.

Anyway, his excellent presentation is below:

It’s all about me

fishface
(This is a picture of a fish. I don’t know why it’s here either)

Like many of you, I get asked to do bits of ‘scholarship on the side’ – webinars, interviews, podcasts etc. These seem to have come in a burst recently, after not much in the preceding six months. Some of them are part of interesting series, so partly because you may find them interesting, and partly as a means of collecting them for my own purposes, here is a list of recent ‘other stuff’:

Open Education: What Now? – This was a webinar for European Distance Learning Week, along with Catherine Cronin. Although we didn’t have time to plan it, the presentations from Catherine and me complemented each other very well as we explored aspects of open scholarship and some of the tensions it now faces.

Edutechnicalities podcast – created by Rolin Moe. It is part of a really excellent series, and he also developed a nice matrix on digital scholarship activity on the back of this.

Openness in Education – interview with Suzan Koseoglu & Aras Bozkurt in eLearn Magazine

Identifying Categories of Open Educational Resource Users – the IJOER took a chapter I’d written with OER Hub colleagues, republished it (because it was CC licensed) and reformatted it to make it more interactive. I think this is a nice example of what can happen with openly licensed articles.

25 Years of Ed Tech talk – I gave a talk based on my 25 Years series at a recent OpenTEL day at the OU. It was my first go at putting this into a presentation, so it’s a bit rough around the edges. The video covers the whole day, so you can skip mine and go on to more interesting ones.

Gettin’ Air Podcast – Terry Greene hosts a great radio/podcast series. The one he did with me was a lot of fun; he surprised me by calling up old blog posts and we riffed off these, which made a nice change from the usual project promotion stuff.

Aspects of open – fieldcasting

Image of team in a field, being filmed

As part of the OU’s 50th anniversary next year I’m going to be giving a lecture on aspects of open education. As well as the obvious, I think there are means of opening up education that get overlooked or forgotten in the new interpretations.

One example is virtual, or remote, field trips and laboratory work. At the OU we have an assortment of such approaches, including the OpenSTEM Labs, a virtual microscope, and fieldcasts. The last of these is the focus of this post. We have always run residential schools, but these are increasingly costly and difficult for students to attend, and there can be issues around accessibility. So the team of Philip Wheeler, Julia Cooke and Kadmiel Maseyk set out to address this. As they state, their challenge was: “How do we convey the interest, excitement and challenges of finding things out in the field when students are miles away?”.

Virtual field trips are one answer; for example, the OU has developed a Virtual Skiddaw app. Another approach involved broadcasting live from a field trip with academics, split across different segments. The students at home could then devise hypotheses and interact with the academics on site, engaging in real-time discovery. Philip puts it like this: “We separated the key pedagogic element of fieldwork (essentially learning alongside students in an inherently variable/uncertain environment) from the actual physical act of being outside. Retaining the former was important, the latter less so.”

The technology set-up wasn’t without issues, and it took someone with the expertise of Trevor Collins to pull all the bits together. But the tech is at a stage where we can do this at high quality (rather than just one person on their iPhone), without needing to call in the BBC and break the bank. That makes it a much more viable option for a range of courses. With about 80 students in each session, the feedback was almost universally positive: students found it interesting, fun and valuable to their studies, and felt they were genuinely involved.

As well as being a neat, practical, beneficial application of technology, the reason I like this approach is that it reminds us that ‘open education’ is not always about licences. These field trips aren’t ‘open’ in the sense that anyone could join them (they are available only to registered students on the course). But they are definitely about opening up the experience of field trips, and more significantly the engagement with the scientific process. I think these very real and powerful applications of what openness in education means sometimes get overlooked in the more content-focused, Silicon Valley flavoured interpretations.

Asking the right questions

I mentioned previously that I’m hosting a series of seminars at the Open University, with the intention of showcasing our own expertise (internally and externally) and also getting us as an institution to engage with current ed tech developments.

Last Thursday my colleague Prof Bart Rienties looked at analytics – not so much real-time analytics, but rather digging into large data sets across multiple courses to explore what it is that students actually do. This is always difficult to ascertain, but particularly so for a distance education university, where you don’t get to see what they’re doing. We ask them, of course, through surveys, which are useful but not always reliable – people often forget what they actually did, over- or under-estimate time spent on certain activities, and sometimes give the answers they think you want to hear. There are problems with analysing data sets too, as you are sometimes inferring what the data means in terms of student behaviour (eg is spending a long time in the VLE a sign of engagement, or of struggling to understand?), but combined with other tools it provides some fascinating insights.

Bart framed his talk around ‘6 myths’ we have regarding OU student behaviour:

    1. OU students love to work together
    2. Student satisfaction is positively related to success
    3. Student performance improves over time (eg from first to third level courses)
    4. The grades students achieve are mostly related to what they do
    5. Student engagement in the VLE is mostly determined by the student
    6. Most OU students follow the schedule when studying

The answer to all of these is “No” (or at least “Not that much”). You can watch the talk below to see Bart expand on them, and it’s not my place to elaborate on his work. But I’ve been doing a lot of thinking about these since the talk. Initially they might be deemed a concern, but perhaps they also highlight positive behaviour. For example, student behaviour is largely determined by what we set out in the course (no 5). That seems like an effective outcome of good learning design, particularly for distance education students. Similarly, while students don’t slavishly follow the course schedule (no 6), with many studying ahead of or just behind the calendar, this can be framed as part of the flexible (and accessible) design. Some online courses from other providers are very strict – you have to do task X on Monday, contribute to the discussion on Wednesday, complete the quiz on Friday, etc. For our students such a regimented approach would not work; many know they will be unavailable at certain times, for instance, and so study ahead.

And so on for each of these. What this highlights for me is that the work Bart and his colleagues do at the OU is essential in getting us to ask the right questions. We are all guilty of making assumptions about student behaviour, I think. The kind of analysis undertaken here is not seeking to automate any process or remove people from the system (often the fear of an analytical approach), but rather encourages us to dig deeper into what is happening in the very real lives of our students. And from this we can design better courses, support mechanisms, assessment, etc.

Here is Bart’s talk:

Live like common people

Everybody's doing it

You can file this under “I know you know this already, but let me rant”.

“Normal people”, “the real world”, “the university of life”, “ivory towers” – I have a list of cliches people use when talking about academia and higher education, which cause a mental (if not actual) eye-roll and immediately invalidate the point being made. See the comments section of any article about universities in the Guardian or Daily Mail, and it is only a matter of time before one of these pops up, giving the poster grounds to dismiss any claim. They used to just irritate me, but I’ve come to see them as not merely indicative of laziness, but as something rather more sinister.

They are wrong-headed in a number of ways. Firstly, there is the suggestion that people who work in higher education don’t lead ‘normal’ lives. That they don’t worry about money (because academics are phenomenally well paid, as we all know), job security (precarity is so not a thing in universities), families (we reproduce in laboratories), health (your h-index conveys immunity), etc. This is, of course, laughable (hence hashtags such as #professorslooklikethis), but more concerning is that it is a means of ‘othering’ academics. We are not ordinary people, therefore we don’t need to be treated like people at all.

Secondly, what constitutes ‘ordinary’, ‘real life’, etc? My life is characterised by its fabulous ordinariness – I watch sports, go to the cinema, walk my dog, live in a bland modern house and eat more pie than I should. But even if that were not so – if I were a fifth-generation Oxford don who spent their weekends leading a medieval dance troupe and published only in Latin – that would still be a valid, and, in the world I inhabited, a real existence. It is not as if we live in the industrial age, where the ‘real world’ might mean working in mines versus the aristocratic lifestyle. Is making a film, programming software, doing social media marketing, or being retired living in the real world? We live in a highly diversified working environment, and one suspects there are now more people not doing ‘real work’ (however that is defined) than there are doing it.

Thirdly, aren’t we all supposed to be extraordinary these days? That’s what all those lifestyle ads tell me. Many things we now regard as part of the social norm were deemed weird, and definitely not ordinary, in their day. Most impressionist paintings, now seen as part of the mainstream canon of art history (and even a bit bland), were truly shocking when first exhibited. To restrict approval and action to only those activities enclosed within the current definition of normal constrains development for all of us. I grew up in Thatcher’s 80s – it was horrible; the social hegemony was stifling. I love that so many different types of lifestyle are now the norm, that there is diversity across a range of aspects (sexuality, race, family structure, work, hobbies) in any road you go down.

Lastly, to say that we are only concerned with what has practical utility at this moment is to close ourselves off from a lot of human experience. Not everything will have direct application now, and quite a lot might never have direct use, but as soon as you start applying that razor to society, much that we currently cherish would be lost.

When someone uses these terms, then, it’s worth asking: “who else do they exclude in their definition of the real world?” and “what are they hoping to control and shut down by dismissing these people?” The answers to these questions will demonstrate why the use of such terms is not just reaching for the nearest cliche, but indicative of a more sinister mindset. In short, breaking news – this is real life.

Bruno’s Last Walk

In the autobiography I will never write, a chapter will surely be devoted to the dogs of my life. From the Jack Russell who used to push me around in a trolley when I was a baby, to the cross-breed rescue dog my wife and I got as an obvious, though unbeknown to us, trial run for having a child of our own. Amongst these, standing as the most loyal and singularly devoted, will be Bruno, the Staffordshire Bull Terrier I adopted from Cardiff Dogs Home three years ago.

As some of you may know, this was just after my divorce. I wasn’t sure I could manage a dog on my own, but the inevitable pain of the marriage break-up had been exacerbated by losing a previous dog to cancer and sharing custody of Pip (who peeks out in the banner of this blog). So I opted for an older dog that might not need as much attention as a pup. Bruno was 11 years old and had been in the home for over six months. He was a bit deaf and, being a Staffie, not a very popular breed for people with children. They didn’t know anything about his past, but there were signs it hadn’t been a good one. A couple had adopted him previously, only to be approached by two menacing blokes who insisted he was their dog and they should return him immediately. Feeling intimidated, they took him back to the dogs home, only for the declared owners to leave him there unclaimed. So he languished in the kennels for months. An old fella who found himself unwanted – in my self-pitying state it felt like a kindred spirit. And indeed he was; we enjoyed doing the same things – eating, napping on the sofa, snoring, going to the pub.

From the day I brought him home, he never wanted anything other than to be by my side. I would work and he was there by my feet; I watched TV and he curled tightly in next to me; in the morning I was awoken by him bashing the door open and jumping on the bed. I had never considered owning a Staffie before, put off by their reputation as the drug dealer’s dog of choice. But like most Staffies he loved people, wasn’t too bothered about other dogs and was an ideal pet. I made a deal with him on the day I adopted him: I was at home most of the time, but he would have occasional longish days on his own (or with a dog walker) and he would go into kennels when I went away. It may not have been perfect, but it was better than being in the dogs home. He assented to the terms readily, and was never a problem (apart from that time he ate a tub of Quality Street, but we can overlook that as it was Christmas). My dog walker regularly left me notes about how much she adored him.

Last year he suffered two strokes, losing the use of his back legs and his remaining sight. He recovered his ability to walk (if prone to the occasional collapse), but was now completely blind and deaf. Still, as long as he had a slow walk every day and could find me in the house, he was happy enough. Unfortunately his condition deteriorated, with a form of dementia setting in, occasional vomiting, and passing blood. He became increasingly anxious without me, even if my daughter was with him, would become lost in the house, wake up in the night in distress and forget his toilet training. I spent the last six months working downstairs on the sofa so he could be next to me and settle down. At ALT-C I got a call from the dog sitter that he was very distressed, and I ended up commuting to Manchester daily so I could be with him at night. Any one of these complaints in isolation was manageable, but ultimately it was the burden of their relentless accumulation that sapped his spirit.

So, at the end of last week I made the decision to have him put to sleep today. A definite date is a terrible knowledge to possess, ticking off his last night on the sofa, his last meal, like a macabre advent calendar. And today I took him on his last walk and said goodbye. It was time, I think; as Howard Jacobson says in his piece The Dog’s Last Walk, he had the sense of “something once strong ebbing from him, the dignity of his defeatedness.” It was an exclusive relationship – he was the first dog that was truly mine alone, and I was the only human he was remotely interested in. He fulfilled an important role for me, and helped me transition to a new, contented phase in my life. In return, I like to think I gave him three years at the end that removed the stain of mistreatment and rejection that had preceded them. I hope that was a fair exchange.

The mathematics of cruelty

Abacus

Warning: this is a naive politics post, best avoided by those who really know their stuff. I expect someone has written all of this much better than I have and it’s been dismissed already. But, hey, it’s my blog.

A while ago I read Laurence Rees’s comprehensive The Holocaust: A New History. It is no longer a cliche or an invocation of Godwin’s Law to say that the direct parallels to today’s climate in the US, UK, much of Europe, Australia, Russia and elsewhere are glaringly obvious – particularly in the rise of Nazism and its route to power within a democratic framework. We always used to ask “how could that happen in Germany?”, but what we’ve seen, especially with Trump and Brexit, is that the answer lies in the alignment of different segments of society. The fundamental driver for these right-wing resurgences seems to be a deliberate, performative cruelty. A pretty reliable compass for the Trump White House or the Brexit negotiations is “which is the cruellest option?” To help me get my head around where we are (and to offer myself some hope for the future), I’ve begun to think of it in terms of portions of cruel behaviour that need to align, and, from this, what we need to do to prevent that happening.

Let’s break down the mathematics of cruelty then. The exact proportions of each group may vary, and there will be some overlap. It is an overly simplistic view, but it has the merit of not being tied to one context, so it is applicable in different settings. As a rough guide, I think we have:

  • The naturally cruel – these people just enjoy being cruel; it’s their main driver in life. Think Katie Hopkins. Let’s say this is about 10% of the population usually. Most people aren’t cruel most of the time. Usually this group is dispersed, and may not even bother to vote, since feeling rejected by the political system is part of their identity. Maybe one day we’ll find a way to reach this group and make them nicer, but generally they are not our target.
  • The latently cruel – these are people who aren’t always vicious, but it can be brought to the surface readily. It can be couched in terms of them being the victims, so needing to be cruel to survive, or by playing to fears around culture, immigration or safety. Once aroused they become indistinguishable from the first group, but often other interests will take priority over cruelty. This could be 10-15% of the population. You’ve probably got a family member like this.
  • The cruel for self-interest – this group are primarily worried about their own jobs, welfare, housing, healthcare, etc. They can be tricked into thinking cruelty is the best (or even only) route to safeguarding these. You just need to be cruel this once and it’ll all be fine. Good centrist politicians are very effective at appealing to this group and steering them away from overt cruelty by playing to discernible benefits. Again, let’s assume 10-15% of the population.
  • The intellectually cruel – these are the people who vote as a form of protest. They aren’t really cruel, but they are more interested in making an intellectual point (eg those on the left who voted for Brexit as an anti-neoliberal protest), which allows them to ignore the more blatant cruelty on display. They probably don’t expect to win either, but want a good debating point afterwards. Let’s say 5% in this group.

If you get all of these to align around a single issue or person, then you’re up to around 40-50% of the vote – enough to form a government or win a referendum once you factor in people who don’t vote, or a divided opposition. It is a rare set of circumstances that brings these together, which is why the past few years have come as a surprise. But it can happen. Boy, can it happen. If we accept this broad classification, then there are a number of conclusions we can draw from it.

Firstly, we shouldn’t trick ourselves that it can’t happen here, that we are beyond this now. The recent success of the far right can in part be attributed to complacency and smugness. We thought that wasn’t who we were now, and allowed these factions to coalesce by not being vigilant enough.

Secondly, it means you don’t have to convert half the population to your side. You simply have to find ways that prevent a lightning rod forming that attracts all these groups to one focus. The aim is primarily to steer those middle two groups towards a more reasonable, less cruel course of action.

Thirdly, structures and policy in society and political systems can be implemented to prevent their alignment. This is going on the assumption that a political system based primarily on cruelty is a bad thing. From an ethical and humanitarian perspective that is obviously the case, but even from a more mercenary perspective it is true – such a culture forces people into extremes, and a fundamentally fractured country or society does not function well in the long term.

So, what can we do about it? I think there are two courses of action that follow from this. The first is to prevent the normalisation of cruelty. This particularly influences the latently cruel group. You don’t invite them on TV, and if you do, you challenge them so thoroughly that they look foolish. The BBC’s obsession with Farage, born of a fear of looking like it didn’t understand the working class, and its abject failure to hold Brexiteers to account, are a prime example of how not to do this. Similarly, without Fox News, Trump doesn’t exist. Our media culture needs to be more accountable and responsible.

The second route is around political structures. The system should be constructed in such a way as to prevent the alignment of these factions. For example, the Brexit referendum was a clear focal point, but if you were going to have it, it should have been held around a concrete Brexit proposal. For a start, as we’ve seen, the Brexiteers cannot create one; but had they done so, it would have focused the argument on specifics. The self-interest group would then have been moved away from a vague promise of better things towards very specific and factual arguments. Similarly, the biggest flaw in the US system is that Trump could become the Republican nominee in the first place. For the most important job in the world, some experience should be necessary, and any junior political role would have revealed his inadequacies.

These are boring policy, governance and selection points, but through them you construct a system that prevents the mathematics of cruelty from adding up. If we survive the next few years, this needs to be our focus going forward.

Lecture capture – don’t fight it, feel it

Lecture capture would be a strange choice of hill for me to die on, since I work at a university that doesn’t do lectures and I have no experience of it. But Sheila MacNeill started a Twitter conversation about it, and I think it captures some broader ed tech issues, so here I am, weighing in with my ill-formed opinions.

  • Does it benefit students? – Students seem to like lecture capture. The evidence on whether it affects attendance is mixed, but it is useful for revision, for those who struggle to take notes for whatever reason, and for going over complicated topics. So, in general, learners who are in a lecture-based environment like it as an additional service. This should be the main factor.
  • Understand the relationship with employment – Melissa Highton has talked about how recent political issues have brought ed tech like lecture capture to the fore. Unions have a role to play here, not in resisting its implementation, but in ensuring that, when it is implemented, it is not used to undermine strike action or impact upon employment.
  • It reinforces lectures – yes, possibly, and I’ve seen this as an argument against it. But unless your institution is implementing a deliberate strategy to move away from lectures, then lecture capture is the least of the factors. Unless estates are converting lecture theatres into different types of learning spaces and extensive staff development is under way to move away from them, assume lectures are around for a while yet, and that lecture capture makes little difference either way.
  • It’s not as good as bespoke content – producing specific online learning material is probably better, but at scale this becomes difficult. Some staff will do it, but it is a considerable extra burden, which would require significant resource to realise across an institution. Also, there may be some value in the lecture having an experiential element – students will recall that the lecturer moved over here when they said this, and this was the point where they were distracted in the actual lecture, etc.
  • Use it as a stepping stone – like the VLE it is likely that lecture capture will end up being an end point in itself, rather than a step on the path to more interesting practice. With this in mind, ed tech people should work hard at the start to make it part of a bigger programme, for instance running development alongside for the type of bespoke content mentioned above, or thinking about flipped learning, or making the lecture a collaborative resource, etc.
  • It’s boring – we should be doing more exciting things in education! This is true, but it’s not an either/or; lecture capture is useful to students here and now. Boring but useful is ok too.
  • Evaluation is key – people have lots of views about lecture capture, often based on beliefs or anecdote. So it’s important to evaluate it in your context, particularly for questions such as “does it impact attendance?”, “do students use it?”, “how do students use it?”, “do students who use it perform better?”, etc.
  • It’s not a replacement – some of the objections are that it is a misuse of technology; that if you are producing online learning content, then pointing a camera at a lecture is like filming a play to produce a movie. But this is to misunderstand how students use it, I think. For them it is an additional resource which complements the lecture; they may miss the occasional one knowing they can catch up, but in general they still value lectures. It’s like students who record the audio of a lecture: they now have a back-up.
  • We like it – at ALT-C last week, lots of people commented on how much they appreciated the keynotes and other talks being live streamed. We didn’t say that live streaming prevented a move away from the conventional keynote, or that it reduced attendance, or undermined labour. So, if we value it, why shouldn’t students?

I appreciate that it’s a complex issue, and no technology should be seen as inevitable, but there is a certain logic to lecture capture. As we found in the OOFAT report, technologies that link closely to university core functions tend to get adopted. Lecture capture is about as close as you can get. In this case educational technologists can help it be implemented in a more sustainable, interesting and sensitive manner. So, in the words of Primal Scream, don’t fight it, feel it.

[UPDATE: Here are a few references people pointed me to on Twitter, giving a mixed picture of effectiveness:

Witton, G. (2017), The value of capture: Taking an alternative approach to using lecture capture technologies for increased impact on student learning and engagement. British Journal of Educational Technology, 48: 1010–1019. doi: 10.1111/bjet.12470

Edwards, M.R. & Clinton, M.E. A study exploring the impact of lecture capture availability and lecture capture usage on student attendance and attainment. High Educ (2018). https://doi.org/10.1007/s10734-018-0275-9

Nordmann, E., & Mcgeorge, P. (2018, May 1). Lecture capture in higher education: time to learn from the learners. https://doi.org/10.31234/osf.io/ux29v]

25 Years of Ed Tech: Themes & Conclusions

25

Now that I have completed the 25 Years of Ed Tech series (which was actually 26 years, because maths), I thought I’d attempt some synthesis of it and try to extract some themes. In truth, each of these probably merits a post of its own, but I wanted to wrap the series up before the 25-year anniversary of ALT-C next week. Plus, tired.

No country for rapidity – one of the complaints, particularly from outsiders, is that higher ed is resistant, and slow, to change. This is true, but we should also frame it as a strength. Universities have been around longer than Google, after all, and part of their appeal is their immutability. This means they don’t abandon everything for the latest technology (see later for what tech tends to get adopted). If you’re planning on being around for another 1000 years then you need to be cautious. We didn’t close all our libraries and replace them with LaserDiscs in the 90s. As the conclusion of the Educause piece I wrote stated, “it’s no game for the impatient”.

Historical amnesia – I’ve covered this before, but one of the characteristics of ed tech is that people wander into it from other disciplines. Often they don’t even know they’re now in ed tech – they’re doing MOOCs, or messing about with assessment on their psychology course – and they may spend a bit of time doing it and then return to their main focus. Ed tech can be like a holiday resort: people passing through from many destinations, with only a few regulars remaining. What this means is that there is a tendency, seen repeatedly over the 25 years, for ideas to be rediscovered. A consequence of this is that every development is treated as operating in isolation, instead of building on the theoretical, financial, administrative and research foundations of previous work. For example, you probably don’t get OER without open source, and you don’t get MOOCs without OER, and so on.

Cycles of interest – there are some ideas that keep recurring in ed tech: the intelligent tutor, personalised learning, the end of universities. Audrey Watters refers to zombie ideas, which just won’t die. Partly this is a result of the aforementioned historical amnesia, and partly it is a result of techno-optimism (“This time it really will work”). It is also a consequence of over-enthusiastic initial claims, which the technology takes 10 years or so to catch up with. So while intelligent tutoring systems were woefully inadequate for the claims made for them in the 90s, some of those claims are justifiable in 2018. Also, just conceptually, you sometimes need a few cycles of an idea to get it accepted.

Disruption isn’t for education – given its dominance in much of ed tech discourse, what the previous trends highlight is that disruption is simply not a very good theory to apply to the education sector. One of the main attractions of higher ed is its longevity, and disruption theory seeks to destroy a sector. That it has failed to do this to higher ed, despite numerous claims that this is the death of universities, suggests it won’t happen soon. Disruption also plays strongly to the benefits of historical amnesia, which is a weakness here, and the cycles of interest suggest that what you want to do is build iteratively, rather than sweep away and start anew. There are lots of other reasons to distrust the idea of disruption, but in higher ed at least, it’s just not a very productive way to innovate.

The role of humans – ed tech seems to come in two guises: helping the educator or replacing them. If we look at developments such as wikis, OER, CMC, blogs, even Second Life, their primary aim is to find tech that can help enhance education, whether for a new set of learners, to realise new approaches, or sometimes just to try some stuff out. Other approaches are framed in terms of removing human educators: AI, learning analytics, MOOCs. They don’t have to be – learning analytics, for example, can be used to help human educators better support learners – but often the hype (and financial interest) is around the large-scale implementation of automated learning. As I mentioned in a previous post, education is fundamentally a human enterprise, and my sense is that we (at least those of us in ed tech in higher ed) should prioritise the former type of ed tech.

Innovation happens – for all the above (change happens slowly, people forget the past, disruption is a bust, focus on people), the survey of the last 25 years in ed tech also reveals a rich history of innovation. Web 2.0, bulletin board systems, PLEs, connectivism – these all saw exciting innovation, and also prompted questions about what education is for and how best to realise it.

Distance from the core – the technologies that get adopted and embedded in higher ed tend to correlate closely with core university functions, which we categorised as content, delivery and recognition in our recent OOFAT report. So VLEs, eportfolios, elearning – these kinds of technology relate very closely to those core functions. The further you get from them, the more difficult it becomes to make the technology relevant and embedded in everyday practice.

So, that’s really the end of the series.

25 Years of Ed Tech: 2018 – Critical Ed Tech

Farväl - Goodbye

[The last in the 25 Years of Ed Tech series]

I’ll do a conclusion and themes post (if I can be arsed) for the 25 Years series, but now we reach the end. For this year, I’ve chosen not a technology, but rather a trend. I see in much of ed tech a divide, particularly at conferences. On one side are the gung-ho, Silicon Valley, techno-utopian evangelists. Theirs is a critical-reflection-free zone, because the whole basis of the industry is selling perfect solutions (to problems they have concocted). This is where the money is.

In contrast to this is a developing strand of criticality around the role of technology in society, and in education in particular. With the impact of social media on politics, Russian bots, (actual) fake news, Cambridge Analytica, and numerous privacy scares, the need for a critical approach is apparent. Being sceptical about tech is no longer a specialist interest. Chris Gilliard has an excellent thread on the invasive uses of tech – not all of them educational, but there are educational implementations for most of them.

One prominent strand of such criticality is suspicion about the claims of educational technology in general, and the role of software companies in particular. One of the consequences of ed tech entering the mainstream of education is that it becomes increasingly attractive to companies who wish to enter the education market. Much of the narrative around ed tech is associated with change, which quickly becomes co-opted into broader agendas around commercialisation, commodification and massification of education.

For instance, the Avalanche report argued that systemic change is inevitable because “elements of the traditional university are threatened by the coming avalanche. In Clayton Christensen’s terms, universities are ripe for disruption”. In this view, education – perceived as slow, resistant to change and old-fashioned – is ripe for disruption. Increasingly, then, academic ed tech is reacting against these claims about the role of technology, and is questioning the impact on learners, on scholarly practice, and the wider implications. For example, while learning analytics have gained a good deal of positive coverage regarding their ability to aid learners and educators, others have questioned their implications for learner agency, monitoring and ethics.

One of the key voices in ed tech criticality is Neil Selwyn, who argues that engaging with the digital impact on education in a critical manner is a key role of educators, stating “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique”. This includes being self-critical, and analysing the assumptions and progress of movements within ed tech. It’s important to distinguish between critique as Selwyn sees it and simply being anti-technology. These are not pro- and anti-technology camps – you can still be enthusiastic about the application of technology in particular contexts. What it does mean is being aware of the broader implications, questioning claims, and advocating (or conducting) research about real impacts.

Ed tech research, then, has begun to witness a shift from advocacy, which tended to promote the use of new technologies, to a more critical perspective. This is not to say that there is enough critical thought around – the drive for venture capital still seeks to eradicate it – but this series is about the moments when things became significant in ed tech, and 2018 marks the point when critical perspectives found a more receptive audience. If the evangelist and critical approaches represent two distinct groups, then sitting in between them is the group we should all be focused on in ed tech – the practitioners in universities, schools and colleges who want to do the best for their learners. And that seems a fitting place to end the series.
