25 Years of EdTech – 1994: Bulletin Board Systems

Continuing my 25 years of Ed Tech reflections, it’s now 1994. The web is just about to break in a big way, and the internet is gaining more interest. One of the technologies that old ed tech hacks like me go all misty-eyed over is the Bulletin Board System. These hosted the nascent online discussion forums, and mark education’s first real awareness of the possibilities of the internet. They often required specialist software at this stage, were text based, and because we were all using expensive dial-up, the ability to sync offline was important.

At the OU (I was yet to join) they were experimenting with a couple of systems. While people such as Robin Mason could see their potential, they were still viewed as a very niche application. At the time the university needed to help people with the whole getting-online process, dealing with unfamiliar software and advising on how to communicate online. That is a lot of academic real estate to use up in a course about, say, Shakespeare. So their application was reserved for subjects where the medium was the message. For distance education though the possibilities were revolutionary – they had the potential to remove the distance element. Previously, the only ways students communicated with each other were at summer school and face-to-face tutorials. If we want to talk about the OU becoming a university of the cloud, then this is where it started.

The lessons from BBS are that some technologies have very specific applications, some die out, and others morph into a universal application. BBS did the latter, but in 1994 most people thought they would fall into one of the first two categories. What was required for them to become a mainstream part of the educational technology landscape was the technical and social infrastructure that removed the high technical barrier to implementation. More on that in later posts.

[UPDATE – Will Woods reminded me that the early OU BBS was called CoSy]

The Digital Scholar – ebook file

I’ve been doing some writing on revisiting my 2011 book The Digital Scholar. I’ve also got a couple of presentations planned around it. But on checking I note that the imprint of Bloomsbury that published it, Bloomsbury Academic, is no longer functioning, and the titles have been rolled into the main Bloomsbury catalogue. My previous links to the free version don’t work any more, and you have to dig pretty hard to find the free version on their site. I think open access publishing was something they experimented with when Frances Pinter was there, but now that she has moved on to Knowledge Unlatched, they’ve quietly abandoned it.

Of course, the benefit of open access is that the destiny of my book is in my own hands, and needn’t die when a publisher changes tack. I own it. It’s strange that this is not the norm, I know. So, this post is really just a means of archiving my own book (on my own domain) for future reference. And of course, a reminder to read it if you haven’t done so.

Here it is then (only PDF & epub I’m afraid):
The Digital Scholar PDF

Digital Scholar Epub

25 years of edtech – 1993: Artificial Intelligence

Cyborg

This year marks the 25th anniversary of ALT. I’m co-chairing the ALT-C conference with Sheila MacNeill, which celebrates this in September. This got me thinking about the changes I’d seen in that time, and so I’m going to attempt a series of blog posts that use this as a vehicle to explore the developments in ed tech over the past 25 years. It may end up like Sufjan Stevens’ project to write an album for every state, and I won’t get past two or three, but let’s give it a go. Also, in order to fit it in, there may be some twisting to fit a tech into a year, and it’s not necessarily the year the technology was invented but rather when I came to recognise it. So, with those caveats, let’s set off. It’s 1993, I’m a PhD student in Middlesbrough, it’s just before Nirvana and Oasis break, the Stone Roses and Madchester have peaked… (screen goes wavy)

I’m starting with Artificial Intelligence. This is partly because in 1993 I was studying for a PhD in AI applied to aluminium die casting (I know you want to read my thesis). But it’s also partly to demonstrate the cyclical nature of ed tech. In 1993 AI was going through its second flush of popularity, following on from the initial enthusiasm of the eighties. The focus was largely on two contrasting approaches: expert systems, which tried to explicitly capture expertise in the form of rules, and neural networks, which learnt from inputs in a manner analogous to the brain. The initial enthusiasm for Intelligent Tutoring Systems had waned somewhat by ’93. This was mainly because they really only worked for very limited, tightly specified domains. You needed to predict the types of errors people would make in order to provide advice on how to rectify them. And in many subjects (the humanities in particular), it turns out people are very creative in the errors they make, and, more significantly, what constitutes the right answer is less well defined.
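The error-prediction problem can be sketched with a toy example. This is a hypothetical illustration, not taken from any real Intelligent Tutoring System: the feedback is only ever as good as the hand-built library of anticipated mistakes.

```python
# Toy sketch of the ITS approach described above (hypothetical, not from
# any real system): remedial feedback depends on a hand-built library of
# predicted errors for the question "what is a - b?".

def feedback(a: int, b: int, answer: int) -> str:
    """Match a learner's answer to a - b against predicted error patterns."""
    if answer == a - b:
        return "Correct."
    if answer == a + b:
        return "You seem to have added instead of subtracting."
    if answer == b - a:
        return "You seem to have subtracted the numbers the wrong way round."
    # The limitation: any error outside the predicted set yields no advice.
    return "Incorrect, but I can't diagnose why."

print(feedback(52, 17, 35))  # Correct.
print(feedback(52, 17, 69))  # added instead of subtracting
print(feedback(52, 17, 45))  # a 'creative' error the rules can't explain
```

The last case is the humanities problem in miniature: once learners’ errors (or indeed valid answers) stop being enumerable in advance, the rule table stops helping.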

Expert systems, though, were also pushed as teaching aids – if you capture the knowledge of an expert in, say, medical diagnosis, then this forms a useful teaching aid. My experience in developing an expert system to diagnose problems in aluminium die casting is probably symptomatic of the field: it sort of did the job, but didn’t really catch on. The problem was twofold: the much-quoted ‘knowledge elicitation bottleneck’ and the complexity of the real world. The first meant getting the knowledge from experts in a format you can use. Apparently you can’t just drill a hole in their heads and tap it out like siphoning petrol from a car. Experts don’t always agree, and making expertise explicit is notoriously difficult. What characterises an expert is that they ‘just know’. The complexity issue means you can’t predict the way things work out. For example, we characterised typical flaws (and provided a very nice database of images). But sometimes these co-occur, sometimes they look different, sometimes the causes can be multiple.
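As a rough illustration of the rule-based approach (the flaw names and rules here are invented for this sketch, not taken from the actual die-casting system):

```python
# Minimal sketch of a rule-based diagnostic expert system.
# Hypothetical rules: each maps a set of required symptoms to a flaw and cause.
RULES = [
    ({"visible seam", "incomplete fill"}, "cold shut", "metal temperature too low"),
    ({"surface pinholes"}, "gas porosity", "trapped gas during injection"),
    ({"internal voids", "sink marks"}, "shrinkage", "inadequate feeding on solidification"),
]

def diagnose(symptoms):
    """Fire every rule whose required symptoms are all observed."""
    return [(flaw, cause) for required, flaw, cause in RULES if required <= set(symptoms)]

# Crisp rules work when the casting obliges...
print(diagnose(["visible seam", "incomplete fill"]))
# ...but co-occurring or partial symptoms expose the brittleness: here only
# the porosity rule fires, and the partial shrinkage evidence is ignored.
print(diagnose(["surface pinholes", "sink marks"]))
```

The second call shows the complexity problem in miniature: real castings present overlapping, ambiguous symptoms that all-or-nothing rules handle badly.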

AI faded after this for a while, only to resurface with a vengeance in the past five years or so. I may revisit it later, so I won’t say much about the current instantiation. What is interesting, I think, is that the claims are much the same (although proponents often think they are making them for the first time), and some of the problems remain. However, what has really changed is the power of computation. This helps address some of the complexity, because multiple possibilities and probabilities can be accommodated. In this we see a recurring theme in ed tech: nothing changes while simultaneously everything changes. AI has definitely improved since ’93, but equally some of the fundamental issues that beleaguered it still remain.

Edtech & Symbols of Permanence

Castell Coch 2

I understand why tech companies like education, but I don’t understand why they like it so much. Obviously, there’s money: the global education market is estimated at $4.4 trillion. Get a big chunk of that market and you can buy a football team. And there’s the perception that it’s slow and ripe for change, which appeals to both investors and the egos of developers. These are both undoubtedly significant factors. But I’ve come to suspect there’s something else in the psychological mix – a form of legitimacy and permanence. I’m going to try to explain this by way of a long-winded detour into the history of my local castle. But it comes together, so bear with me.

Castell Coch – a brief history

Castell Coch (Welsh for Red Castle) is situated above the village of Tongwynlais, on the outskirts of Cardiff. The ruins of an earlier 11th-century castle and the surrounding land were acquired in 1760 by John Stuart, 3rd Earl of Bute. His great-grandson, John Crichton-Stuart, the 3rd Marquis of Bute, inherited the castle in 1848. The landed estates, and particularly ownership of the Cardiff docks, which had become the busiest coal-exporting docks in the world, made him one of the wealthiest men in the world. A keen medievalist, he employed the architect of the High Victorian style, William Burges, to reconstruct the castle as a summer hunting retreat.

In collaboration with the Marquis, Burges developed a design in the style of medieval fairy-tale castles. The exterior was constructed from 1875 to 1879. Despite its intended use as a hunting lodge, the castle was not used often, and it is largely viewed as part of the Victorian fashion for follies.

This can be considered a belated example of what Peter Borsay termed the English Urban Renaissance. Borsay argues that after 1700 many English towns underwent a renaissance period, characterised by uniform design, street planning, a growing middle-class population and increased leisure facilities such as assembly halls, public gardens and theatres.

A number of conditions then arose that saw towns become less focused on their rural position and more on their own services. Borsay provides the role of leisure as an example of such a shift in identity and function. The urban renaissance was largely unseen in Wales, however, which lacked major industry prior to the nineteenth century. Towns such as Brecon acted as agricultural market towns. The geography made transport difficult between many Welsh settlements, which further limited their trade.

However, the features Borsay sets out as characteristic of the urban renaissance can be seen in nineteenth-century Cardiff, accompanied by population growth. Allied with this population growth are many of the public amenities Borsay cites as characteristic of an urban renaissance, for instance a Gas Act in 1837 for public lighting and a waterworks act in 1850, as well as signs of leisure such as a racecourse in 1855. This is contrasted with the experience of the poor in Cardiff, where, following the Poor Law of 1834, a workhouse was developed in 1836. This soon proved inadequate for the expanding population, and a new workhouse was constructed in 1881.

Castell Coch 4

Castell Coch as representation of power

This provides a context within which Castell Coch was constructed, and within which it could be interpreted by the local population. This was a time of great social upheaval – the trade union movement was a significant force in South Wales, and the Rebecca Riots of 1839-1844 in West Wales had demonstrated that social unrest could flare up violently. The political activism of the Chartists in the South Wales coal fields similarly highlighted that the feudal order was in decline. These social upheavals caused great anxiety amongst the elite, with the railway merchants proclaiming that ‘the late Chartist and Rebecca riots sufficiently evince that Wales will become in as bad a state as Ireland, unless the means of improvement are given to it’.

In this context then, the Castell becomes not simply an indulgence of an interest in medievalism, but a deliberate attempt to lay claim to the historical immutability of the position of the aristocracy. This is further reinforced by the siting of Castell Coch on an existing ruin. The original site dates back to the Normans, and was rebuilt in 1277 to control the Welsh. As Wales faced another rebellion, the reconstruction of Castell Coch can be interpreted as a signal of the longevity of power. The decision by Burges to incorporate elements of the earlier castle, particularly noticeable in the cellar, reinforces this connection with past representations of power.

Although the Marquis could point to several generations of wealth, the family were not part of the landed gentry dating back to Napoleonic times. Philip Jenkins argues that in South Wales there was a shift in the gentry from ancient landed families to a new landed elite from approximately 1760. These new families sought to establish themselves as an ‘ancient gentry’:

For the new ruling class, newness was politically damaging, while antiquity could be a considerable asset. If they could only assert their historical roots they could claim to be part of a natural and immemorial rural order.

In this context, the faux-romantic style can be interpreted as an extension of power. By evoking romantic notions of the medieval age, and building on the site of a Norman castle, the message of Castell Coch is one of the permanence of power. The immutability of the aristocracy is presented as both reassuring and unquestionable. Tom Williamson highlights this conscious manipulation of ‘symbols of the past’ in order to hide a very modern use of land ownership rights. For example, Susie West highlights how landscape gardens are ‘spaces deliberately removed from production’ and are now presented as aesthetic objects. Castell Coch can similarly be viewed as an artistic creation, removed from the original function of the site, in this case military defence.

Castle Coch

The ed tech equivalent

If we view the digital revolution as a social force similar to the industrial revolution, then it creates challenges to established power. What the new powers then seek to do is ally themselves with symbols of longevity. In the physical world this is castles and manor houses. In the digital world, it is education and governance. Education is often decried for being slow to change and stuck in the past, but whether they realise it or not, these are the values tech companies seek to appropriate. Education is a recognised universal good (generally). It has longevity, history, social value. Those, as much as the millions of users with dollars, are assets that tech companies seek to acquire, because, as with Castell Coch, what they do is strengthen your position. The message of Castell Coch was that physically and literally it was unassailable – which meant that metaphorically so was the position of those who owned it. It lent legitimacy to their new-found wealth, the crucial function of which is to remove questions. This is precisely what being deeply involved in education does for tech firms. We don’t question their algorithms, their ethics, their control because, look, we’re educating 20 million people in developing nations with our platform.

Of course, that doesn’t mean higher ed should eschew technology – far from it, we have a duty to ensure learners get the most from technology and to use it to teach in new ways and reach new audiences. But it shouldn’t sell itself cheap. Tech companies want something from education, its ‘symbols of the past’, so stop treating them as saviours.

(And if you want to come and see Castell Coch, give me a shout).

Diving for pearls

Mercury Close-up: Hovnatanian Crater (NASA, MESSENGER, 01/16/12)

For the upcoming REF, the OER Hub are one of the possible impact case studies for the OU. We applied for a small bit of internal funding, and last week all decamped to a cottage in Gloucestershire for five days to put in an intensive writing session. This is not a commentary on the REF, an analysis of the neoliberalisation of education or the dangers of metrics, just some reflections on that writing process (so lower your expectations).

Firstly, a dedicated (isolated) week is definitely the way to go. We had been provided with a set of documents to complete by our excellent REF advisor, Jane Seale. But without a dedicated, prolonged period to devote to these, it would have taken months to complete. Also, the intensity of focusing on only this, rather than fitting in amongst other pressing demands, meant that the quality of what we produced was greatly improved (I think). So, a week away may seem like an indulgence, but was probably more productive and efficient in the long run.

Secondly, impact in higher education research is difficult, and often indirect. The dream type of impact is that you do research and it leads to a change in government policy on health or schooling. But that’s actually quite rare. In the last REF they didn’t allow impact within higher education to count, which is especially problematic for us, as part of our aim has been to work with researchers elsewhere and build OER research capacity. This time they may be a bit more lenient, but impact on other researchers is still frowned upon.

Thirdly, we’re all collaborative and supportive in the OER community (yes we are). Claiming impact sometimes seems to require the ego and lack of shame of Donald Trump. We were solely responsible for everything that has happened and invented it all! This rather grates with the collegial, sharing network we are part of. So there is a tension in the process between needing to promote yourself and claim impact while also wanting to acknowledge the diverse, distributed nature of influence.

Lastly, we wanted to stress that the process by which we have conducted research, namely openness (through social media, open access publications, open data, our open researchers pack, open courses, etc), is as impactful as the research itself. I feel that we made a good stab at this, but I wonder how much it will mean to assessors who come from a ‘traditional’ approach.

We’re writing this up now, and have identified lots of bits of evidence and testimonials we need to gather. Which means we may be coming to you for some input soon. I guess if I was to offer any advice, it would be to definitely try and carve out a dedicated chunk of time, to clearly work through some distinct messages you want to convey and then match these with evidence. You may need to then go through several iterations of this to find the best match of evidence to message.

Alone in Blogistan

Berlin under snow

One of the books I read last year was a happy confluence of factors. The book was Hans Fallada’s war time tale of quiet German resistance, Alone in Berlin (aka Every Man Dies Alone). It related to the political situation and rise of the right (the frothing demand of British newspapers to crush opposition to Brexit was straight from this era), a trip my daughter and I took to Berlin, and my academic interest in online communication. And it is this last element that I’ve been pondering on and off since reading it last October.

For those who don’t know the story, it follows the tale of a nondescript Berlin couple, the Quangels, whose only son dies early in the war. As an act of resistance they start writing postcards with anti-Hitler messages such as “The Führer has murdered my son. Mother! The Führer will murder your sons too, he will not stop till he has brought sorrow to every home in the world”. These are deposited in different places for strangers to find. This is their small scale stand, an “irrevocable declaration of war, and also what that meant: war between, on the one side, the two of them, poor, small, insignificant workers who could be extinguished for just a word or two, and on the other, the Führer, the Party, the whole apparatus in all its power and glory, with three-fourths or even four-fifths of the German people behind it.”

It is a largely gloomy, hopeless book, since most of the messages are handed in to the authorities immediately, through fear of being caught in possession of them. And anyone associated with the Quangels ends up detained and executed. Their campaign has no impact and causes misery. And yet the act of resistance, of communication, is itself worthwhile.

It is this central theme of what it means to communicate that resonates now. I don’t pretend to make a comparison between the Quangels (based on the real-life couple the Hampels) and tweeting your anger at the latest Trump nonsense, but rather to highlight the universality of their need to say something, to counter oppressive narratives. That’s why people still try to conduct rational debate online, to present facts, to write blogs and tweets that highlight hypocrisy in the alternative facts and noise deliberately created by those who seek to undermine our sense of reality and justice. As in Fallada’s novel, this nearly always seems futile, occasionally dangerous and largely irrelevant in the bigger scheme. But it’s the act of continuing to communicate, even into a void, that keeps us human. As Fallada puts it, “we all acted alone, we were caught alone, and every one of us will have to die alone. But that doesn’t mean that we are alone.”

Happy new year.

2017 blog review

Skynda dig hem...!

This is not an edtech review of the year (why do that, when Audrey does it better than anyone?), but rather a review of my own blogging over the year.

First up, some stats:

  • Number of posts: 50 (including this one)
  • Comments: 202 (including ones from me)
  • Visitors: 231,081
  • Visits: 2,123,507 (mainly bots plus me)

I try to blog on average about once a week, so maintained that pretty well. I don’t have a strict policy on this (eg, blogging every Thursday afternoon or something), but the rough goal does prompt me to blog on occasion when I feel there’s been a gap. And in the way of the unpredictability of such things, it is often these ‘filler’ posts which end up being the most popular. So a loose goal seems to work for me in this regard.

I don’t know quite how to interpret the stats. I know from my other, aborted blogs on films and ice hockey, which get about 20 views a year, that this is a lot of traffic. I suspect though that it’s largely bots (certainly in the visits count), generated by having a certain amount of Google juice. Even bot love is a form of love.

In terms of themes that emerged this year in my own writing (which is not necessarily representative of anything broader), I would suggest the following four.

Trying to make sense of it all – after the dumpster fire of 2016, this year lurched further into farce and insanity, so there were a few posts on what does it all mean? And in particular what does it mean for education? I toyed with the unenlightenment theme for a bit, but I don’t feel I ever really got that to a meaningful conclusion. Perhaps it didn’t have the legs in the end. The intersection and role of technology in the dystopia we’re creating was also a concern. This theme demonstrates the use of a blog to work things through for yourself as much as sharing ideas – blogging is frequently the means by which I approach a new topic or issue, as its format helps me develop my own thoughts on something. So, I’m always tremendously grateful for those who help me work this through, and especially for the generosity in interpretation of half formed ideas.

The role of open universities and openness – this theme looked at questions such as: what is the future of single mode open universities? What topics does the term open education cover? What are the ways in which openness is being deployed by universities? This is a recurring theme on this blog, as it’s pretty much my Venn diagram intersection as an academic of research interests (OER, OEP, etc) and institution (OU veteran). This year I felt however that it was a topic that came up a lot more at conferences. Maybe it’s the after-effect of MOOCs.

Study & personal metaphors – I completed my Art History MA this year with the OU, and I like to try and make connections between my study and ed tech, even if it’s very tenuous. Reflecting on being a student is always useful, and metaphors allow you to be playful in a way that formal academic writing doesn’t. More people came up to me to chat about the Barnaby post or the privilege post than ever do about my academic papers.

Professional amplification – when you have a reasonably popular blog or Twitter account you sometimes get asked to broadcast stuff: events, job adverts, projects, etc. I try not to do too much of this, but this year I have used the blog to promote some of my own research and that of others. The key to this not being just a broadcast (which I think would be boring for everyone) is to find ways in which these speak to broader trends. Having a ‘voice’ (whatever that means) is a bit of a privilege, so using it to boost projects or organisations I’m associated with is something I can offer.

Those four categories don’t cover all the posts, but ‘miscellaneous ramblings concocted while walking the dog’ doesn’t convey the right sense of analysis does it? I would say I hope 2018 is better than 2017, but I don’t see much hope for that. Mike Caulfield’s predictions seem pretty on the nose for me, and that doesn’t look like a fun place. But I’ll keep blogging my way through it, as it’s the only thing I have (well, that and a penchant for naps, but that won’t help anyone).

Oh, and thanks Reclaim Hosting for another great year of service.

The zone of proximal depravity

Despair # 1

In the digital era it’s always difficult to discern the difference between behaviours that have always been present, but which we just notice differently, and those that are fundamentally changed by the digital environment. Often it’s a bit of both. One such aspect I’ve been thinking about recently is our exposure as individuals to extreme views. It’s one where I think the digital world has caused a significant shift for all of us.

In the analogue world, it has usually been pretty easy not to find yourself exposed to extreme views, or targeted by extremists, or caught in the middle of a conversation with fascists, terrorists, conspiracy theorists and assorted undesirables. This was because you could usually spot them, and also because conversations go through stages; it rarely goes, “hello, pleased to meet you, let me tell you about white supremacy”. I remember once I was travelling to Germany in the 80s by coach. I was on the ferry with a group of construction workers (it was Auf Wiedersehen, Pet time), and I got drinking and chatting with them. One of them was clearly intelligent and amiable and we chatted over a couple of Stellas. And then, from a conversation about football, he was telling me about the global Jewish conspiracy, the research he had done, how they controlled everything, etc. Like many conspiracy devotees, he was intelligent and erudite, but completely monomaniacal and twisted. The reason I remember this encounter is because it was a failure of my radar. You get blokes (and it’s usually a bloke) like this in the pub, but you quickly detect it, either in their aggressive tone or the direction of the conversation, and move away. But in this case I had been caught out and found myself in the middle of this guy’s derangement before I knew where I was.

I raise this memory because it is now what we all face on a daily basis. Mike Caulfield gives an intriguing and terrifying account of how quickly the algorithm on Pinterest takes you from recipes to full on conspiracy theory and fake news:

It is this way in which algorithms now actively seek to bypass our previous defences against extremism that is new. For example, Facebook’s “pages you may like” algorithm suggests Britain First to me (I mean, wtf?). Or the time my daughter came downstairs visibly shaken because a misogynistic video from Milo had popped up in her timeline and, not knowing who he was, she had watched it. Or when a friend of a friend on FB decided to dump an offensive meme in the middle of a conversation. You’ll ALL have similar examples, and they happen every day.

To borrow slightly tongue-in-cheek from Vygotsky, we can think of this as a collapse in our zone of proximal depravity. Before you got to the real zealots and extremists, you had to go through a layer of protection and increasing signals. You rarely got embedded in it by accident:

But what the algorithmic feed does is effectively collapse this protective layer, so our previous signals, defensive mechanisms and means of establishing distance are no longer effective. So now it’s just a thin membrane:

There are implications to this. For the individual, I worry about our collective mental health: being made angry, being made to engage with this stuff, being scared, and feeling that it is more prevalent than maybe it really is. For society, it normalises these views, desensitises us to them and also raises the emotional temperature of any discussion. One way of viewing digital literacy is as reestablishing the protective layer, learning for the digital world the signals and techniques that we have in the analogue one. And perhaps the first step in that is recognising how that layer has been diminished by algorithms.

The Digital Scholar revisited

I’m writing a paper at the moment which is revisiting my 2011 book The Digital Scholar, and asking ‘what has changed since then?’. Back in 2011, although elearning had entered the mainstream with widespread adoption of VLEs, much of the focus was on the potential of digital scholarship. A number of studies at the time indicated that adoption of new technology by academics was cautious and often greeted with suspicion. Proctor, Williams and Stewart (2010) summed up the prevailing attitude, finding ‘frequent or intensive use is rare, and some researchers regard blogs, wikis and other novel forms of communication as a waste of time or even dangerous’. Since then a lot has changed, so it was interesting to revisit. In thinking about what has changed since, I’ve ended up with five themes:

Mainstreaming of digital scholarship
The use of digital, networked technology in all aspects of scholarship has become part of the mainstream of practice. Not only is it no longer unusual to meet an academic with a blog or a Twitter account, but online identity is now seen as a central part of what it means to be an academic. Research projects will make use of Twitter accounts to both disseminate findings and recruit subjects, online digital databases now form part of a researcher’s toolkit, and tools for analysing social media, VLE and geo data have generated new insights and approaches. In teaching, the advent of MOOCs may have been accompanied by hype, but it also raised the profile of online education in general. Digital scholarship is now just part of scholarship in many respects.

The shift to open
Closely allied to digital scholarship is the development of open practice, which can be seen as a third component in the requirements for digital scholarship, building on digital and networked aspects.
In education ‘open’ has become a modifier for many terms, giving rise to open textbooks, open data, open pedagogy, open science and open educational practice. The increase in profile of open practice then underpins many of the subsequent themes, to the extent that open scholarship may in fact be a more descriptive term than digital scholarship.

Policy development
A further aspect of this mainstreaming is the development of institutional, regional or national policies with respect to different aspects of digital scholarship. Most prominent of these are the development of open access mandates which state that the outcomes of research funded by a particular body need to be released openly. ROARMAP tracks such policies at the funder, research organisation and multiple organisation level. It indicates that in 2011 (when the Digital Scholar was published) there were 387 such policies in total, compared with 887 at the end of 2017, in 68 different countries.
Related to open access publication mandates are policies relating to open data, which state that, as with publications, data arising from publicly funded research projects should be openly available. This area is less well developed than open access publications, but growing rapidly, in part because such policies can build on the work established by open access mandates. For example, SPARC Europe found that 13 European nations had open data policies at a national level, with most having been implemented recently. About half of these used the existing open access policy to expand coverage to open data.

Network identity
Perhaps the area of digital scholarship that has seen the most growth, both in terms of practice and associated research, is that of networked, academic identity. Veletsianos & Kimmons refer to Networked participatory scholarship (NPS) to encompass scholars’ use of social networks to “pursue, share, reflect upon, critique, improve, validate, and further their scholarship”. This has been an area of growth as social media use in general has grown in society.
However, researchers are also increasingly identifying the negative aspects of networked scholarship. Stewart comments that 'network platforms are increasingly recognised as sites of rampant misogyny, racism, and harassment'. The initial promise of digital scholarship has often turned dark.

Criticality of digital scholarship
Following on from the recognition of the drawbacks of developing an online identity is the last of the major trends: a growing body of work that examines digital scholarship through a critical lens.
This comes in different forms, but one prominent strand is suspicion about the claims of educational technology in general, and the role of software companies in particular. One of the consequences of digital scholarship and open practices entering the mainstream of education is that they become increasingly attractive routes for companies to enter the education market. Much of the narrative around digital scholarship is associated with change, which quickly becomes co-opted into broader agendas around commercialisation, commodification and massification of education.

Conclusion
While there has been considerable change, it is worth noting that much has remained unchanged too. The 'approach with caution' attitude towards digital scholarship that was prevalent in 2011 still prevails to an extent.
What has been realised then is not so much a revolution in academic practice, but a gradual acceptance and utilisation of digital scholarship techniques, practices and values. This means that depending on your particular perspective, it can seem to be simultaneously true that radical change has taken place, and nothing has fundamentally altered. Much of the increased adoption in academia mirrors the wider penetration of social media tools amongst society in general, so academics are more likely to have an identity in such places that mixes professional and personal.
The relationship between digital and traditional scholarship is best viewed as one of dialogue and interaction between the two, rather than competition and revolution. Using these five themes provides a model for considering how this symbiotic progress will develop. Mainstreaming, the shift to open and policy development will act as drivers for the uptake of digital scholarship across all aspects of Boyer’s framework. Network identity can be seen as the lived experience of these drivers for many scholars, which can act as both an inhibitor and promoter of further uptake. Criticality provides a much needed check on unquestioning adoption, and analysis of the impact on learners and scholarly practice.

Of course, if things had really changed, I wouldn't be writing an article at all, and instead would just submit a naff meme.

Annual film review

I didn’t get to see as many films this year as I’d hoped, but it turned out to be a pretty good year. After a few years where the blockbusters have been uniformly awful, this year’s batch contained some movies that finally understood their role as entertainment (Thor, Wonder Woman) and even had people discussing narrative structures (Dunkirk). Either side of these were films that, like my book choices, couldn’t be divorced from the current climate.

Many of the films that follow were officially released in 2016, but I’m going on when they got a cinema release in the UK. So, here’s my top ten, because who doesn’t love a list:

  • The Handmaiden – Chan-wook Park takes Sarah Waters’ sublime LGBT-erotica-meets-Oliver-Twist novel, Fingersmith (also in my top reads of the year), and relocates it from London to 1930s Korea. He ramps up the sensuality and drama (from a pretty high starting point) to create a sumptuous, beautiful, twisting film that’s like eating all your food from a Belgian chocolate fountain. It’s also a lesson in how to do book adaptation, retaining the central core narrative elements, and more importantly the tone of the book, while creating something wholly its own.
  • Get Out – like They Live or The People Under the Stairs (or even Night of the Living Dead), good horror can be an effective social commentator, and Jordan Peele's claustrophobic tale of white control and liberal appropriation of black values made this movie so 2017. Sometimes horror that wants to be an allegory forgets to serve its primary purpose of being a horror, but Peele's film spins both plates effortlessly – it's both straight-up terrifying and a scathing social metaphor.
  • Baby Driver – whereas La La Land was meant to instil you with a joie de vivre, it all seemed too forced, as if accountants had researched the jazz scene. But the joy in Baby Driver is not in life so much but in cinema itself. Every scene seems to be declaring “isn’t this shit great??”
  • Dunkirk – I didn’t rate Nolan’s World War 2 epic as highly as some (I mean how many times can the same guy nearly drown?), but it had plenty to recommend it, in Hans Zimmer’s score, the cycling narrative timeline and the realistic portrayal of air battles. It was a film that made people appreciate the cinematic experience and that’s always worth acknowledging.
  • Raw – I loved Julia Ducournau’s French extremism take on social conformity, family secrets, coming of age, and yes, quite a bit of cannibalism. While the furore focused on people passing out in the cinema (have they never seen any French extremity cinema?) this overlooked what a beautifully shot film it is, with bold use of colour and modernist, painterly structured scenes. Ducournau is a talent to be reckoned with.
  • Lady Macbeth – this bleak tale of Katherine forced into an oppressive marriage in 19th Century rural England is a slow, grinding build to an amoral climax, that makes the viewer complicit in the final act.
  • Wonder Woman – Patty Jenkins’ interpretation of the comic book format was nigh on perfect. Gadot stormed to prominence as easily the best superhero around, the pacing was like an exact 4/4 rhythm, and the tone provided a welcome return to enjoyment away from the Nietzschean angst of the dire comic book adaptations that have gone before.
  • My Life as a Courgette – while the Studio Ghibli metaphysical magic-realism tale The Red Turtle gained all the plaudits, if I'm to include an animation, I'd opt for this Swiss/French stop-motion tale of redemption in an orphanage. It starts with the eponymous Courgette accidentally killing his abusive, alcoholic mother. I mean, that sounds like a fun movie, right? But it's incredibly sweet, with a definite, unique visual style.
  • The Death of Stalin – Armando Iannucci's hilarious take on the final days of Stalin finds its way to truth by coming at it indirectly. The actors speak in their native accents, but there's a strange veracity in, for example, Jason Isaacs' gruff northern portrayal of Georgy Zhukov, that an accurate depiction would not capture. It's a hoot.
  • For the last selection, choose your preferred one from Battle of the Sexes, The Beguiled, The Big Sick, Thor: Ragnarok, I Am Not Madame Bovary or Toni Erdmann. Yes, that's a cop-out, but while I liked all of these, none of them particularly clamours to merit the last spot.

As a horror fan, the revisiting of It was enjoyable, prosaic post-apocalyptic movies had a bit of a run with It Comes at Night and The Survivalist, and there were some inventive low-budget productions such as I Am the Pretty Thing That Lives in the House, The Autopsy of Jane Doe, A Dark Song and The Devil's Candy.

Overall it was a good, but not great year. Perhaps the most noteworthy trend is the growth of decent female representation both as central characters and directors. Nowhere was this more evident than with a comparison of the two films to feature Wonder Woman. Whereas Jenkins’ film was a delight, Snyder’s was ponderous and mediocre. Of the films mentioned above I think Wonder Woman, Raw and Get Out are the ones that will have staying power.
