
25 years of edtech – 1993: Artificial Intelligence


This year marks the 25th anniversary of ALT. I’m co-chairing the ALT-C conference with Sheila MacNeill, which celebrates this in September. This got me thinking about the changes I’d seen in that time, so I’m going to attempt a series of blog posts that use this as a vehicle to explore the developments in ed tech over the past 25 years. It may end up like Sufjan Stevens’ project to write an album for every state, and I won’t get past two or three, but let’s give it a go. Also, in order to fit it all in, there may be some twisting to fit a tech into a year, and it’s not necessarily the year the technology was invented but rather when I came to recognise it. So, with those caveats, let’s set off. It’s 1993, I’m a PhD student in Middlesbrough, it’s just before Nirvana and Oasis break, the Stone Roses and Madchester have peaked… (screen goes wavy)

I’m starting with Artificial Intelligence. This is partly because in 1993 I was studying a PhD in AI applied to aluminium die casting (I know you want to read my thesis). But it’s also partly to demonstrate the cyclical nature of ed tech. In 1993 AI was going through its second flush of popularity, following on from the initial enthusiasm of the eighties. The focus was largely on two contrasting approaches: expert systems, which tried to explicitly capture expertise in the form of rules, and neural networks, which learnt from inputs in a manner analogous to the brain. The initial enthusiasm for Intelligent Tutoring Systems had waned somewhat by ’93. This was mainly because they only really worked for very limited, tightly specified domains. You needed to predict the types of errors people would make in order to provide advice on how to rectify them. And in many subjects (the humanities in particular), it turns out people are very creative in the errors they make, and, more significantly, what constitutes the right answer is less well defined.
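To give a flavour of the expert-systems approach, here is a minimal sketch of expertise captured as explicit if-then rules. The flaw names and rules are invented for illustration – they’re not drawn from my actual die-casting system or any real diagnostic knowledge base.

```python
# A toy rule-based expert system: each rule fires when all its
# conditions are present in the observed symptoms.
# The rules and flaw names below are hypothetical, for illustration only.

def diagnose(symptoms):
    """Match observed symptoms against a hand-written rule base."""
    rules = [
        ({"porosity", "rough_surface"}, "gas entrapment"),
        ({"cracking"}, "die temperature too low"),
        ({"incomplete_fill"}, "insufficient metal pressure"),
    ]
    causes = []
    for conditions, cause in rules:
        if conditions <= symptoms:  # rule fires when all conditions hold
            causes.append(cause)
    return causes

print(diagnose({"porosity", "rough_surface", "cracking"}))
```

The brittleness described below follows directly from this structure: the system only knows what an expert managed to articulate as a rule, and symptoms that co-occur or present unusually simply fall through the rule base.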

Expert systems, though, were also pushed as teaching aids – if you captured the knowledge of an expert in, say, medical diagnosis, then this formed a useful teaching aid. My experience in developing an expert system to diagnose problems in aluminium die casting is probably symptomatic of the field: it sort of did the job, but didn’t really catch on. The problem was twofold: the much-quoted ‘knowledge elicitation bottleneck’ and the complexity of the real world. The first meant getting the knowledge from experts into a format you could use. Apparently you can’t just drill a hole in their heads and tap it out like siphoning petrol from a car. Experts don’t always agree, and making expertise explicit is notoriously difficult. What characterises an expert is that they ‘just know’. The complexity issue means you can’t predict the way things work out. For example, we characterised typical flaws (and provided a very nice database of images). But sometimes these co-occur, sometimes they look different, and sometimes the causes are multiple.

AI faded after this for a while, only to resurface with a vengeance in the past five years or so. I may revisit it later, so I won’t say much about the current instantiation. What is interesting, I think, is that the claims are much the same (although their advocates often think they’re making them for the first time), and some of the problems remain. However, what has really changed is the power of computation. This helps address some of the complexity, because multiple possibilities and probabilities can be accommodated. In this we see a recurring theme in ed tech: nothing changes while simultaneously everything changes. AI has definitely improved since ’93, but equally some of the fundamental issues that beleaguered it still remain.
