True voyage is return

In revisiting the 25 Years of Ed Tech book for the 30 years podcast, I’ve been struck by how often I find myself saying things along the lines of “we’re seeing this again now with AI” or “this came to the fore again during the pandemic”. The snobbery about elearning that was espoused during the late 90s? It was there again in the attitude towards online learning after the pandemic. The myth of cheap elearning? See the excitement over AI generated content. The desire to share and reuse learning content easily? Revisited during the online pivot. Second Life islands and virtual campuses? Hello metaverse. And so on.

I guess it’s no surprise – we build on knowledge and that’s how it develops. But what I am somewhat surprised by is that often these arguments don’t seem to have evolved. We’re just playing the same old tape again.

To some extent, I am guilty of this too – I can often be found suggesting that maybe now is the time for OER to really make its breakthrough. So, here’s another of my “the time is ripe for” hot takes – symbolic AI, please step forward. I did my PhD in symbolic artificial intelligence, specifically expert systems. There were two main approaches in AI (not to be confused with the five schools of thought regarding their implementation), which can be categorised as symbolic, rule-based approaches and machine learning. Top-down and bottom-up is another way of looking at it. The early approaches were often in the former camp, which seeks to replicate human knowledge in specific areas through the construction of rules and scripts. This is how to behave in a restaurant, this is how to diagnose an illness, etc.
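To give a flavour of what “rules and scripts” means in practice, here’s a toy sketch of rule-based diagnosis. The rules, symptoms and conclusions are entirely made up for illustration – a real expert system’s knowledge base would be far larger and hand-built with domain experts:

```python
# Minimal sketch of a symbolic, rule-based system: fire every rule
# whose conditions are all present in the known facts.
# Rules and symptoms are invented for illustration, not medical advice.

RULES = [
    # (rule name, required facts, conclusion)
    ("flu-rule", {"fever", "aches"}, "possible flu"),
    ("cold-rule", {"sneezing", "sore throat"}, "possible cold"),
]

def diagnose(symptoms):
    """Return the conclusion of each rule whose conditions are met."""
    return [conclusion for _, conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "aches", "cough"}))  # ['possible flu']
```

The appeal, and the limitation, is all there in miniature: the system is transparent (you can point at the rule that fired) but it knows nothing outside the rules someone wrote down.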

Machine learning says sod all that, let’s chuck vast amounts of data at the system and let it derive the patterns. This has, of course, proven to be the horse to back in the AI race. Data is not in short supply in a digital, connected world, and Moore’s Law has driven computer processing power to the point where such systems can generate authentic-looking content. The resulting models are impressive, but also dumb. For example, they will generate false (but convincing-looking) references, or images where people have three arms. People have therefore started to suggest that maybe adding in a symbolic representation of a field can tweak or filter the results of these data-driven models – for example, mixing symbolic AI with the power of LLMs (large language models).

You can see how this might work – check the references produced by an essay generator, for instance, or run the medical output through an expert system. This is where academics come in. This is the type of knowledge they hold; they are the experts in the expert system. One of the early concerns about expert systems was that they would make human experts redundant. They weren’t good enough for that, but this hybrid version might be, so the point is to be aware, I guess. Have control over what goes into the system and where it will be used, because if we don’t, others will. So get ready for the return of symbolic AI. Probably.
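As a rough sketch of that hybrid idea – a symbolic check layered over generated text – imagine a curated reference list (the expert’s knowledge) used to flag citations an essay generator may have invented. The citation pattern and the known-references set here are my own toy assumptions, nothing like a production fact-checker:

```python
import re

# Hypothetical curated knowledge base: references an expert has verified.
KNOWN_REFERENCES = {
    "Weller (2020) 25 Years of Ed Tech",
}

def flag_suspect_references(generated_text):
    """Pull out anything that looks like an 'Author (year) Title' citation
    and flag those not found in the curated reference list."""
    candidates = re.findall(r"[A-Z][a-z]+ \(\d{4}\) [^.\n]+", generated_text)
    return [c for c in candidates if c.strip() not in KNOWN_REFERENCES]

suspect = flag_suspect_references(
    "As argued in Smith (2019) Imaginary Journal. "
    "See also Weller (2020) 25 Years of Ed Tech."
)
print(suspect)  # ['Smith (2019) Imaginary Journal'] – the invented one
```

The data-driven model does the generating; the symbolic layer, built from human expertise, does the vetting. That division of labour is the whole pitch.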
