[Image: a robot playing a flute, generated by AI]

Things I was wrong about pt 4 – AI

After admitting I was wrong about QR codes, the death of the VLE and the democratising power of social media, we’ve arrived at the inevitable one, I suppose.

I want you to insert the biggest sigh you can imagine here – <sigh>. This is an example of when some knowledge can be a bit disadvantageous. I have a PhD in Artificial Intelligence, from back in 1994, and I joined the OU as a lecturer in AI. So, yeah, I should know a bit about it. I was, however, largely dismissive of it, partly because I was grounded in symbolic AI (expert systems and the like) and had not really monitored the rise of large language models and generative AI. I once shared a stage at the Hay Festival with Marcus Du Sautoy (I know, get me!), who was talking about his book on AI. He was enthusiastic about the potential and I was downbeat. He was right in the sense that it is undeniably a big thing. That doesn’t necessarily mean it’s a good thing, but in 2024, we can’t deny it’s a thing.

The reasons I misjudged it are twofold. Firstly, I think it was bias. I wanted it to have a minimal impact because I’m wary of the social, economic and democratic consequences. In the light of where we are now, I was right to be suspicious, but wrong to confuse my personal desire with the actuality. This reaction is still playing out a lot with AI, I think – people who are happy to find problems with dodgy AI output want to feel they can then safely dismiss it. There are dodgy AI outputs (how many fingers does that person have?), but feeling smug, and therefore safe to dismiss it, is probably wishful thinking. The second reason was that the knowledge I had made me concentrate on one aspect at the expense of another. I underestimated just what brute force computation can do. Forget finesse and cognitive modelling: just throw terabytes of data at algorithms and they will find patterns. I am being harsh here, of course – those training algorithms are very sophisticated – but the point is we didn’t need to replicate domains of knowledge; with sufficient datasets, the number crunching could generate reasonable output without us telling it the rules.

I’m not going to go into the numerous potential benefits or severe concerns about AI here – God knows there are enough of those pieces around. The focus of these posts is to think about why I was wrong, and what that tells us (or me anyway) about future practice. The concern I had about symbolic AI may yet come to pass, because at some point AI is going to run out of good quality training data – this article suggests that between 2026 and 2032 we will run out of human-generated content to train LLMs on – and then it’s going to be training on AI-generated content. That is going to homogenise output even more (Dave Cormier’s metaphor of AI as the autotune for knowledge will become increasingly true). Apart from attempting to secure better content and savaging each other’s economic models, AI companies will then have to turn to more symbolic methods to enhance their ‘dumb’ AI models. And then we get back to familiar issues. But by then, maybe we’ll all be working in content warehouses anyway.

I think the impact of AI is wildly exaggerated, and we’re probably heading for a bubble burst for all those companies investing heavily in it. But this doesn’t mean it isn’t having a significant impact. Even if we just look at higher education, and the work required to rethink assessment, it is pretty pervasive. So, I’m a bit embarrassed that I underestimated this. Sometimes, though, I wish I had been right.
