Despite the book 25 Years of Ed Tech finishing with 2018, I’ve kept it going with one entry for each year since. The criterion for selection is the year I think a technology became significant, in that people talked about it a lot. And your annual reminder that inclusion does not denote approval (some people struggle with this).
So, AI generated content eh? The year started with fun AI generated images and ended with ChatGPT promising the end for humanity as we know it. These tools can produce genuinely decent outputs, and so the phase of simply dismissing them as inferior is no longer a valid approach. The obvious potential victim of decent AI generated content is the student essay. All those years we’ve been paying TurnItIn to build up a massive database of student essays really paid off for higher ed, I guess?
I’m not going to review the technology here; rather, what I think is interesting about it is the questions it makes higher ed ask of itself. That is significant, regardless of whether you’re an AI advocate or rebel. There are several such questions I can think of, but you may have more:
What does assessment look like with easily generated content? This is the main focus for many. The initial reaction from many HEIs will, I fear, be to clamp down – more in-person exams, increased proctoring, death penalties for using AI. And that will solve the problem for a bit, but it doesn’t really get at the issue. What will be more interesting is to acknowledge the existence of such tools and potentially build them into assessment, for example having students generate AI answers to essays and then critique them.
But more significant will be the question of how we change assessment. This raises the further question of what assessment is for. It’s interesting to me that current AI tends to produce effective, but slightly bland, answers. For decades we have been instructing students to remove their personality from their writing, to be coldly objective. But it transpires this type of writing is something AI can do pretty effectively. What it struggles with is individuality or personality in writing. Having spent so long carefully extracting aspects of humanity from HE content, we may now need to find ways of reinserting it.
Then there is the question not of what form assessment should take, but whether we should assess at all. The ungrading movement argues that much of it is detrimental to learning anyway. Even if we don’t witness widespread ungrading, we may see a move to more authentic tasks and a softening of the high-stakes exam (although as noted above we’ll probably see a rise in this initially). We’re now at the stage where AI can generate decent essays and a different AI system can do a respectable job of marking them. The students and lecturers can then retire to the cafe and get on with discussing the interesting stuff.
Then we might ask how we can use it for teaching. If reasonable essays, OERs and teaching content can be produced automatically, why spend ages crafting material? Hand-crafted material may be better, but is it 50 person-hours better? As with assessment, the approach may be to generate content and then teach around it, supplementing, explaining and supporting. For example, Mike Sharples has a nice story generator that would make a useful English writing aid, generating stories and then deconstructing them.
There are a whole host of ethical, privacy and sector questions too. What biases are built into these tools? What commercial enterprises will take over aspects of HE, as we’ve seen TurnItIn and Proctorio do in places? Is it ethical for students to use these tools? Is it ethical of academia to create situations where doing so is useful?
It’s a messy world, but the output of such systems certainly had a breakthrough moment in attention this year. Shouting “go away!” is probably not a viable reaction now, and so the sector needs to get to work asking and answering the tricky questions. In an odd way, I find it quite positive – the answer to many of the questions seems to be: Be More Human. And that’s no bad thing, surely.