AI, higher ed, VLE

Don’t you want me? Questions to ask of new AI-VLEs

In my last post I was doing a backwards glance prompted by the Accenture-Udacity deal. In this one, I’ll look forwards. Apart from the MOOC angle, the other key aspect of the announcement was the investment in AI-enhanced learning environments.

In terms of learning environments, the VLE/LMS has been the main player since around 2002. Prior to this there was a mixed economy, combining different commercial solutions, home-spun set-ups and open web tools. It was both a delightful cottage industry and something of a wild west. From the turn of the century the shift to an institution-wide, enterprise solution became inexorable, until by the mid-2000s pretty much all HEIs had a VLE of some sort.

From then it has largely been a case of integrating new technology into existing systems – the addition of learning analytics dashboards, different forms of assessment, improved content management, etc. But it has been a steady, regular set of improvements with no major shifts once the main players settled down. Someone from 2004 would not be at a loss presented with a VLE from 2024. This is not a bad thing; as I said in the last post, the notion of technology revolutions is oversold.

But with the advent of AI we may be seeing the first major shake-up in the industry for a while. I should stress that I am not being a cheerleader for these new platforms; there will be plenty of breathless excitement about their potential to transform education, revolutionise universities, etc. But there will also be plenty of sales pitches made to HEIs, and pressure to change or upgrade, as well as new possibilities to legitimately explore. What I want to explore here are the questions we need to ask of such platform vendors, and ultimately, of ourselves as educators.

Electric Dreams

Let’s consider what the new AI-enhanced platforms will offer. There’s likely to be a few key elements:

Personalisation – the desire for personalised education is never ending. It is largely unquestioned – of course, more personalisation is better. We’ll come to that later, but one offer will be that AI can enhance personalisation for learners. If a student can’t understand this content, then they can generate further versions. Plus they can have intelligent, on-demand tuition (see below).

Content creation – being able to generate content, or have your own content AI-checked, will be a strong offer. You can create smart-looking online courses quickly, allowing you to concentrate on the important stuff. Labour saving is a persistent promise of technology, from washing machines to driverless cars.

Enhanced content – as well as creating content, what will be offered is different types of content – simulations, virtual worlds, lectures from historical figures. This type of content has traditionally been very costly to create. Now, maybe not.

Assessment and feedback – AI will generate automatic assessments that will be unique to each student, so cannot be cheated, and will offer immediate feedback. More labour saving, now you don’t have to create those pesky multiple choice quizzes. It will also check for cheating by students.

Monitoring and Control – AI-enhanced analytics will monitor student performance, and either alert the instructor or offer interventions itself (an Einstein-looking version of Clippy pops up: “You seem to be struggling to understand quantum equations, would you like additional support with that?”).

Tuition – the very early forms of AI were often focused on the development of Intelligent Tutoring Systems. They failed because mapping a domain turned out to be far more complex than people anticipated. But generative AI doesn’t need that mapping; it “just” crunches the numbers on everything it has. So a friendly-looking AI tutor can offer support on just about any topic the student wants help with.

I’ve probably been a bit facetious with some of the above, but for a learner, judicious use of these tools might be pretty helpful. Students, particularly those studying on their own at a distance, will often stumble when they don’t feel they understand something, and it prevents them from progressing in that study session. If you live in a house of students you can go and ask one of them, or pop a question into WhatsApp. But if you don’t have these networks to hand, an AI tutor could be very useful in helping you continue.

Keep feeling fascination

All of which brings us onto thinking about the type of education we want to foster, and the role such a system will play in that offering.

What about the community – in all of the elements above, social interaction between humans is missing. There may be some AI element to this (chatbots in forums?), but it’s probably appropriate that we don’t really want AI in there much. Yet social interaction is often a crucial element in education. In some research we’ve done with Open Degree students, it’s become evident that the question “how does this impact student community?” is one that is rarely, if ever, asked in IT procurement decisions. Community is often just assumed to arise in campus universities as a by-product of co-location. But if we move to more hybrid, online models then this cannot be taken for granted; it has to be more explicitly fostered in the learning environment. How does the new platform do that?

What about the cohort – similarly, if we promote personalisation, that comes at the cost of a shared sense of cohort. If my cognitive psychology course is structured differently to yours, what do we share in terms of common experience? Progressing through with a cohort is another social connection component that is very powerful for some (though not all) students.

Quality issues – the more that content and support is outsourced to AI, the greater the danger of errors sneaking in. Apparently humans do not generally have six fingers. Given the link to vocational and professional awarding bodies, there will be a strong legal and quality-control dimension to this.

Ethics – it’s probably not a question asked often enough, but there will be a whole bunch of ethical issues with the large-scale adoption of AI. Does it rely on cheap labour elsewhere? Is it built on existing content without permission? What if a student has ethical objections to its use? What are the mental health implications of students feeling that they are continually monitored? Is it ethical NOT to use AI when students will be encountering it everywhere in society? Plus many more we haven’t thought of, I expect. These won’t be easy to answer, but they need to be at the forefront of any procurement decisions, not awkward afterthoughts.

Being boiled

If I were to draw out one theme from observing learning environments develop since the mid-90s, I would say that we see a constant tension between control and freedom. And more control nearly always wins. It’s probably cooler to be on the freedom side of this equation, but the control aspect is not without its merits, particularly for learners, and doubly so for learners at a distance. A more controlled environment is one the institution can manage effectively, and therefore support. You really don’t want the technology to become a barrier for distance learners, and having an environment you can help with easily is a must.

Plus it helps students know that they’re on track – “am I doing the right thing? Am I doing enough?” are common concerns for many learners, and you don’t want to add constructing your own learning environment to those worries.

Then we add in GDPR, duty of care, privacy and security issues, and increased control of the platform seems like the only sensible decision. But it comes at a cost for learners and educators. The open web is more like what they will experience in ‘real life’, and it also provides rich resources and modes of expression that the stripped-down, sanitised VLE cannot. But it also contains toxic behaviours, rampant commercialism and misinformation, which are problematic in education.

The point of this ramble about control is that this dynamic will come into play once again with AI-enhanced platforms. If my conclusion that control wins holds true here also, then we need to consider what a very controlled, locked-down version of such platforms looks like. These could be the worst of both worlds – the invasive, inflexible monitoring of AI systems without any of the freedom to use those tools creatively. There is a line of questioning to be followed around the nature, and desirability, of the control that these platforms will offer.

We’re only Human

Ultimately the most interesting thing about the application of AI in any domain, including education, is what it reveals to us about being human (and not how much money we can make from it). This will be at the heart of any adoption of AI learning platforms. What does it mean for the human learners? What do the human educators do in this environment? How do we promote and support interaction between people?

These are big questions, and they are not the type that crop up on a features list when conducting procurement. The danger, then, is that they never get asked. We shouldn’t see this new wave of VLEs as just an upgrade, a new version on the roadmap, because in ten years’ time we’ll find we didn’t ask these more fundamental questions and we’re too deep into the configuration of platforms to reshape them. It may be awkward, but in the next few years, as HEIs consider new platforms, make sure there are people in the room who ask these questions.

Anyway, I’m not sure why I went on a Human League riff with the titles here, perhaps because students and educators are the Human League in this new world and need to stand together. So, here are the aforementioned League, with that hairstyle and that song:

2 Comments

  • Alan Levine

I agree, for better or worse, it’s gonna be a lot of shake-up. But it’s more than enhanced content, it’s content that’s not fixed. So much of what we have built – the course material, the OERs, the media-packed open textbooks – is mostly fixed in a published version. The idea of dynamic, reactive content has both fascination and, well, bad riffs as potential.

    I clicked the video. Can AI get rid of the ear worm?

    • mweller

Yes, the non-fixed nature is a real problem for quality and legal issues, particularly for professional bodies and vocational courses. Anyway, altogether now: “I was working as a waitress in a cocktail bar…”
