This post effectively brings together two preceding ones, namely elearning and learning objects. By the turn of the millennium, elearning was everywhere. The internet was no longer dismissed as a fad, and you could make yourself a guru by spouting a few homilies about the death of distance and the like. After the initial flurry of activity, typified by a wild west approach to creating your own website (I’d like to say that academics have a flair for website design, but, erm, we really don’t), there was a necessary, if slightly less fun, concentration of efforts. This meant developing platforms that could be easily set up to run elearning (oh, yes, we’ll come to VLEs later), a more professional approach to the creation of elearning content, the establishment of an evidence base (which generally found there was no significant difference), and initiatives to describe and share tools and content.
Enter elearning standards, and in particular IMS. This was the body that set about developing standards to describe content, assessment tools, courses and, more ambitiously, learning design. Perhaps the most significant standard was SCORM (developed by the US ADL initiative, building on IMS content packaging), which went on to become an industry standard for specifying content that could be played in VLEs. Prior to this there was a lot of overhead in moving content from one platform to another.
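To give a flavour of what this portability looked like in practice: a SCORM package was essentially a zip file with an `imsmanifest.xml` at its root, telling any compliant VLE what the content was and how to launch it. The sketch below is a minimal manifest in the style of IMS Content Packaging (the identifiers, titles and filenames are invented for illustration, and a real SCORM manifest would add further ADL-specific namespaces and metadata):

```xml
<!-- Minimal content-packaging manifest, illustrative only -->
<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <organizations default="org1">
    <organization identifier="org1">
      <title>Sample Course</title>
      <!-- The course structure: each item points at a resource below -->
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- The actual content files bundled in the package -->
    <resource identifier="res1" type="webcontent" href="lesson1.html">
      <file href="lesson1.html"/>
    </resource>
  </resources>
</manifest>
```

The point was that any VLE reading this file could reconstruct the course structure and launch the content, which is precisely the switching overhead the standard removed.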
Perhaps the standard that brings any ed tech people out in a sweat is that of metadata, and particularly the Dublin Core. This was used to describe a piece of content (such as a learning object) so that it could be discovered and deployed easily, and hopefully automatically. The reason that mention of Dublin Core still induces wry chuckles is that at the time it was largely human derived (the always prescient Erik Duval used to preach “electronic forms must die”). You spent ages crafting a nice activity and were then presented with 27 fields of metadata to describe it, which often required more effort than the initial content. This was obviously not an approach that would scale. And some of the fields remain a mystery to this day (semantic density anyone?). As well as simply being a pain, this level of description also became restrictive, in that it seemed to define exactly how the content should be used.
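For those who never had the pleasure, a Dublin Core record looked something like the sketch below (the values are invented for illustration; Dublin Core itself defined fifteen elements, and it was the learning-specific extensions, such as IEEE LOM with its semantic density and friends, that ballooned the form-filling). Every field was typed in by a human:

```xml
<!-- Illustrative Dublin Core description of a learning object -->
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Introduction to Photosynthesis</dc:title>
  <dc:creator>A. Academic</dc:creator>
  <dc:subject>Biology</dc:subject>
  <dc:description>A short interactive activity on the light reactions.</dc:description>
  <dc:date>2001-03-15</dc:date>
  <dc:format>text/html</dc:format>
  <dc:language>en</dc:language>
  <dc:identifier>https://example.ac.uk/objects/photosynthesis-1</dc:identifier>
</metadata>
```

Multiply each of those elements by the dozens of fields in the fuller schemas, for every object in a course, and the scaling problem becomes obvious.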
As a nostalgic aside – if you currently bemoan your VLE usability, tender me your sympathy: around this time I was developing one of the pilot courses for the ill-fated UK eUniversity. This involved building a whole new platform, based around learning objects. Every object needed to have metadata entered by hand. If you made a change to the content, for example correcting a typo, the nascent platform lost all the metadata and you had to enter it all again. So don’t come crying to me about your Blackboard!
Elearning standards are an interesting case study in edtech. I must admit that after being quite heavily involved around this period, I lost track of them. But that in a sense is a sign of their success. Good standards retreat into the background and just help things work. But it’s also the case that they failed in some of their ambition to deliver easily assembled, discoverable plug-n-play content. The dream was that you’d type in “Course on Burt Bacharach” and it would automatically assemble the best content, with some automated assessment at the end. This wildly underestimated the complexity of learning (and overestimated the supply of good quality Burt Bacharach learning objects). So while the standards community worked away effectively, it was surpassed in popular usage by the less specific, but more human, approach to description and sharing that underpinned the web 2.0 explosion. But (as they used to say at the end of Tales of the Riverbank), that is another story.