Automating assessment to understand assessment
In the last of the series of seminars I have been hosting at the OU, my colleague Professor Denise Whitelock talked about her work on assessment. Denise took us through a number of projects she has worked on that have automated aspects of assessment. These have always had a strong conceptual underpinning: for instance, Dweck's work informed the development of Open Comment, which provided feedback to Arts students. With Open Mentor, she used Bales's work on interaction categories to help tutors develop effective and supportive feedback. And SafeSea allows students to trial essay writing before taking the sometimes daunting step of submitting their first essay, using analysis based on Pask's conversational framework.
What I found interesting about this work was that it provided an example of how technology is situated within the human education system. None of these systems was designed to replace human educators; instead, they are intended to help learners and educators in their current pursuits. It can be seen as an iterative dialogue between the technology and the people in the system. For example, with Open Comment Denise reports how she acted as a student and did not perform well, having come from a science background. She effectively had to learn 'the rules of the (Arts education) game'. By making these rules explicit in the tool, it could then help learners develop them, where before many educators had been teaching them only implicitly. This seems to me the appropriate way to approach educational technology: to see it as a component in an ongoing dialogue.
I'll let Denise describe each of the projects, and future developments, in the talk below: