Over @unsw #category/LA group to hear @gcrisp01 on ‘how could #assessment data be used to enhance assessment practices?’

— Simon Knight (@sjgknight) March 1, 2016

The talk was based on an interesting paper:

Engaging academics w/ analysis of their MCQ assessment tasks https://t.co/SBgoA5kT0b #category/LA @sbuckshum @bodong_c @saramuyo

— Simon Knight (@sjgknight) March 1, 2016

In that paper, a variety of methods to analyse and represent assessment data are discussed, along with an analysis of academics’ responses to the representations. The aim of the analysis is to provoke consideration of the kind of question (MCQ) being asked and what it is attempting to probe: whether the academic wants to check that all students have the same (correct) response (convergent style), whether they want to explore the range of perspectives a group is taking (divergent style – presumably with some open-text field too), or whether they want to diagnose particular misconceptions (diagnostic style). If a convergent response is aimed for but a range of responses is given, that implies a problem with either the students’ learning or perhaps with the way the question is expressed. Similarly, if a divergent question is asked but most students select one or two responses, this might suggest the question isn’t doing the work we want.

So in the paper they look at how this data might be used to adjust or inform assessment practices. They go through various measures (e.g. facility index, discrimination index) to explore how effectively questions discriminate between candidates: for example, whether some questions are tripping up better students (perhaps implying a poor question), whether some question options aren’t being selected at all (suggesting they could be amended or dropped), and person-item maps to explore, say, whether harder questions could be placed later in the assessment. A rough sketch of the first two measures is below.

Geoff noted that in the drive for reliability, concerns around validity and the purposes for which questions are deployed can be obscured. But particular tasks (and learning analytics) imply perspectives on learning and knowledge to both academics and students – and this matters. He also suggested that currently most learning analytics are of a ‘convergent’ kind (knowing what, clarity of knowledge), and that there are big challenges in mapping activities to constructs of interest. Unsurprisingly, I have a lot of sympathy with both claims (per our LAK papers!). What’s interesting, though, is how we might use the approaches in MCQ analysis to think about informing educator practice; so rather than analytics having a direct route to intervention, they instead inform with regard to intended learning processes/practices.
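To make those two measures concrete, here’s a minimal sketch in Python with made-up toy data (not the paper’s own analysis pipeline): the facility index is just the proportion of students answering an item correctly, and the discrimination index here compares item performance between higher- and lower-scoring students (a simple median split; the upper/lower 27% convention is also common).

```python
import numpy as np

# Toy response matrix: rows = students, columns = MCQ items,
# 1 = correct, 0 = incorrect. Illustrative data only.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
])

# Facility index: proportion of students answering each item correctly.
facility = responses.mean(axis=0)

# Discrimination index: difference in facility between the top and
# bottom groups of students, split on total score.
totals = responses.sum(axis=1)
order = np.argsort(totals)
n = len(totals) // 2
bottom, top = responses[order[:n]], responses[order[-n:]]
discrimination = top.mean(axis=0) - bottom.mean(axis=0)

for i, (f, d) in enumerate(zip(facility, discrimination), start=1):
    print(f"Item {i}: facility={f:.2f}, discrimination={d:+.2f}")
```

A very low or negative discrimination value is the kind of flag discussed above: the ‘better’ students are doing no better (or worse) on that item, which may say more about the question than about the students.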