[Deciding Which Door to Choose 2]1

While visiting Rutgers a couple of weeks ago I had the chance to talk to Clark Chinn’s group in the Graduate School of Education. One of their number (a visiting student from Germany) was thinking about how to set up their empirical work, around asking pairs of students to read a small number of documents (~3) to explore how they evaluated and made sense of them, and what information they retained from them. So we had a long group discussion about the various merits and demerits of different approaches.

This is an area of work I think is really interesting, and one that in some ways I don’t think I did particularly well in my own research. In one of my tasks I asked pairs of students to read 11 documents, and then, in a post-survey, to individually give a 1-10 score for each of the 11 documents on how trustworthy they thought it was. This approach gives you each individual’s assessment, a rating of trust (as opposed to a ranking), and allows some insight into how trust is being assessed beyond looking at the trace data, chat, etc. But there are various problems with this approach, not least that in my case the documents were very complex, so understanding students’ ratings of trust is also complex – for example, should a blog be rated less trustworthy than a journal article, even if the journal article is being debunked by the blog?

An alternative approach, and one which I rather like, is a kind of ‘pairwise comparison’ (or ‘forced choice’) paradigm. In his PhD work [Chris Leeder]2 (now at Rutgers) did some work like this where participants were asked to:

1. select and evaluate some documents to address a task
2. look at other students’ selections and evaluations
3. from a subset of 2 of their combined selections, select the ‘best’ source for the particular topic

The value here is that you might start to play with which sources are most interesting to compare, perhaps even in particular contexts.
So, for example, given a choice between a generic blog and a generic journal article we might expect participants to select the article, while in a particular case (such as mine) the more sophisticated choice would be to select the blog over the article (analysis of such a selection would be aided by text feedback too). So I’m now considering types of tasks we might be able to construct where we ask students to:

1. select the ‘best’ resources from a set of resources (without limiting the number)
2. pick out ‘must read’ resources for a partner
3. draw key material from the resources (e.g. to explore how particular text is used, whether some resources are being used explicitly or implicitly, etc.)
4. express some choice preferences as described above

We could build in experimental conditions here, as in e.g. Kobayashi (2014), who controlled the texts while changing source features (e.g. authorship), finding that students paid attention to these features but rarely referred to them in their justifications for which source was superior.
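One appeal of the forced-choice paradigm is that the resulting data is easy to aggregate: each 'pick the best of these two' decision is a pairwise win, and win counts give a group-level ranking of the sources. As a minimal sketch (the source names and choice data below are hypothetical, and a fuller analysis might use a model such as Bradley-Terry rather than raw counts):

```python
from collections import Counter

def rank_from_choices(choices):
    """Rank sources by how often they won a pairwise forced choice.

    choices: list of (winner, loser) tuples, one per comparison.
    Returns sources sorted by win count, most-preferred first.
    """
    wins = Counter(winner for winner, _ in choices)
    sources = {s for pair in choices for s in pair}
    return sorted(sources, key=lambda s: wins[s], reverse=True)

# Hypothetical data: the blog is chosen over the article it debunks,
# and both are chosen over a generic wiki page.
choices = [
    ("blog", "article"),
    ("blog", "wiki"),
    ("article", "wiki"),
]
print(rank_from_choices(choices))  # → ['blog', 'article', 'wiki']
```

With win counts alone, ties are ordered arbitrarily; with enough participants per pair, the counts could instead be fed into a paired-comparison model to get strength estimates and uncertainty.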

Kobayashi, Keiichi. “Students’ consideration of source information during the reading of multiple texts and its effect on intertextual conflict resolution.” Instructional Science 42.2 (2014): 183-205.

Footnotes

  1. /static/2015/05/deciding_which_door_to_choose_2.jpg

  2. https://www.si.umich.edu/people/christopher-leeder