Google Coursebuilder and Search Education

Last night I went into Google (something very surreal about saying that, and about doing a search From: "Hotel California"; To: "Google"…). Here's a picture of me with a big Google sign…

[Photo: me at Google]

[Photo: "I found Google"]
Google (and other search engines) is of interest to me for a few reasons:

  1. I’m interested in the epistemic properties of search engines such as Google – how do the ways search engines present queries and results imply particular stances towards, or values around, information?
  2. I’m interested in the ways users actually interact with such systems, and the ways their epistemic commitments (commitments around evaluative and credibility standards) ‘come out’ in search activities
  3. Google actually does research on this, and runs education programs – and that’s just kinda cool

What I went in for was a chat with Sean Lip – whom I’d met at LASI – about Course Builder, the Oppia tool, and an idea for a sort of validation task for understanding epistemic commitments: asking students to summarise information from a number of documents about which the researcher has prior knowledge on salient measures such as argument style (authority, justification) and corroboration (e.g. the same token repeated across multiple documents).  Evaluating user search ability is hard, particularly on open-ended exploratory tasks.  But by giving assessments such as multiple document processing (MDP) tasks, we can start to explore whether users seek to corroborate information (generally, although not always, a good thing), and their evaluative standards regarding arguments (again, we want students to be critical – but not Descartes).

So we had a chat over dinner about the potential for developing a custom view in the Oppia tool which would lead students through a set of questions, in the first (proof of concept) instance from:

  1. A multiple document style question of the above sort
  2. An ‘A Google a Day’ style multiple-component search task, but with a correct answer (or set of broadly correct answers) – i.e. something we have a good assessment standard for – onto which we could add other features, such as wanting corroborated sources
  3. A more exploratory style search task

The first two provide fairly decent feedback on user commitments, from which one hopes we could produce some clusters or groups of student approaches (indeed, my MPhil work, amongst other research, indicates this is likely).  Based on those, it would be interesting to see whether or not there were significant differences between the groupings on the MDP task and those on exploratory tasks – for example, do users who don’t corroborate sources tend towards more perfunctory searches (one assumes they do)?  The scope to then (a) give feedback on type-(1) tasks to train for type-(3) tasks, and/or (b) look for habits of search that could receive somewhat tailored feedback suggesting alternative strategies, is pretty interesting I think.
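To make the clustering idea concrete, here is a minimal sketch of the kind of grouping I have in mind. Everything here is an illustrative assumption – the feature set (number of distinct sources cited, proportion of claims corroborated across documents) and thresholds are hypothetical, not the actual study design:

```python
# Hypothetical sketch: grouping learners by simple search-behaviour features
# logged from MDP tasks. Feature tuples are (sources_cited, corroboration_ratio);
# both the features and the thresholds are illustrative assumptions.

learners = {
    "s1": (1, 0.0),   # single-source, no corroboration
    "s2": (2, 0.2),
    "s3": (5, 0.8),   # multi-source, high corroboration
    "s4": (6, 0.9),
}

def corroborators(data, min_sources=3, min_ratio=0.5):
    """Crude two-group split: which learners corroborate across sources?"""
    return {sid for sid, (n, ratio) in data.items()
            if n >= min_sources and ratio >= min_ratio}

groups = corroborators(learners)  # the rest would be the 'perfunctory' group
```

In practice one would want a proper clustering method over richer features rather than a hand-set threshold, but even this crude split gives the two groups whose exploratory-task behaviour could then be compared.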

Now, there are some issues here – for one thing, just because someone refers to a point made in a ‘bad’ document, it doesn’t mean they’re asserting the claim.  Concept maps in which we force students to use semantic associations (so we can see which claims are being critiqued, etc.) are one way around this issue.  Task design – so, something like “select the claims you think are the strongest from this document set” might also be an approach, although it raises issues with corroboration (because it probably pushes users to use multiple sources more than they might otherwise).  This is something that’ll need working through.
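The semantic-association idea above amounts to recording *typed* links between a student’s summary and the source claims, so that citing a claim in order to critique it is distinguishable from endorsing it. A minimal sketch of that data structure (all names and relation labels are my own illustrative assumptions, not Oppia’s):

```python
# Hypothetical sketch: a concept map with typed (semantic) links, so that
# referring to a claim can be distinguished from asserting it.

class ConceptMap:
    def __init__(self):
        # Each edge: (student_node, relation, source_claim)
        self.edges = []

    def link(self, node, relation, claim):
        # relation is a forced semantic association, e.g.
        # "supports", "restates", "critiques"
        self.edges.append((node, relation, claim))

    def endorsed(self):
        return [c for _, rel, c in self.edges if rel in ("supports", "restates")]

    def critiqued(self):
        return [c for _, rel, c in self.edges if rel == "critiques"]

m = ConceptMap()
m.link("summary", "restates", "claim-A")   # student asserts claim A
m.link("summary", "critiques", "claim-B")  # student cites claim B to reject it
```

With links typed this way, a reference to a claim from a ‘bad’ document only counts against the student if the relation is an endorsing one.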

Anyway, I was really impressed with how critical (in a great way) Sean was and I’m really excited to see how things develop!

(Note on Oppia: “The aim of the Oppia project is to build a versatile tool that enables non-technical users to create online educational explorations that are feedback-rich, incrementally improvable by the community, and embeddable in any webpage.”

If you take a look at the Oppia demo you’ll see some custom feedback if you go for one common response.  This is being Mechanical Turked at the moment, but the hope is for a machine learning approach in the future.)
