Following on from my ‘[Evaluating Google as an Epistemic Tool]1’ post, I’m exploring the Open University’s [RISE]2 and the related [OpenURL]3 projects, both of which use log data on academic searches to provide users with article- and journal-level recommendations. This is epistemically interesting for at least two reasons:

1. User surveys indicate that users are more cautious of recommendations when they don’t know where they’ve come from, whether they’re relevant to their course, whether they come from lecturers or other students, and so on.  These are epistemic issues regarding the abilities of the recommendation system as an informant.  It’s an interesting question (I think) whether such recommendations become “ok” when the whole epistemic community is surveyed.  This is somewhat related to my suggestion in the ‘[Evaluating Google as an Epistemic Tool]1’ post that the key was that suggestions come from a survey of the whole web.  However, the case is reversed here: we assume the engine surveys the largest set of articles possible, but we’re also interested in it surveying the most relevant community of enquirers – we want both precision and recall in this context.  Yet the ‘filter bubble’ concern is as relevant here as ever.  I’m not sure what the solution to this is.
2. Most people would presumably take recommendation systems in this context to be quite useful ways to facilitate novices’ integration into a community of enquirers, understanding the field and sensemaking in it – indeed, this is one of the points of content curation systems, etc.

Footnotes

  1. http://sjgknight.com/finding-knowledge/2012/11/evaluating-google-as-an-epistemic-tool/ “Evaluating Google as an Epistemic Tool”

  2. http://www.open.ac.uk/blogs/RISE/ “RISE”

  3. http://openurl.ac.uk/doc/data/sample.html “OpenURL”