Algorithms and [big data are provocative]1, right? Is [Google personalisation a risk to our epistemic autonomy]2? Are [recommender systems epistemically problematic]3 in general? Is an over-[reliance on search engines]4 to know what we want – even if we don’t – making us stupid?

What’s interesting about a lot of these discussions is that (a) they tend to focus on Google, and (b) they tend to be very all-or-nothing, ignoring the fact that the tools change and that they include various facets (of course, this isn’t true of all such discussions). They also tend to make pretty big value judgements about what it’s important to know without (a) considering what we mean by “knowledge” (a philosophical question) and (b) appreciating the notion of “powerful knowledge” (a sociological question).

In this blog I want to focus on two things:

1. Personalisation – the algorithmic customisation of search results for individuals, based on a variety of sources such as:
   * geo-location
   * previous searches and clicks from the SERP
   * perhaps things that you have shared, or that your friends have shared with you
   * any other trace data a search engine can capture about you (including previously visited sites)
2. Personal recommendation – the algorithmic matching of search queries to suggested sources/informants (perhaps friends, or organisations) and/or other related search queries

At the moment, this is perhaps best characterised by personalisation of results and Google’s desire to chip in and “[know what you want before you know it]5” on the one hand, and Facebook’s recent announcement of “[Graph Search]6” on the other. Actually, Bing has for a while now integrated with Facebook on a similar principle in their “[Bing Social]7” project (irritatingly not available in the UK) – headline: “For Every Search, There is Someone Who Can Help”.

What’s the shift here?

1. 
Pages can be mapped to queries – the best-connected pages around any given term are the most relevant to solving the query; search should return a full set of relevant documents (recall) and not irrelevant ones (precision). Epistemically, the SERP informs on pages of relevance, and those pages provide information; relevance is whole-community oriented (subject to SEO…).
2. Google has for some time been subtly shifting away from this by providing fewer results, providing snippets to answer queries directly within the SERP (and Wikipedia extracts), and personalising search results based on geo-location, prior searches, etc. This is a shift towards an objectivist epistemology in which needs are mapped directly to results; it changes the role of Google from informant about potential informants to direct informant, and risks excluding particular sorts of results (see this [post]2).
3. Bing is doing the same thing to some extent (and in fact, Google does include Google+ results in its search, I believe…but so few people use + that it’s a bit of a moot point). The different strategy Bing and Facebook are taking is to shift in the other direction – instead of informing about relevant pages (which are abstracted and impersonal), Facebook Graph Search will inform about potential informants amongst your friends. This is a different epistemic function. Instead of focusing on mapping needs to results, it focuses on the personal connection and community-based outcome. It’s a recommendation, not a personalisation, and it’s easier to see (and interrogate – to give and receive reasons) why tailoring to the individual might occur.

Now, I’m not some naive idiot; I know all three companies are trying to make money from their strategies. But I don’t think that means we should ignore the epistemic element in what they’re doing (indeed, if we’re Virtue Epistemologists we might be particularly interested in this factor!)
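To make the recall/precision distinction above concrete, here is a minimal sketch (a toy illustration, not any engine’s actual code – the page names are invented): precision is the fraction of returned pages that are actually relevant, recall the fraction of relevant pages that get returned.

```python
# Toy illustration of the recall/precision distinction for a SERP.
# Page names and the "truth" set are hypothetical.

def precision_recall(returned, relevant):
    """Precision: share of returned pages that are relevant.
    Recall: share of relevant pages that were returned."""
    returned, relevant = set(returned), set(relevant)
    hits = returned & relevant
    precision = len(hits) / len(returned) if returned else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

serp = ["page_a", "page_b", "page_c", "page_d"]   # what the engine returned
truth = ["page_a", "page_c", "page_e"]            # hypothetically relevant pages

p, r = precision_recall(serp, truth)
# 2 of the 4 returned pages are relevant (precision 0.5);
# 2 of the 3 relevant pages were returned (recall two-thirds).
print(p, r)
```

The trade-off the post gestures at is visible here: returning fewer, more tailored results can raise precision while quietly sacrificing recall – exactly the “excluding particular sorts of results” worry.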
Of course, there’s also the issue that both of these strategies might be confirmatory in nature – personalisation and personal recommendation both rely on past behaviours, and both are susceptible to confirmation bias. That’s true, but in the former case it looks quite hard to break that pattern – in part as explained by [Thomas Simpson in his article]2 – regardless of how virtuous an epistemic agent one is, while in the latter case such change might well be possible, although this will depend on how searches are conducted, how results are presented, and transparency about the alternative results available (which Graph Search might currently fall down on).

(BTW, this [link]8 via [Robert Farrow]9 on Twitter is a good example of why, although the principle of Graph Search might be interestingly different, it might actually be worse than Google personalisation in terms of bias and dirty data.)
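The feedback loop behind the confirmation-bias worry can be sketched in a few lines. This is a deliberately crude toy model (all names, scores, and the boost rule are my assumptions, not any company’s ranking algorithm): the ranker boosts topics the user previously clicked, the user clicks the top result, and a small initial leaning compounds round after round.

```python
# Toy simulation of a personalisation feedback loop (all values hypothetical).
from collections import Counter

def rank(pages, click_history):
    """Order pages by base relevance plus a boost of +1 per past click
    on that page's topic – a crude stand-in for personalisation."""
    boosts = Counter(click_history)
    return sorted(pages, key=lambda p: p["relevance"] + boosts[p["topic"]],
                  reverse=True)

pages = [
    {"url": "contrarian.example", "topic": "opposing_view", "relevance": 3},
    {"url": "familiar.example", "topic": "favoured_view", "relevance": 2},
]

history = ["favoured_view", "favoured_view"]  # a small prior leaning
for _ in range(3):
    top = rank(pages, history)[0]   # the user clicks the top result each round
    history.append(top["topic"])

print(history)  # the favoured topic wins every round; its lead only grows
```

Even though the contrarian page has higher base relevance, two prior clicks are enough to keep the favoured topic on top forever – which is why breaking the pattern through individual virtue alone looks so hard.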


  1. “Six Provocations for Learning Analytics”

  2. “Evaluating Google as an Epistemic Tool”

  3. “Evaluating recommender systems as epistemic tools”

  4. “Is Google making me [stupid|smarter]…how about Bing?”