After [my LAK13 talk]1 someone (sorry, I forget who) asked, roughly: “you don’t actually think you can use learning analytics to tell us about epistemology, do you?” In this post, I’ll first discuss the answer to the (intended) question. I’ll then go on to a separate point – data (big or otherwise) might give us some information regarding the nature of epistemology and epistemological claims, as I discuss below.

Disambiguating Epistemology and Epistemic Cognition

In this case, there was a confusion between epistemology – the theoretic stance towards the nature of knowledge – and epistemic cognition – the intrapersonal cognitions regarding what it means to know. The reviewers rightly pulled us up on this in the paper, and I hope we addressed it there (if not in the talk) by highlighting the distinction between epistemology and epistemic beliefs (or behaviours/actions/cognitions/talk, etc.), saying:

1. Epistemology: which we introduce in the first sections of the paper, relates to the philosophical analysis and conceptualisation of curriculum content and assessment for knowledge (in our educational context)
2. Epistemic beliefs: relate to the intrapersonal, psychological conceptualisations that individuals hold regarding knowledge

Thus the former, epistemology, matters because:

* The ways that we assess, the sorts of tasks we set, and the kinds of learning we believe to take place (and aim for) are bound up in our notions of epistemology. Learning analytics are not objective or neutral: data does not “speak for itself” but has been designed by a team who, implicitly or explicitly, perpetuate the pedagogical and epistemological assumptions that come with any assessment instrument.
* The Danish example shows concretely how epistemology relates to assessment regimes.
When knowledge is seen as something that can only be evidenced in contextualised activity, and when it is embedded in one’s physical and digital environment, the role of the internet is redefined as a metacognitive tool which cannot be excluded from assessment.

The latter, epistemic cognition, is closely related, and comes into play because:

* The sorts of assessment and pedagogy which students are exposed to will relate to the types of epistemic challenge they encounter in their education – systems with a focus on ‘right answerism’ and limited access to external epistemic resources offer fewer opportunities for challenging knowledge claims (Davis, 1999; Katz, 2000).
* Indeed, a key component of AfL may be the disambiguation of the epistemic requirements of questions – in terms of understanding the question, its context, and the knowledge required to answer the question (Black & Wiliam, 2009).

So the talk used learning analytics to ‘get at’ the latter, as an instantiation of how an assessment grounded in a particular epistemology (the former) might look.

Data for Epistemology?

To answer the question as asked (rather than as intended!), there is something interesting about how empirical data might inform our understanding of concepts – including epistemology. In his book ‘Knowledge and the State of Nature’, Edward Craig considers the role of ‘knowledge’ as distinct from other forms of belief, foregrounding those circumstances under which the category ‘knowledge’ appears important. Others have also taken such an approach. Both philosophy and understanding our normative practices matter here – ‘knowledge’ is something special (hence we study it), and philosophy helps us tease out its edges, but in other ways so can some types of empirical evidence – including appeals to our intuition, and exploration of how people actually use knowledge claims – e.g.
the reply “yes” to “do you know the time?” has been used by Andy Clark to support claims around the extended mind.

Philosophy provides a toolkit for thinking about concept definition (‘defining’ qua dictionaries, and qua clarity). Sometimes, empirical data (particularly going beyond ‘our intuitions’) might also be interesting. For example:

* The great [Philosophy Experiments]2 website provides sets of thought experiments through which various philosophical assertions (and intuitions) are tested, and the respondent’s own – often conceptually contradictory – responses can be highlighted (in cases such as the ‘trolley’ experiment, for example – have a play! They’re good).
* [Experimental Philosophy (wikipedia)]3 may provide useful insight for conceptual analysis, and has (to a fairly minor degree) been explored in a variety of philosophical areas.
* To provide a salient data-driven example, James Overton used text analytic approaches to explore the use of “[‘Explain’ in Scientific Literature]4“. Taking 781 articles from Science, he developed and tested a classification scheme to explore the use of ‘explain’ in that literature. The details don’t particularly matter for my point, but his findings indicated that “explanation and inference to the best explanation are ubiquitous in science, that they occur across a wide range of scientific disciplines, and that they are a goal of scientific practise. These explanations and inferences to the best explanation come in a diversity of forms, which at least partially justifies the fragmentation of philosophical accounts“. In particular, Overton points out that “text mining can enhance traditional conceptual analysis by establishing facts about word usage” … “These empirical techniques supplement traditional philosophical methods“.

So, do I think that (big?) data defines knowledge? Of course not.
Do I think it might help us explore some interesting conceptual – pragmatic – differences in the use of words, and the concepts those words relate to? Well, sure, maybe. However, I think any such analysis would sit alongside a philosophical analysis – the combination might well help us ‘tease out’ the edges of our concepts.
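As a postscript, the kind of text mining Overton describes – establishing facts about word usage across a corpus – can be sketched very simply. The snippet below is a toy illustration only (it is not Overton’s actual pipeline, and the corpus is invented): it counts inflections of ‘explain’ in a set of article texts and collects the sentences in which they occur, ready for the sort of hand-classification his study then applies.

```python
import re
from collections import Counter

# Toy sketch (hypothetical, not Overton's method): tally uses of
# 'explain' and its inflections per article, and keep the matching
# sentences for later manual classification.
EXPLAIN_RE = re.compile(r"\bexplain(s|ed|ing)?\b", re.IGNORECASE)

def explain_usage(articles):
    """Return per-article counts and (title, sentence) contexts."""
    counts = Counter()
    contexts = []
    for title, text in articles.items():
        # Naive sentence split; a real corpus needs a proper tokenizer.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            hits = EXPLAIN_RE.findall(sentence)
            if hits:
                counts[title] += len(hits)
                contexts.append((title, sentence))
    return counts, contexts

if __name__ == "__main__":
    corpus = {  # invented snippets, not real Science articles
        "A": "The model explains the data. We explain this by appeal to X.",
        "B": "No relevant terms here.",
    }
    counts, contexts = explain_usage(corpus)
    print(counts["A"])  # 2
    print(counts["B"])  # 0
```

Note the word-boundary in the pattern deliberately leaves out ‘explanation’, which would need its own category – exactly the kind of usage distinction a classification scheme has to make explicit.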