I’ve just had the [‘Code Acts’ ESRC seminar series]1 pointed out to me, which looks like a fantastic series on the sociotechnical factors involved in code. In a slightly indirect way, these are issues I’ve been talking about this year while working on Learning Analytics, and previously in [my Nominet Trust blogs]2 around ‘outcome’/’output’ impact metrics, narratives around the drive to increase coding, and pragmatic (i.e. use-oriented) issues around the development of evaluations.

The 3rd Learning Analytics and Knowledge conference (LAK13) had a theme of ‘the middle space’: the conceptual bringing together of the learning sciences and analytics approaches. It sought to emphasise the importance of mutual understanding in the design of analytic (often algorithmic) approaches to learning, and in [our LAK13 paper]3 we argued that the triad of assessment, pedagogy, and epistemology is important in bounding this middle space. In particular, we suggested that analytic approaches, as assessments, can reify particular stances towards pedagogy or (moreover) what it means to ‘know’ (epistemology).

Since then, some of our core discussions have been around machine learning approaches to the detection of educationally productive dialogue. Our interest is in how algorithmic approaches can be designed to detect not only key words, or even topics, but the co-construction of ideas. Yet a key factor at play does not relate to algorithms; rather, it relates to the purposes of the dialogue for those involved in it. Much classroom talk, for example, is of necessity focussed on giving instructions rather than the co-construction of ideas. Moreover, the challenges of ‘feature detection’ across multiple utterances, and of segmenting utterances into meaningful chunks, are sizeable, yet crucial to this endeavour.
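To make the distinction concrete, here is a minimal illustrative sketch (not the method from our paper, and with entirely hypothetical marker words and transcript data) of a per-utterance keyword detector over a pre-segmented transcript, and of why it misses co-construction:

```python
# Hypothetical pre-segmented transcript: (speaker, utterance) pairs.
transcript = [
    ("T", "Open your books to page ten."),            # instruction-giving
    ("S1", "I think the ice melts because of heat."),
    ("S2", "Yes, and maybe the salt speeds it up?"),  # builds on S1's idea
]

# Assumed 'exploratory talk' markers -- purely illustrative, not a
# validated coding scheme.
MARKERS = {"because", "i think", "yes, and", "maybe", "why"}

def flag_utterance(text: str) -> bool:
    """Flag an utterance if it contains any assumed marker phrase."""
    lower = text.lower()
    return any(m in lower for m in MARKERS)

flags = [(speaker, flag_utterance(text)) for speaker, text in transcript]
# A per-utterance keyword check can separate instruction-giving from
# marker-bearing talk, but it cannot see that S2's turn only counts as
# co-construction in relation to S1's preceding turn -- that requires
# features spanning multiple utterances.
```

The point of the sketch is precisely its limitation: each utterance is judged in isolation, whereas co-construction is a relation between turns.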
To return to the LAK13 paper, a fundamental aspect of these concerns is how analytics are used, and how (or for what reason) data is produced. Contribution counts (e.g. on a forum or wiki) might be productive in some contexts, but not others. These are interesting issues for ‘code acts’: there is a balance to be struck between ‘more advanced’ algorithmic approaches and the effective use of tools in pedagogic contexts. In parallel, there is a balance between trying to use what data is available (indeed, trying to create such data) and being driven by high-quality pedagogy. Engaging with researchers from both sides, in the ‘middle space’, is key to driving our understanding forwards.

So, this looks to be a promising series. It also relates to wider interests of mine, for example [how (algorithms of) search shape people’s interaction with information]4, and more recently how they deal with lack of information/[testimony of silence]5, [diversity of views]6, and [testimony generally]7, as well as how we might [assess end user capabilities in search]8 contexts and [support them using knowledge structuring tools]9.
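A contribution count of the kind mentioned above can be sketched in a few lines (with invented forum data), which is part of the point: the analytic is trivial to compute, and whether it means anything depends entirely on the pedagogic context:

```python
from collections import Counter

# Hypothetical forum data -- illustrative only.
forum_posts = [
    {"author": "alice", "words": 120},
    {"author": "bob", "words": 5},
    {"author": "alice", "words": 80},
]

# A simple contribution count per author.
counts = Counter(post["author"] for post in forum_posts)
# alice: 2 posts, bob: 1 post -- but the raw count says nothing about
# whether any of those contributions was educationally productive.
```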

Footnotes

  1. http://codeactsineducation.wordpress.com/seminars/

  2. http://www.nominettrust.org.uk/knowledge-centre/blogs/?filters=uid%3A3929

  3. http://oro.open.ac.uk/36635/

  4. http://sjgknight.com/finding-knowledge/2013/01/is-google-making-me-stupid-or-smarter-how-about-bing/ “Is Google making me [stupid|smarter]…how about Bing?”

  5. http://sjgknight.com/finding-knowledge/2013/05/when-no-answer-is-answer-enough-2/ “When no answer is answer enough”

  6. http://sjgknight.com/finding-knowledge/2013/02/personalising-for-diverse-results-diversity-aware-search/ “Personalising for Diverse Results – Diversity Aware Search”

  7. http://sjgknight.com/finding-knowledge/2013/11/society-of-the-query-conference/ “Society of the Query conference”

  8. http://sjgknight.com/finding-knowledge/2013/11/assessing-finding-knowledge/ “Assessing finding knowledge”

  9. http://sjgknight.com/finding-knowledge/2013/03/cscw2013-2-workshop-papers-texan-fun/ “CSCW2013 – 2 workshop papers & Texan fun”