Last week I spent some time hosted by Stanford University’s Lytics Lab, talking to people there, and in the tech world, about education, technology, search engines, and epistemic practices. A really big thank you to everyone who met up with me and shared their thoughts (and in many cases, drinks/food) – it was a fantastically productive week. Particular thanks to the Lytics Lab, especially Emily Schneider, Brian Perone, and René Kizilcec. It was fun! Here’s a photo of Palo Alto’s main attraction.
I’ve already blogged about talking to people at Instagrok, Wikimedia, Google, and Fuji Xerox. This post covers my chats with people in the Lytics Lab about Coursera data (amongst other things); my next post will say a bit about a seminar Google gave on their experiences of delivering MOOCs.
MOOC Lytics – what do we want? err…
I blogged a while ago about what a MOOC research group might look like, so it was nice to meet one! In fact, I even had a chance to give a talk on some of our work on detecting Exploratory Dialogue in Online Discourse (I’ve revised it and will be redoing it at UW-Madison next week – so more on that then). I’m basically just going to bullet-point some interesting ideas I chatted to people about:
- Need for experimental capacity from the outset (at least to do some basic A/B testing)
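On that A/B point: the basic mechanics needn’t be heavyweight. Here’s a minimal sketch of deterministic bucketing (all function and experiment names are my own invention, not anything any platform actually uses) – hashing the user and experiment name together means the same learner always lands in the same arm, with nothing to store:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")):
    """Deterministically assign a user to an experiment arm.

    Hashing user_id together with the experiment name means the same
    user always gets the same arm, and different experiments are
    bucketed independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Same user, same experiment -> stable assignment every time
assert assign_variant("learner-42", "forum-layout") == assign_variant("learner-42", "forum-layout")
```

Obviously a real setup needs logging, consent, and sensible analysis – this just shows why the capacity is cheap to build in from the outset.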
- Scope for a ‘learners near you’ option – given enrolment numbers, there might be potential to get people engaged in co-located study (of course, we’d lose data!). There’s an interesting discussion in a paper here (and it was René I was talking to – thanks!)
- We also talked about the potential of properly indexed discussions, so that forums etc. could be searched effectively (to avoid repetition; I believe the main platforms aren’t particularly good at this at the moment)
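To illustrate what ‘properly indexed’ could mean, here’s a toy inverted index over forum posts (the posts and everything else here are invented for illustration – real search engines do vastly more, with stemming, ranking, and so on):

```python
from collections import defaultdict

def build_index(posts):
    """Build a tiny inverted index: term -> set of post ids.

    Looking up a term is then a set lookup rather than a scan of
    every post, which is what makes forum search cheap.
    """
    index = defaultdict(set)
    for post_id, text in posts.items():
        for term in text.lower().split():
            index[term.strip(".,?!")].add(post_id)
    return index

posts = {
    1: "How do I submit the quiz?",
    2: "Quiz deadline extended",
    3: "Lecture 3 notes",
}
index = build_index(posts)
index["quiz"]  # -> {1, 2}
```

A learner about to ask the quiz question for the fifth time could be shown posts 1 and 2 before posting – that’s the de-duplication win.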
- Would having a typology of post types (in forums) be useful? E.g. Q-and-A style posts versus discussion- or note-style posts, possibly with different ‘outcomes’, metrics, and social features for each
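To show what I mean by a typology, here’s a deliberately crude heuristic tagger – the labels, cue words, and length threshold are all my own invention, and a real system would want a trained classifier, but it shows how even a rough split could route posts to different metrics:

```python
def classify_post(text: str) -> str:
    """Crude heuristic typology: question vs discussion vs note.

    Questions get cue-word / '?' detection; long posts are treated
    as discussion; everything else is a note. All thresholds are
    illustrative guesses, not validated values.
    """
    t = text.strip()
    if t.endswith("?") or t.lower().startswith(("how", "why", "what", "does ", "is ", "can ")):
        return "question"
    if len(t.split()) > 30:
        return "discussion"
    return "note"

classify_post("How do I submit the quiz?")  # -> "question"
classify_post("Week 2 slides attached")     # -> "note"
```

The interesting bit is downstream: ‘questions’ might be scored on time-to-answer, ‘discussions’ on breadth of participation, ‘notes’ on bookmarks.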
- If we’re interested in the discussion, what outcome variables external to the discussion are we interested in fostering? Reputation?
- What would topic modelling offer as an entry point to discussions – e.g. could we (effectively) topic model, create a topic cloud, and point users to that as a way to encourage them to penetrate the fog? (I’m told Jon Huang has done some work on this.)
- Topic modelling might also give some insight into what’s “going on for you” compared to “what’s the course talking about” as a whole (what topics are you missing? which big ones are you involved in, perhaps as a key player?), etc.
- If we can do interesting things with topic modelling, maybe we can also start to build in “suggested topics” by recency, popularity, or instructor targeting (in fact, it could help instructors target their attention!). We could also direct people to “the type of discussions you’ve contributed to previously”, maybe even building in some collaborative filtering (although filter-bubble risks abound!)
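To make the topic-cloud idea a bit more concrete, here’s a toy sketch. I’m using plain term frequencies as a stand-in for real topic modelling (LDA or similar would give you proper topic mixtures per post); the stopword list and posts are invented for illustration:

```python
from collections import Counter
import re

# A tiny illustrative stopword list; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "i", "it", "for", "by", "on", "with"}

def topic_cloud(posts, top_n=5):
    """Rank terms by frequency across posts as a crude 'topic cloud'.

    Real topic modelling would cluster co-occurring terms into topics;
    raw counts are just a stand-in to show the entry-point idea.
    """
    counts = Counter()
    for text in posts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS:
                counts[term] += 1
    return counts.most_common(top_n)

posts = [
    "Stuck on the gradient descent assignment",
    "Gradient descent diverges with a big learning rate",
    "Anyone else confused by the assignment deadline?",
]
topic_cloud(posts, top_n=3)
```

A cloud built like this (or, better, from actual topics) gives a newcomer somewhere to click instead of an undifferentiated wall of threads – and diffing your own posts’ cloud against the course-wide one is the “what are you missing?” view.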
- I also talked to a few people about reputation management systems. In terms of accessing quality dialogue (and interaction) there are lots of interesting steps to be made, but some low-hanging, design-based fruit lies in using reputation tools – why invent a new fruit when you can make a smoothie? There’s an interesting question here, though: if we want to use votes of some sort to assess user contributions (and start to think about what sorts of users there are – including voters versus contributors of various sorts), what’s the incentive for users to ‘like’ things and seed that paradata? One option is a badging system of some sort. Tools like WikiTrust use cool methods to look at how long Wikipedia edits ‘survive’ and from that derive (a) a trust score for the edits and (b) a trust score for users who write long-lasting edits. I can imagine something like that based on discussions (maybe…). Another option would be a system where interacting with a resource ‘bookmarks’ it (and perhaps also increases your own centrality in the network). These would all need thinking through, and they’re all a bit quick ’n’ dirty, but they’re technically easier than some of the developing methods for analysing quality of discourse (for example), and they have other affordances too.
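The survival idea can be sketched crudely. This is my toy version of ‘reward content that lasts’, not WikiTrust’s actual algorithm (which is considerably more sophisticated); all the data here is made up:

```python
def trust_scores(edits, now):
    """Toy survival-based trust: reward contributions that last.

    `edits` is a list of (author, created_at, deleted_at_or_None)
    tuples on an arbitrary timeline. An edit's score is the fraction
    of its observation window it survived; an author's trust is the
    mean over their edits. A crude sketch of the idea only.
    """
    per_author = {}
    for author, created, deleted in edits:
        lifespan = (deleted if deleted is not None else now) - created
        window = now - created
        score = lifespan / window if window else 0.0
        per_author.setdefault(author, []).append(score)
    return {a: sum(s) / len(s) for a, s in per_author.items()}

edits = [
    ("ada", 0, None),  # still standing at now=100 -> score 1.0
    ("bob", 0, 50),    # reverted halfway through -> score 0.5
]
trust_scores(edits, now=100)  # -> {"ada": 1.0, "bob": 0.5}
```

For discussions, ‘survival’ might instead mean a post still being read, quoted, or linked weeks later – same shape of signal, different events.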
So! Lots to think about, and interesting things to take forward. I’d welcome thoughts in the comments or on Twitter, etc. I’ll add new things here if I remember anything else :-).