One of the tools I’ve been most impressed by in the student-research-support space, and the one I’ve interacted with longest, is instaGrok. instaGrok maps search terms onto related concepts, and provides quick facts and quizzes on the search topic, alongside a space to save (‘pin’) facts to a journal, which can be developed into fully written free-text assignments. It’s one of the tools I’ve curated on my [Edu-search]1, and one I’ve considered scooping more than once as new features are added. So it was great to finally put a face to a name, meet the instaGrok lead, Kirill Kireyev, and talk about what they’re up to: plans, possible collaborations, etc.

First off, it was interesting to chat about the ‘business’ side of things as an ed-tech startup. Frankly, Kirill could’ve made more money elsewhere with his machine-learning skills, and it’s interesting working out the revenue streams for tools like instaGrok (premium services, white labelling for a publisher’s content, offering ‘sponsored’ content, etc.). Another aspect of that is trying to access research funds such as the NSF’s (which, happily, has given some money for evaluation – so focus-group and usability-study work are in the pipeline).

In terms of new developments, down the line instaGrok will be able to link multiple searches to a particular project (at the moment each initial search receives its own journal, etc.) – making longer research, and ‘exploratory search’ type sessions, much better supported. Like [me]2, they’re also interested in building in bibliography support to encourage students to cite as they write, and to get into those habits early – access to citations helps us understand credibility judgements (see the link). This is important because one of the things educators want from instaGrok is to encourage reflection – on each ‘fact’ individually, and around the sensemaking activities between facts.
We can imagine various ways instaGrok might support that, including:

* creating concept-map ‘unions’ (so a search for a & b would indicate which concepts relate to both a AND b, and which to just a, or just b)
* extending the machine-learning question generation to ask deeper questions of the content than simple gap-fill exercises
* providing better analytics for teachers in the ‘classview’ option to indicate student concept mastery (by analysing how students have explored concepts, and which elements they pin from those concepts)

I’m sure there are other interesting options too, but those give an indication. Another tack would be to provide a ‘training grok’ in which we have prior knowledge of the documents from which the concepts (and thus the facts/quizzes) are drawn, by using a subset of pre-selected documents (I talked to someone at [google about this multiple document processing idea]3 later in the day). From this we could explore whether students were just blindly taking each ‘fact’ as given, or actually exploring the relationships between facts and making credibility judgements.

For me this is particularly important because in some ways instaGrok actually makes some of those credibility judgements a bit harder, insofar as it strips out some of the credibility indicators (the presence of lots of advertisements, weird formatting, etc.) – so these validity assessments are really important. One option is to encourage students to click through to the full site; another (complementary) option is to encourage students to upvote/downvote pages (this could be gamified… although obviously we’d need to be careful about students gaming that system!). The point of that sort of approach wouldn’t be to implement collaborative filtering (although that could be done, and it might be interesting for emphasising particular concepts), but rather to show a kind of ‘like’ feature on sources.

So, exciting things in the pipeline at instaGrok.
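As a rough illustration of the concept-map ‘union’ idea above: given the sets of concepts returned for two searches, the partition is just set intersection and differences. A minimal Python sketch – the function name and the example concepts are invented for illustration, not instaGrok’s actual API or data:

```python
def concept_union(concepts_a, concepts_b):
    """Partition concepts from two searches (a & b) into:
    those related to both terms, and those related to only one."""
    a, b = set(concepts_a), set(concepts_b)
    return {
        "both": a & b,     # concepts related to both a AND b
        "only_a": a - b,   # concepts related to just a
        "only_b": b - a,   # concepts related to just b
    }

# Hypothetical example: concepts a grok might surface for
# "photosynthesis" (a) vs "respiration" (b)
result = concept_union(
    ["chlorophyll", "glucose", "ATP", "sunlight"],
    ["mitochondria", "glucose", "ATP", "oxygen"],
)
```

Here `result["both"]` would hold the shared concepts (glucose, ATP), which is exactly what a combined map would highlight, with the two difference sets rendered on either side.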
For me, two research opportunities are particularly exciting (although I’m not sure they’ll fit into my PhD!):

1. looking at exploratory search traces for multi-search projects (where each project represents one ‘exploratory’ process, and we can examine the trail of searches in light of student project ‘success’ or textual qualities)
2. looking at how users ‘pin’ resources in a multiple-document-processing context, and what that might indicate about their evaluative and credibility commitments/practices – whether they click through to check sites, look for corroboration of ‘facts’, etc.

And I can imagine others around collaboration in search and so on – so I guess, watch this (and the instaGrok) space!




  3. “Google Coursebuilder and Search Education”