My work at the moment focuses on student writing practices, which I’m particularly interested in linking to other features of learning such as epistemic cognition. There is a rich set of methods for exploring epistemic cognition, including self-reports (survey items, interviews), think-aloud protocols during search and information processing, analysis of group talk, etc. One particularly interesting line of work has explored how students interrogate and integrate claims across multiple sources, particularly where the sources vary in credibility and in their stance on key claims. This work has typically involved asking students to rate the trustworthiness of the sources, and/or to take a pre/post-test to assess which claims the participants take up as facts from the various sources. In some cases, tasks have also required students to actually write a summary or synthesis themselves, exploring which information they include, how they cite it, and whether they include any evaluation. My PhD work used this sort of approach.
‘Bad Summary’ paradigm
I’m now thinking about how we might build on this paradigm to use rich sources, while retaining a relatively controlled environment. One method I’m considering is what I’m describing as a ‘bad summary’ task – a task in which students are asked to improve a draft piece of work containing known deficits. This sort of ‘error correction’ approach is not uncommon; I certainly used it (or versions of it) when I was teaching high school, and more recently I’ve been using iterations of draft texts to illustrate the development of writing over revisions.
I’m imagining providing students with a source-based text (ideally drawing on > 1 source text), with deficits such as:
- Rhetorical features (at a sentence, and global level) – e.g. that claims are stated without further structure, that an argument is/is not presented, that rhetorical moves are poorly expressed, etc.
- Local and global cohesion – e.g. that the text lists claims rather than integrating them, and/or that individual paragraphs are poorly connected with no overall thread
- Missing or inaccurate content and detail – e.g. that individual sources are heavily relied on, that key claims are missing or obscured
- Surface features (grammar, spelling, style)
- Evidence used (e.g. citations) – e.g. that citations for quotes, ideas, and specific claims are missing
Doing this, we could probe whether students identify, for example:
- where a summary fails to synthesise – for example, by not resolving conflicting claims from sources, or by over-emphasising particular sources – a key issue of evaluation and source resolution
- where source information is omitted (e.g. on specific claims, ideas, or quotations) – a key academic integrity issue
- where argument structures can be improved or added, etc. (at a local and global level)
These base bad summaries should be improvable. That is, with revisions amending the target features (which should be defined), they would be significantly improved. The easiest way to do this is probably by degrading a high-quality text. The problem is – they’re pretty hard to write! I suspect this is because text features typically correlate – texts with poor rhetorical structure are unlikely to be high quality in all other regards; often texts need wholesale re-writing, rather than minor surface changes.
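One way to keep the target features well-defined (and checkable later) is to encode each deliberately introduced deficit as a structured annotation on the degraded text. The sketch below is purely illustrative: the category names mirror the deficit list above, and the schema (spans as character offsets, a free-text note) is my own assumption rather than any established coding scheme.

```python
from dataclasses import dataclass

# Hypothetical deficit categories, mirroring the deficit list above.
CATEGORIES = {"rhetorical", "cohesion", "content", "surface", "evidence"}

@dataclass
class Deficit:
    """A known, deliberately introduced deficit in a 'bad summary' text."""
    category: str          # one of CATEGORIES
    span: tuple[int, int]  # character offsets into the baseline text
    note: str              # what is wrong, and what a good fix looks like

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Toy baseline text with two annotated deficits.
baseline = "Smith says X. Jones says not-X. Both are true."
deficits = [
    Deficit("evidence", (0, 13), "claim attributed to Smith lacks a citation"),
    Deficit("cohesion", (32, 46), "conflicting claims asserted without resolution"),
]
```

Annotating deficits this way would also make it straightforward to check, after revision, which known problem spans a student actually touched.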
Assuming a set of these could be created, though, we can imagine a study in which students are asked to assess the baseline text, and then asked to implement their improvements and assess their revised text. The benefits of this approach for student learning are:
- If it involves self-assessment, it engages the students in applying assessment criteria through evaluation of the initial text and their own improvement of that text
- An authentic perspective on students’ capacity to evaluate a draft text, and their ability to action that feedback – working on somebody else’s draft text, or trying to ‘fact check’ and modify an existing resource, is a pretty common task. It also forces students not only to give feedback but to think about how that feedback is implemented (i.e., by actually implementing it).
- Interventions could be run – e.g. providing bad summary texts with different types of markup to different groups – to investigate their impact on revisions (the quantity, quality, and targeting of those revisions)
Call for examples
I’m working on developing some examples, but given that I’m interested in working across multiple domains, it’d be fantastic to work with people who have expertise in other disciplines and an interest in this sort of case (e.g. science literacy cases).
E.g., we can imagine the instruction: “A colleague (or intern/peer/manager?) has drafted a summary of some key documents. You should improve their draft, and fix any problems that you find.”
Here, students would be asked to address a summary of three texts in which direct quotes are present without quotation marks, alongside poor paraphrasing and referencing, bullet-pointing (rather than synthesis), and an over-emphasis on one or two articles (before even throwing in issues of evaluation).
Does anyone (/both of you reading) have any sample cases that would fit this paradigm? I’d love to collaborate!
The idea is that by having prior knowledge of the sources and the initial document, we have students working from a known baseline to address known deficits. As such, understanding (a) whether text with known issues has been modified, and (b) where idea units are being sourced from (in the initial and modified text) should be relatively easy, assuming that the source texts are distinct enough from each other and use identifiable linguistic features. That could be done via manual analysis, but the ideal is that we’d be able to automatically detect some properties of the revisions – the benefit of knowing the baseline text and its deficits.
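As a minimal sketch of what such automatic detection might look like (the function names, the sentence-level diffing, and the n-gram granularity are all my own assumptions, not an existing pipeline), one could flag which baseline sentences a student changed, and estimate how heavily a stretch of text draws on a given source:

```python
import difflib

def modified_spans(baseline: str, revision: str) -> list[str]:
    """Return baseline sentences that were changed or removed in the revision."""
    base_sents = [s.strip() for s in baseline.split(".") if s.strip()]
    rev_sents = [s.strip() for s in revision.split(".") if s.strip()]
    sm = difflib.SequenceMatcher(a=base_sents, b=rev_sents)
    changed = []
    for tag, i1, i2, _, _ in sm.get_opcodes():
        if tag in ("replace", "delete"):
            changed.extend(base_sents[i1:i2])
    return changed

def ngram_overlap(text: str, source: str, n: int = 3) -> float:
    """Fraction of the text's word n-grams that also appear in the source."""
    def ngrams(s: str) -> set:
        words = s.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    t, s = ngrams(text), ngrams(source)
    return len(t & s) / len(t) if t else 0.0

# e.g. modified_spans("The sky is blue. Cats are mammals.",
#                     "The sky is blue. Cats are small mammals.")
# → ["Cats are mammals"]
```

Comparing `ngram_overlap` against each source text separately would give a rough picture of which sources an idea unit is drawn from, and of over-reliance on one or two of them; real texts would of course need proper sentence segmentation and more robust attribution than this toy version.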