What does it mean to think that a pupil has made progress? What would it look like if a group of pupils all made the same (relative) amount of progress? Thinking about these questions reveals a lot about what we value as a society. While the ideal might be to imagine a generally higher level of achievement for all, the reality is that much of the discomfort in our educational system seems to hinge on the need to rank people, differentiate between candidates, and arguably maintain privilege.
The problem is, if what we want to do is rank candidates, then two concerns come into play:
1) Grades from year to year will not be comparable: what it takes to get an ‘A’ in one year won’t necessarily be the same in another year.
2) Rankings undermine criterion-based systems, such that achieving an ‘A’ doesn’t necessarily mean the same skill or knowledge base has been achieved by all candidates, and grade boundaries become arbitrary rather than based on some well-conceptualised skill progression (although of course even that is somewhat arbitrary).
So, why might grades improve?
Well, one answer is that the tests are getting easier. Another is that they’re more applicable and better defined now, so less arbitrary variance occurs – which of course also means the exams should be easier to teach for. When accountability systems depend on pupil grades, this is a big motivator for teachers and schools. Making exams ‘fairer’ by making it clearer what the expectations are seems reasonable, particularly when it comes to subject knowledge rather than fairly general skills. We might have some concern, though, that more flexible exams would allow us to really test pupil ingenuity, and their ability to apply skills more broadly than tightly controlled exams (with tightly controlled subject matter) permit. This is not an argument for a return to the old system, but it might be one for open-book exams, or the use of the internet in exams as in Denmark (I talk about this a bit here).
Here’s another problem, though: with criterion referencing, teachers become familiar with the exam system over time and get better at teaching those elements which are likely to appear on the exam, and there might be other reasons pupil progress improves. One solution would be to make the exams harder, but the two issues above would still apply. At the moment the complaint is that it’s hard to distinguish between candidates – but if we’re just assessing capabilities in the subject according to predefined criteria, then this isn’t a valid complaint. That said, there might well be reasons we want to rank and differentiate between students; the question is whether one exam can serve to accredit learning against a predefined set of criteria, and to rank pupils, and to hold teachers accountable. This issue has been raised in a slightly different context (although I forget where): one exam system is probably not fit for the multiple purposes for which it is being deployed.
Would it be possible to have subject-based criterion-referenced assessment alongside a more general norm-referenced assessment? Would that be useful? Would it address the concerns above? Even if it would, is it a good idea (I’m not sure)?
As an aside, we should also be asking the question:
“Other than the assessments changing (and the commensurate pressure on teachers to teach to the test), why else might grades improve?”
The answer to which is: well, teachers have got better – at predicting and understanding the exams and criteria, at motivating pupils to put in more effort so that they’re prepared to try harder, and at teaching them new skills so they’re more capable of learning. It’s obviously more than that, and less than that in many cases. And we can ask whether or not the current system supports teachers and children in developing the current generation’s skills and knowledge, or whether another system would support them better. There is, though, a story to tell – and reaching for the “dumbing down” response just doesn’t wash without consideration of the range of factors, and an awareness that any research into ‘dumbing down’ is inevitably a) politicised, and b) incredibly hard to conduct (how do you control?!)
More soon…hopefully (I’m trying to push out a load of my ‘draft’ blog posts while I’m in my end-of-holiday, pre-return lull).