13 August 2020

Grade moderation was the right thing to do – until the SNP triggered chaos across the UK

By Ben Gadsby

“All things in moderation” is a cliché. But it’s also quite an important principle if you want to design a fair system for awarding qualifications to around one million young people in the midst of a pandemic.

Across the UK, the coronavirus has seen exams replaced by a system built on teacher predictions. Predicted grades are not a new idea, and we already know that they are generally inaccurate. Every year, teachers are asked to predict grades for A-level students as part of university applications. Research shows that just 16% of results are predicted correctly, with 75% of estimated grades over-predicted.

There are clear drawbacks to letting overestimates stand as results. It’s obviously good news for the young people who get better grades. But if you are due to sit your exams next year (or sat them last year), you may well see the drawbacks when you find yourself competing with this year’s lucky cohort for jobs in five years’ time.

And importantly, different groups are affected differently by inaccurate predictions. Historically, high-attaining students from disadvantaged backgrounds have been more likely to be under-predicted. We can’t really be sure how a prediction system at GCSE and A-level would play out for such students, or for ethnic minority students, for whom unconscious bias is also an acknowledged risk.

This is where moderation comes in. Pass rates in qualifications are usually pretty consistent from year to year, reflecting the fact that a large group of 18-year-olds is, on average, no smarter than the 18-year-olds of the previous year. So the overall proportions of students getting particular grades, or passing, are highly predictable. We can use this information to ensure that the grades we actually award closely reflect the grades we would have seen if it weren’t for the virus.
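To make that concrete, here is a minimal sketch, in Python, of how grades can be awarded so that a cohort’s overall proportions match a historical distribution: rank the students, then hand out grades by quantile. The names, grade shares and rounding rule are all invented for illustration; this is a toy version of the idea, not the actual model used by Ofqual or the SQA.

```python
# A minimal sketch of distribution-based moderation, assuming all we have is
# a teacher's rank order of students and a historical grade distribution.
# The names, shares and method are illustrative, not any regulator's model.

historical_share = {"A": 0.25, "B": 0.30, "C": 0.30, "D": 0.15}  # hypothetical

# Students ranked by their teacher from strongest to weakest (hypothetical).
students_ranked = ["Aisha", "Ben", "Cara", "Dev", "Ewa", "Finn", "Gita", "Hugo"]

def moderate(ranked, share):
    """Award grades so the cohort's proportions roughly match `share`."""
    n = len(ranked)
    results = {}
    cumulative = 0.0
    start = 0
    for grade, proportion in share.items():
        cumulative += proportion
        end = round(cumulative * n)  # everyone up to this quantile gets this grade
        for student in ranked[start:end]:
            results[student] = grade
        start = end
    return results

print(moderate(students_ranked, historical_share))
```

By construction, the cohort-level distribution comes out looking like a normal year; but whether any individual lands just above or just below a boundary depends entirely on where they sit in the ranking.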

Moderation is also a standard feature of any assessment system. People of my generation will remember coursework as a part of GCSEs, where moderation ensured that your grade wasn’t affected by how harsh (or lenient) your specific teacher was. Schools regularly moderate marking internally, to ensure all teachers have an accurate understanding of how their class is performing within the big picture. The process always ticks away in the background among examiners and markers, regardless of the exact format of the assessment.

But moderation has gone out of fashion this week – first in Scotland following the release of Highers grades last week, and then in England, ahead of today’s A-level results.

The trigger for this was analysis from the Scottish Qualifications Authority showing that the moderation process had resulted in students living in disadvantaged areas being particularly likely to have their teacher’s predictions downgraded. This, everyone concluded, was grossly unfair and required an immediate U-turn.

But this is not necessarily true. Rather, this is one of those examples where well-intentioned concerns fail to reflect what’s really going on.

The unavoidable problem with any moderation approach is that we can’t translate system-level fairness onto individual students. Like all the best policy problems, it’s completely impossible: if we knew for sure how people would have done in their exams, we’d simply use that information. We are having to make a best guess.

As a practical example: if you have a hundred students who are close to a grade boundary, how do you decide who is just above the threshold, and who is just below? For better or worse, we normally take the exams to be definitive, even though we all know that it can come down to whether you had a good or bad day and whether your preferred topics came up. You can’t replicate how those vagaries would have unfolded.
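To illustrate how arbitrary the boundary cases are, here is a tiny continuation of the hypothetical sketch above: swap two adjacent students in the ranking and both of their grades change, even though the cohort-level distribution is untouched.

```python
# Toy illustration of boundary sensitivity (hypothetical names and boundary).
ranked = ["Aisha", "Ben", "Cara", "Dev", "Ewa", "Finn", "Gita", "Hugo"]
boundary = 4  # suppose the top four ranks clear a particular grade boundary

print(set(ranked[:boundary]))                # Dev clears the boundary, Ewa doesn't
ranked[3], ranked[4] = ranked[4], ranked[3]  # swap two adjacent students
print(set(ranked[:boundary]))                # now Ewa clears it and Dev doesn't
```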

So we don’t know whether students from disadvantaged backgrounds being downgraded in Scotland at a higher rate than their better-off peers reflects some great travesty, over-optimism from their teachers, or something else.

What we do know is that the Scottish approach of making precisely no downgrades means that the overall results are entirely out of line with what would have happened without the pandemic. The pass rate for Scottish Highers for the past decade was between 75% and 79%. This year, it’s 89.2%.

We don’t know how that will affect different groups – some young people are winners, but which ones? There will be a great irony if this measure, taken in the name of fairness and equality, gives middle-class Scottish teens a bigger boost than their less well-off classmates. Most depressingly, no one seems even to be asking the question right now.

And so to England, where A-level results are out today, and GCSEs are coming next week. The regulator, Ofqual, has undertaken a similar moderation process, and 39% of A-levels have been downgraded. Thanks to data Ofqual had already published, we know that, despite this process, the average grades received by different demographic groups were basically unchanged from last year – a good sign of fairness.

But in light of the Scottish debacle, the DfE pre-emptively made some changes. Exactly what those changes are is unclear. They won’t involve awarding students their predicted grades, but it seems that “mock exam results” has been added as a basis for appeal. It’s not quite clear how big a change this is. Much will depend on what minimum standards the mock exams have to meet, and what evidence is required of schools.

Because mock exams aren’t actually a standardised thing. Different schools sit them at different times of the year. Some don’t use them at all. Most schools cut out topics they haven’t taught yet. It seems common to deliberately mark harshly to give recalcitrant teens a proverbial kick up the backside – but this varies from teacher to teacher. The DfE have given Ofqual the unenviable task of working out how to deliver this in practice.

Unfortunately, Ofqual have said it’s going to take until next week before they can even specify the criteria. The appeals process itself will take even longer. There are so many losers in this proposed approach it’s hard to know where to start.

First, schools and teachers, who now have to deal with a flood of appeals from their students and pull together evidence about how their mocks were run. And, as Laura McInerney has pointed out, they’ll also have to pay for each appeal if the normal process is followed, which favours better-off schools.

Second, while we don’t really know how the new mock exam rule will play out, I would wager that middle-class schools are more likely to have run thorough mocks, collated the evidence and done the paperwork. That would give their students stronger grounds for appeal than those from poorer, less well-functioning schools. We started off with a system that treated students from both types of school as fairly as in a normal year; it’s hard to imagine that being maintained.

Third, this seems to create total chaos in the university admissions system, with knock-on implications for everyone. Fortunately, we’re starting with record numbers of students being accepted onto their first-choice courses. But for everyone else, universities presumably have to keep places open for students pending appeals. So many students won’t know where they are going in September for a few weeks yet. The purgatory continues.

But more broadly, most applicants hold two offers while they await their results: a first choice, and an insurance choice in case their grades fall short. Normally, once grades are known, students are immediately either accepted onto one of these courses or rejected from both. This finalisation of places is essential to the functioning of the clearing system: universities can’t start to offer out spare places if they don’t know how many places they have spare.

At some point today, people will start paying attention to the third alternative open to young people. In addition to the moderated, assessed grade and the appealed mock exam grade, there will be an autumn exam series in which students can actually sit their exams. This sounds great, but with results not due much before Christmas, anyone taking this route is almost certainly waiting until next September to progress to the next stage of their education – and that’s before you consider how students who have been out of school for six months get themselves exam-ready in the next six weeks. You’ve probably worked out the theme: it’s students from disadvantaged backgrounds, again, who will be least well placed to make use of this.

There were no perfect answers for replacing exams at short notice. Moderation was the best option available given the circumstances – but this week’s changes north and south of the border just add uncertainty and chaos to the system, while undermining efforts to protect disadvantaged students.



Ben Gadsby is Policy and Research Manager at Impetus, a charity that works to ensure disadvantaged young people succeed in school, in work, and in life.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.