One consequence of the Covid-19 pandemic is that we are embarking on an extraordinary national experiment in the way young people achieve their exam grades in England: switching from a heavy reliance on externally set and marked written exams towards much greater trust in teacher assessment.
Our education system has never seen such a rapid assessment turnaround. It was made necessary by the shutdown of school and college campuses for most students and the cancellation of this summer’s GCSE, A Level and other public exams.
England’s public exam system is complex, fragile and expensive, requiring careful management. From its origins in the School Certificates established over 100 years ago it has grown massively and now serves multiple purposes and audiences.
Public exams provide young people with rites of passage, generate evidence of learning and are used as passports or barriers to progression and as labels of success and failure – both for students and for the institutions they attend. The stakes are high and results prompt national debate about standards, along with being used as measures of national progress and competitiveness. They can also reveal deep social inequalities while providing a veneer of objectivity for them.
The sheer number of exams set and sat, the high dependence on external terminal assessment, the level of grade differentiation, the amount of checking and analysis required to establish validity, the degree of moderation and standardisation needed to achieve consistency and the mass of performance data generated: all this needs close management. It’s not surprising that the machinery required to run the system is so complex.
Previous changes, such as the switch from A*-G to 9-1 grading at GCSE or the move from modular to linear A Levels, required careful planning well in advance. This time, the revolution is happening in the space of a few weeks, steered by Ofqual, the exams regulator. In effect, the very experts whose job it is to hold the superstructure of exams together are now tasked with showing how well we can manage without it. Together with the exam boards, they are having to design a new system almost from scratch while keeping the interests of students and their progression at the heart of all their decisions.
We now know in broad terms what teachers and centres will be expected to provide by June for every student entered for an exam. In most cases it boils down to two things: a centre-assessed grade based on the evidence available and a ranking within each grade for all students entered for that exam.
This is a radical shift. The current system’s dependence on external assessment suggests a lack of confidence in teacher assessment, whereas this new process requires a high level of trust in teacher judgement. This is very welcome and, once established, that public expression of trust is something that could be built on in future.
Everyone involved will want this process to be valid and robust so that this summer’s grades can be valued and respected across the board, including by the colleges, universities and employers to which students are planning to progress. But we need to ensure that it doesn’t disadvantage those students already most at risk.
Clearly, every student’s education has been disrupted this year, but not all will be impacted equally. There is some evidence that black and minority ethnic students and disadvantaged students are more likely to have their grades under-estimated and to go on to perform better in exams than predicted. Without exams this year, this under-prediction could disadvantage many.
Ofqual will be undertaking an equality impact assessment and this should take into account any evidence of systemic under-prediction and try to correct for it. And if the system can’t predict precisely what grade every single student would have achieved, the least colleges and universities can do is to be flexible about their entry requirements and generous in the additional support they provide for the Covid-19 cohort once they progress in the autumn.
Equality concerns also apply to the proposed additional autumn exam series. Opening this up widely could undermine this summer’s grades and lead to new inequalities, so it is important to clarify exactly who this is for. Rather than being offered to anyone who is dissatisfied with their result, this opportunity should be for those candidates who couldn’t be assessed in the summer or whose progression is in serious jeopardy. The focus in the autumn should be on supporting students to move forward and succeed on their new programme rather than looking backwards and spending time improving on a grade they achieved in summer 2020.
Two further issues should be considered if the process is to be as manageable and fair as possible:
First, combining all the gradings and rankings coming in from colleges and schools nationally requires some moderation and standardisation. We know that this will take into account three main elements: the previous results in each centre, the expected overall national results and each student’s prior achievements – generally the strongest predictor of results. There needs to be maximum transparency about how the national statistical model for adjustment will balance these factors. While the global pattern of results may be fairly predictable, what matters to each candidate is that their personal results represent their achievements as fairly as possible. This is particularly tricky when applied to post-16 GCSE re-takers whose progress is less easy to predict because they are not a whole age cohort and are more ‘bunched’ around a few grades.
Second, large centres need help with ranking very large numbers of students. It is reasonable to expect teachers to rank the students they teach. Without this, it won’t be possible to create a sliding scale to which any statistical adjustment can be applied. But ranking every individual candidate on their own ranking point in a centre where there are several hundred in a single grade is neither practical nor any more accurate. Take GCSE maths: around 100 colleges enter over 500 students each, and some more than 1,000. In comparison, England’s 3,500 secondary schools enter an average of 150 Year 11 students each for GCSE maths. It would make sense to limit the number of ranking points per grade and to allow centres to place some students on the same ranking point. After all, in an exam, several students can achieve the same score.
This process, and the issues it raises, reveals a system that is very sensitive to minor changes. Because the stakes are high, grade boundaries become cliff edges and small differences in outcome can have life-changing consequences. But should the distinction between grades 3 and 4 or 8 and 9 at GCSE, or between AAB and ABB at A Level, really be so critical?
This year’s unexpected turnaround shows that major system change is possible. Once we get through this, we should take time to consider whether we really want to return to things exactly as they were. We could have a debate about what we’ve learnt from 2020 and the benefits of increasing trust, reducing the stakes, spreading the risk and dialling down the pressure. We might well conclude that simplification would be in the best interests of students, their teachers and their places of learning.
AoC response to the Ofqual consultation on grading A Levels and GCSEs (April 2020)