This year, GCSE and A-level results depend on teachers being unbiased. That's the real test

The nail-biting wait for GCSE and A-level results has a surreal quality this year, as students wonder what the outcome could possibly be of exams they haven’t sat because of the Covid-19 lockdown.

Most are assuming they’ll be awarded the grades their schools predicted they would achieve, although it isn’t quite that simple: teachers are asked both to predict the grade and to rank their students by how confident they are that each will attain that result. So you might be predicted a 9 at GCSE, then ranked first – most likely to achieve that 9 – which will enable exam boards to standardise results.

It’s obviously not great for last-minute merchants, but otherwise sounds as fair as it could be in an imperfect world. But it relies on teachers having a high, almost robotic level of impartiality in their evaluations – being able to look past external signifiers, like disruptive or dozy behaviour, and see the raw potential underneath. Also, of course, it relies on there being no bias, conscious or unconscious, against any particular group in any given subject.


From personal observation, I’ve never encountered a professional group that works harder than teachers to see past the confidence and polish of assorted privilege, to delve into what’s going on underneath. Yet teachers don’t exist under a bell jar: they are part of a society in which prejudices still flourish, so it would be strange if they were entirely immune from them. I remember, too (just about), that mine was the first year at university in which finals were marked without sight of the students’ names; it was also, by wild coincidence, the year that the number of women awarded first-class degrees surged. Some biases are so hard to shift that it’s quicker just to blindfold the judges.

Frustratingly, in the world of public exams where measurement is all, one thing has not been measured: how close teachers’ predictions come to the actual results, and whether they are more accurate with some students than with others. The Sutton Trust wrote a report in 2017 about access to universities for disadvantaged students, Rules of the Game, which found that in a world where over-prediction was the norm, the most able students from poorer backgrounds were systematically under-predicted. This had such an impact on their aspirations that the pipeline problem started here. These students weren’t even applying to the universities that have such well-documented problems with diversity, since, based on inaccurate predictions, they reasonably concluded they didn’t stand a chance of getting in. However, the results are complex – low-achieving students from poorer backgrounds were more likely to be over-predicted – and one of the first observations of the report is that this entire area is “surprisingly under-researched”.

Professor Debora Price, now a gerontologist at Manchester University, did some work comparing GCSE predictions and results in the early 2000s for one school – frustratingly, it was never published, as it was commissioned privately for that school. But Price’s observations were fascinating: systematic under-prediction for girls in sciences; under-prediction for behaviourally difficult but able students; under-prediction for ethnic minority students in every subject, particularly by certain teachers; and under-prediction for boys in art. Because Price and her colleagues had access to those same students as they went into their A-levels, they also unearthed some striking behavioural effects.

First, she told me, “it emerged in focus groups that the kids knew that the teachers’ predictions were wrong. They knew which teachers were racist. The thing they weren’t very aware of were the gender biases. You might guess that from theory, because girls would have internalised those low expectations.” Second, they remarked on what Stanford researchers have termed “dynamic complementarities” – grades act as a signalling mechanism that affects behaviour. So students were strongly influenced by their predictions, but in pretty random ways. Some would see a low grade and give up; others would work harder. Some would get a high prediction and coast; others would gain confidence and rise to the expectation. So the overall inaccuracy of predictions was partly caused by the predictions themselves: say you’re predicted an A, you stop working, you get a B – now the prediction is wrong. But was it? Or did it cause itself to be wrong?

The pragmatist will say that right now, in the middle of a pandemic, is the wrong time to rip up a prediction and grading system that we’re relying on more than ever. Yet it would – will – be an absolute travesty if – when – the existing inequalities of the system are exacerbated by this freak event that has already hit families so unequally in other ways, in terms of health and income, with racial inequality particularly intensified. And let’s not forget how tenacious racial bias is in education: even by the time BAME students have smashed every glass ceiling to get to PhD level, research by Leading Routes last year found this staggering statistic: over a three-year period, only 1.2% of the 19,868 studentships awarded by all research councils went to “black or black mixed students”, and only 30 of those were from a black Caribbean background.

Schools and universities will have to think creatively about how to read the GCSE and A-level results of 2020. They can also seek to redress the biases that predictions have long introduced into our system – and to do so, they will at least have a mammoth data set to work from. If this year sees girls underperforming in STEM subjects and BAME pupils performing less well than usual across all subjects, we will know definitively that there’s rot in the whole system. Ultimately, going beyond our current crisis, it may be that grade prediction as a waypoint for A-level choice and university admission should be abolished altogether.

• Zoe Williams is a Guardian columnist