Lessons from the UK’s A-Level algorithm debacle


Let’s imagine that England plays South Africa in an international rugby test match. The match takes place on a rainy day, and England wins 24–20. The South African players file a complaint against the result, which they perceive to be biased because English players are more used to training and playing in rainy conditions. The rugby federation eventually concludes that the teams were indeed treated unequally by the weather and that South Africa had been disadvantaged by 30%. Consequently, the federation alters the result and records that South Africa beat England 26–24 (South Africa’s 20 points, uplifted by 30%, become 26). Can that outcome be deemed fair? Has the altered result appropriately redressed an allegedly illegitimate bias?

The A-level algorithm debacle represents a similar, yet real-life, scenario. The Ofqual algorithm was meant to correct the alleged biases of overly generous teachers, but it ended up introducing even more unfair biases, against which students and staff rebelled. The Government-sponsored Ofqual algorithm was programmed to adjust grades downwards by taking into account each school’s collective past performance and the size of its student cohorts. This was not a neutral mechanism: the downward adjustments systematically privileged some types of schools and disadvantaged others. The injustice was blatant, and the popular uproar legitimate. Indeed, the altered grades were determined on the basis of collective patterns rather than individual performance, thus reinforcing path-dependent biases against schools in deprived areas. The so-called “A-level algorithm debacle” was foreseeable, yet it was not unavoidable.
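To see why such a mechanism is collectivist by construction, consider a minimal sketch of this kind of standardisation. This is a deliberately simplified illustration, not Ofqual’s actual model: the function name, the grade shares, and the small-cohort cutoff are all assumed for the example. Students are ranked by their teacher-assessed grades, and the school’s historical grade distribution is then imposed on that ranking, with small cohorts falling back on the teachers’ grades.

```python
# A deliberately simplified sketch of school-level standardisation.
# NOT Ofqual's actual model: names, shares and cutoffs are illustrative.

def standardise(teacher_grades, historical_distribution, small_cohort_cutoff=5):
    """Assign grades by rank against the school's past results.

    teacher_grades: list of (student, teacher_assessed_grade), ranked best first.
    historical_distribution: share of each grade at this school in past years,
        ordered best to worst, e.g. {"A": 0.1, "B": 0.3, "C": 0.6}.
    small_cohort_cutoff: assumed threshold below which teacher grades stand.
    """
    n = len(teacher_grades)
    if n <= small_cohort_cutoff:
        # Small cohorts: too little historical data, keep teacher-assessed grades.
        return dict(teacher_grades)

    # Large cohorts: force this year's grades into the school's past shape.
    students = [name for name, _ in teacher_grades]
    result, allocated = {}, 0
    for grade, share in historical_distribution.items():
        quota = round(share * n)
        for student in students[allocated:allocated + quota]:
            result[student] = grade
        allocated += quota
    lowest = list(historical_distribution)[-1]
    for student in students[allocated:]:  # rounding leftovers get the lowest grade
        result[student] = lowest
    return result


cohort = [("amy", "A"), ("ben", "A"), ("cara", "B"), ("dan", "B"), ("eve", "B"),
          ("fred", "C"), ("gita", "C"), ("hal", "C"), ("ivy", "C"), ("jo", "C")]
print(standardise(cohort, {"A": 0.1, "B": 0.3, "C": 0.6}))
# {'amy': 'A', 'ben': 'B', 'cara': 'B', 'dan': 'B', 'eve': 'C', ...}
```

In this toy run, ben’s teacher-assessed A becomes a B simply because the school historically produced only one A per year: the cap comes from the school’s past, not from his own work, which is exactly the path dependency criticised above.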

Human biases could not, and should not, have been corrected by algorithms. First, humans are, on the whole, fairly rational, and not as hopelessly biased as suggested by those who take Behavioural Economics too far. Were teachers wrongly biased in giving more generous grades than usual? Contrary to what the Government assumed, teachers took due account of the context. The Covid-19 crisis and the incredibly difficult learning conditions faced by students can be a legitimate and rational basis for a grade premium. Personalised grades, based on each student’s merits and (crisis) experience, may well have been part of teachers’ grading standards, thus leading to enhanced grades. People generally want personalised treatment, and students are no exception. Human judgment may still be the best way to assess individual performance.

Market forces would have been able to cope with the problems associated with a possible over-supply of high A-level grades: high grades for the 2020 cohort would have been taken with a pinch of salt by future employers and universities. This is exactly what happened in France in 1968. The student revolution prevented the normal examinations of the baccalauréat (the A-level equivalent), and the thresholds for passing the exam were lowered so much that the 1968 cohort was 30% larger than those of adjacent years.[1] Employers knew that a 1968 baccalauréat was not exactly comparable to a 1967 or 1966 baccalauréat, and they took that into account in their hiring decisions. Returning to England in 2020: potentially inflated A-level grades (leading to increased passes) would have created a problem for universities, which would have had to manage an increased number of applicants. But this could have been internalised by both universities and the job market, without the need to pre-emptively correct the alleged human biases of teachers. Consequently, the Government’s use of the Ofqual algorithm was both ungrounded and inefficient from the outset. Trust in teachers and the free management of schools are essential, yet both were lacking in this situation.

Beyond its irrational use to correct human rationality, the Government’s use of the Ofqual algorithm was unfair. By relying on collective performance criteria, with past performance as a guiding principle, the Ofqual algorithm went against the very essence of algorithmic power: the power to individualise decisions based on the computational processing of Big Data.[2] Pricing algorithms are used on a large scale thanks to their unparalleled ability to individualise prices on the basis of willingness to pay. Real-time prices based on individual preferences allow a larger number of market transactions to take place than fixed or highly inflexible pricing does. In the same vein, individualised advertising via the algorithmic selection of digital ads better satisfies consumers’ preferences while maximising producers’ expected revenues. All in all, algorithms make it possible to process large amounts of data in an optimised way, provided that the algorithm’s criteria are indeed “optimal”. Those who set these criteria are human beings.

Outside self-learning algorithms, where robot-like autonomous decisions can be adopted irrespective of subsequent human interventions, algorithms such as the Ofqual algorithm are the servants of their masters: human decision-makers. Algorithmic accountability thus boils down to governmental accountability, especially when algorithms are instrumental to public decision-making processes. Consequently, when students and teachers protest against the algorithm behind the altered A-level results, they aim at the wrong target: only humans are responsible for having instilled unfair collectivism into the measurement of individual performance. Only flesh-and-blood decision-makers are responsible for the discriminatory criteria inserted into the Ofqual algorithm. And only they decided to create a problem at the stage of A-level grades rather than deferring it (and its associated opportunities) to the later stages of higher education and the job market.
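The contrast drawn above between individualising and collectivising criteria can be made concrete with a toy pricing example. The sketch below is purely illustrative (the buyers, the numbers, and the surplus-splitting rule are assumptions, not a description of any real pricing system): tailoring each price to an estimated willingness to pay allows more mutually beneficial transactions than one posted price does.

```python
# Toy illustration of individualised pricing -- hypothetical numbers throughout.

def individual_price(willingness_to_pay, cost, margin_share=0.5):
    """Price a single transaction by splitting the surplus between buyer and seller."""
    return cost + margin_share * (willingness_to_pay - cost)

buyers = {"alice": 12.0, "bob": 8.0, "carol": 4.5}  # estimated willingness to pay
COST = 5.0          # seller's unit cost
FIXED_PRICE = 9.0   # single posted price, for comparison

# Fixed pricing: only buyers whose willingness to pay clears the posted price buy.
fixed_sales = sum(1 for w in buyers.values() if w >= FIXED_PRICE)

# Individualised pricing: every buyer whose willingness to pay exceeds cost buys.
tailored = {name: individual_price(w, COST)
            for name, w in buyers.items() if w > COST}

print(f"{fixed_sales} sale(s) at the fixed price, {len(tailored)} with tailored prices")
print(tailored)  # {'alice': 8.5, 'bob': 6.5}
```

The Ofqual algorithm did the opposite of this: instead of using data to tailor an outcome to each individual, it used a collective template to overwrite individual outcomes.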

“The perfect is the enemy of the good”, the old saying goes. The Government should have reflected upon this saying, and should not have attempted to “perfect” human biases with human-designed algorithmic biases. The responsibility is dual, but it cannot logically rest on the algorithm itself. Algorithmic governance is still in its infancy; ever-improving algorithmic transparency is needed, and it is under way. In the A-level algorithm debacle, what was missing was not so much algorithmic transparency (Ofqual published a 300-page booklet) but rather humility (why distrust teachers and grant undue faith to algorithms?) and accountability (why blame the algorithm rather than its creator and instigator?). Algorithm-based decision-making processes are sensitive matters, all the more so when public decisions impact the lives of millions of students and families. Algorithms are not perfect, and neither are human beings.

 




[1] Eric Maurin and Sandra McNally (2008), “Vive la Révolution! Long-Term Educational Returns of 1968 to the Angry Students”, Journal of Labor Economics, Vol. 26(1), pp. 1-35.

[2] Algorithms are defined as “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer” in Concise Oxford Dictionary (1999), 10th ed., Oxford: Oxford University Press.

