The UK’s A Level grading fiasco has cast algorithms in the limelight for all the wrong reasons. The algorithm was intended to assign grades to students in the absence of actual exams, taking into account each school’s historical record. When the results poured in, so too did the complaints: 40% of grades fell below what teachers had predicted. Public outcry ensued, the government U-turned and teacher-assessed grades were deployed after all.
The whole affair has been deemed an algoshambles. It’s not the only one of its kind; as automation takes hold all around us, high-stakes decisions are increasingly being deferred to computers, often at the expense of the most vulnerable in society.
EdTech is riding the wave of digital innovation; algorithms now inform all manner of educational matters, from schooling schedules to students’ learning pathways and, U-turns notwithstanding, exam grades. We need human-centred principles for the design and implementation of algorithms – here are seven to get us started:
1. Transparency
The development path of every algorithm should be traceable. An end user should know who created the algorithm, what process they undertook and what tools they used.
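What might that traceability look like in practice? Here is a minimal sketch in Python (the field names and values are invented for illustration, not drawn from any real exam board) of a provenance record that could be published alongside a grading model:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class ProvenanceRecord:
    """Hypothetical metadata an end user could inspect before trusting an algorithm."""
    authors: list[str]        # who created the algorithm
    methodology: str          # what process they undertook
    tools: list[str]          # what tools they used
    data_sources: list[str]   # which data informed its behaviour
    released: date = field(default_factory=date.today)


# Illustrative example, loosely modelled on the grading scenario above.
record = ProvenanceRecord(
    authors=["Exam board modelling team"],
    methodology="Standardisation against each school's historical grade distribution",
    tools=["Python 3", "pandas"],
    data_sources=["Historical school-level results", "Teacher-predicted grades"],
)
print(record)
```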
2. Explainability
The behaviour of the algorithm should be easy to explain – every decision it makes should have a clear rationale. Mathematical complexity must never be used as cover for arbitrary choices.
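By way of illustration only (the rule below is a toy, not the actual grading algorithm), every output can carry a plain-language rationale alongside the grade itself:

```python
from typing import NamedTuple


class ExplainedGrade(NamedTuple):
    grade: str
    rationale: str   # a plain-language reason attached to every decision


def grade_with_rationale(rank: int, cohort_size: int, historical_share_of_As: float) -> ExplainedGrade:
    """Toy rule: award an A only to students ranked within the school's historical share of As."""
    cutoff = max(1, round(cohort_size * historical_share_of_As))
    if rank <= cutoff:
        return ExplainedGrade("A", f"Ranked {rank} of {cohort_size}; the top {cutoff} places map to an A, "
                                   f"reflecting the school's historical share of {historical_share_of_As:.0%}.")
    return ExplainedGrade("B", f"Ranked {rank} of {cohort_size}; outside the top {cutoff} places that "
                               f"historically achieved an A.")


print(grade_with_rationale(rank=3, cohort_size=20, historical_share_of_As=0.10))
```

The rationale string is the point: a student or teacher should be able to read back, in plain terms, why the grade came out as it did.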
3. Proximity
The assumptions that undergird an algorithm’s behaviour should be informed by the context of its end users: their individual needs and preferences, their cultural norms, their hopes and fears. Algorithms must never be allowed to scale their judgements absent this local context.
4. Expert input
Human judgement should be sought from the earliest stages of an algorithm’s creation. The lure of purely data-driven approaches such as machine learning must never overwhelm the role of expert humans in shaping its intended behaviours.
5. Human overrides
An algorithm should augment rather than displace human judgement. End users should have the facility to override algorithmic judgements where those judgements conflict with their own objectives.
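One possible mechanism, sketched in Python with hypothetical names, is to treat every algorithmic output as a suggestion that only becomes final once a human confirms or replaces it:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    """An algorithmic suggestion that becomes final only once a human signs off."""
    subject: str                     # e.g. a student identifier
    suggested: str                   # the algorithm's output
    final: Optional[str] = None      # set by a human reviewer
    reviewed_by: Optional[str] = None
    override_reason: Optional[str] = None

    def confirm(self, reviewer: str) -> None:
        """The reviewer accepts the algorithmic suggestion as-is."""
        self.final = self.suggested
        self.reviewed_by = reviewer

    def override(self, reviewer: str, value: str, reason: str) -> None:
        """The reviewer replaces the suggestion, recording who did so and why."""
        self.final = value
        self.reviewed_by = reviewer
        self.override_reason = reason


# Example: a teacher overrides a predicted grade that conflicts with their own assessment.
decision = Decision(subject="student-042", suggested="C")
decision.override(reviewer="A. Teacher", value="B", reason="Coursework consistently at B standard")
print(decision.final)  # "B" -- the human judgement prevails, with an audit trail
```

Recording who overrode what, and why, also feeds the transparency and explainability principles above.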
6. Global maxima
Algorithms that forge predictions based solely on historical data risk perpetuating past injustices by consigning users to a ‘local maximum’. Algorithms must enable new visions of the future that escape the tyranny of the past.
7. True personalisation
Algorithmic judgements must be fair at the level of the individual. It is not sufficient to make recommendations, or evaluate overall performance, based on the ‘average’ user. There is no such thing as the ‘average’ human, after all. Every individual must be accounted for, particularly those ‘outliers’ for whom decisions carry the most extreme consequences.
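A toy example in Python, with invented numbers, shows why aggregate metrics are not enough: the average error across a cohort can look tolerable while a single ‘outlier’ suffers a severe downgrade.

```python
# Hypothetical predicted vs. teacher-assessed grades on a numeric scale (A* = 6 ... U = 0).
predicted = {"amira": 5, "ben": 4, "chloe": 4, "dan": 1}
assessed  = {"amira": 5, "ben": 4, "chloe": 4, "dan": 5}

# Negative error means the algorithm downgraded the student relative to the teacher's assessment.
errors = {name: predicted[name] - assessed[name] for name in predicted}

average_error = sum(errors.values()) / len(errors)
worst_downgrade = min(errors.values())

print(f"Average error:   {average_error:+.2f}")   # -1.00 -- looks tolerable in aggregate
print(f"Worst downgrade: {worst_downgrade:+d}")   # -4    -- catastrophic for one individual
```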