Our Director of Education, Dr Junaid Mubeen, spoke with Dr Wayne Holmes, a lecturer in Learning Sciences and Innovation at The Open University. Wayne’s research is rooted in the application of AI to both enhance and further understand learning, and in the ethical and social implications of AI applied in educational contexts. In a wide-ranging conversation, Junaid and Wayne explored some of the thornier questions around the use of Artificial Intelligence in Education, and the unintended consequences it often leads to. Wayne doesn’t pull any punches, which makes for a frank and enlightening exchange. Here are some of the highlights from the conversation.
Junaid: You ran the first ever workshop on the ethics of AI in Education last year. Why did it take until 2018 for educators to address this issue?
Wayne: I don’t know, really. The ethics of AI is big business and is being looked at in depth in other spaces, like healthcare and autonomous driving; just recently, the EU published its ethical guidelines. The question is why this has not yet happened in education (I’ve only been able to find a couple of academic papers). I think in the UK it’s because AI in education is still pretty small; people are only just beginning to think about these issues. Those working in AI and Education are not yet confronting what happens when AI clashes with Education.
J: Let’s get into that. You’ve mentioned previously that there are specific ethical considerations at the nexus of AI, data and Education. So what are the most pertinent ethical questions educators should be looking at?
W: As far as AI in Education goes, the conversation only ever focuses on data – issues like security and privacy – rather than on educational aspects like pedagogy. When we make choices about how to teach a student, there are practical consequences, but also ethical ones. For example, personalised learning is lauded despite the fact that it can mean so many different things. AI addresses personalisation of learning pathways, which is really just automating the teaching to address students’ strengths and weaknesses.
J: But that sounds reasonable – why not tailor to the needs of the individual student?
W: I’m not saying it’s wrong, just that it seems to be the only focus of AI. What we’re not addressing is that students are still being led to the same destination, which includes exams. AI is just making the journey more “efficient”, and by doing so it perpetuates existing systems of learning. The AI should enable the student to define their own destination. Let me offer a metaphor: I heard an AI company describe their intelligent tutoring system as being what Uber is to a bus. But here’s the thing: I don’t get an Uber because I want a nicer or more efficient route – I get an Uber to get to a different destination. If the destination doesn’t change, I may as well take the bus.
J: For solution providers to change the destination, to uproot the conventional wisdom of exams, they would have to engage at a policy level to bring about system-wide reform. Isn’t that an unreasonable ask?
W: If Google or Facebook had taken that attitude, they wouldn’t exist. They have decided that rather than doing A, B and C better, they should look to do something completely different.
J: So there is an imperative for innovators to disrupt? That sounds quite dramatic in its scope.
W: I am not criticising AI companies, but simply suggesting that they look at the true potential of their technologies. The technologies are dramatic in their scope. I recognise that may not be the best way to turn a profit. I don’t like the word ‘disrupt’, but we should ask how we can bring these technologies to bear to make things better.
J: You and I first collaborated on iTalk2Learn, which explored ideas around next generation intelligent tutoring. We were looking at using speech recognition to detect and respond to a child’s emotional cues, as well as exploratory learning environments. Why haven’t these ideas gone mainstream?
W: Because it’s difficult, and so takes lots of effort and resources. As a company, you have lots of stakeholders to serve – employees, shareholders and so on. So the question is whether you believe you have the resources to radically improve education, even if it might upset policymakers. It may be more tempting to just sell something more straightforward. It’s worth mentioning, though, that there’s a whole country, Finland, that uses dramatically different approaches to education and yet still performs well on more conventional measures like PISA.
J: You would think ‘Big Tech’ companies would be able to invest in research and development and crack some of these problems. Yet we’ve seen the likes of Facebook and AltSchool get it so hopelessly wrong. Why is that?
W: If I knew that, I wouldn’t be an academic – I’d be earning millions! At a UNESCO conference I observed that while the AI experts had a profound understanding of AI and a superficial understanding of education, the educators knew education but didn’t understand the implications of AI – they just saw these technologies as being the next step, not unlike interactive whiteboards once were. So we need to bring both groups together. We need the ideas, expertise and excitement that computer scientists bring, but we also need educators to give us the point of it all and to ensure we don’t simply replicate what we already have.
J: AI systems are becoming very complicated. A lot of machine learning applications are based on black boxes with complex decision-making processes. How can we bring more transparency to ensure educators have oversight of these tools?
W: Many AI experts have recognised that Machine Learning has its limits – even the most sophisticated systems don’t possess the intelligence of a 2-year-old. And when we look at systems like self-driving cars, they’re not just using Machine Learning. They also depend on rule-based systems. For example, a self-driving car doesn’t have to kill half a million people before it realises there’s something wrong with that, because humans have already programmed those rules in. So that’s probably where intelligent tutoring systems will go. Machine Learning hits the headlines, but we probably need a combination of approaches, and we never know what the next big thing will be. It’s important we don’t fetishise Machine Learning – a lot of companies exaggerate the extent to which they use these approaches.
J: Let’s circle back to teachers. AI systems come with the promise of freeing up teachers by automating marking and other mundane tasks. Isn’t this reason to be hopeful?
W: Well, automation comes from the manufacturing industry. Education is not manufacturing – it is based on human values. Automation might help to make teaching “more efficient”, but education is so much more than that – it is social, it’s about learning to be collaborative, critical and creative. It’s not always clear how AI supports that.
J: Let me take you to parts of the world where class sizes are upwards of 150, or even more extreme scenarios where there are literally no teachers. Is there not a case to give students the richest possible digital learning experience, which is better than nothing?
W: Of course those students would benefit from having digital content. But we can’t reduce learning to that; we lose too much by removing the human interaction. And AI has shown limited capability in humanising the learning experience. Many AI systems that claim to help teachers really do nothing more than give them an analytics dashboard. Companies are less interested in how to implement those tools in ways that truly enhance (rather than replace) teaching. It would be exciting to build tools whose sole purpose is to improve teaching – that’s very different to having students work one-to-one on their devices. One of the smartest uses of AI that I’ve seen is the Smart Learning Partner, from Beijing in China, which matches students to human tutors based on what they want support on. All the tutors have been scored by other students and students can choose the one they want. It may one day be possible to replace human tutors with AI-driven virtual tutors with different skills and personalities, but we’re a long way from that.
J: Let’s finish on efficacy. Throughout this conversation, you’ve planted the idea that we should be challenging the success criteria for education. Efficacy is usually based on the standard metrics of test scores. So is there a risk that this focus on efficacy amplifies existing practices, because innovations are ultimately measured against the same success criteria?
W: Yes. We measure what is measurable, but what we can measure is so limited. If an intelligent tutoring system, or whatever else, is judged on test scores alone, it will be designed to optimise test scores, and will ignore the other key components of teaching and learning. But I have to say, even with standard metrics, there are very few examples of robust efficacy studies in EdTech, let alone in AIED (AI in Education). And even the most robust studies are designed in such a way that their conclusions are very narrow.