Last night I joined The [Institute for Ethical AI in Education][^1]'s one-off virtual summit on the Ethics of AI in Education, centred on discussion of key ethical questions surrounding the use of AI in education and the practical steps required to ensure all learners can benefit from AI whilst being protected against the risks. This was off the back of a recent report (to which I contributed) on the same topic, "[Developing a Shared Vision of Ethical AI in Education: An Invitation to Participate][^2]". The event was organised over multiple days/sessions and unfortunately mostly past my bedtime, so I only zoomed into my own session, for which I was invited to act as a "catalyst", providing initial thoughts to seed conversation. Unusually, I wrote what I wanted to say in full prose… and then rewrote it as the election was announced. I'm pasting both versions below.

# AI in Education: A Focus on Learning {.wp-block-heading}

It's a privilege to act as a catalyst in this working party on ethics in AIED today, and I mean that both to acknowledge my thanks to the Institute for Ethical AI in Education, and particularly Tom, for this invitation, and as recognition of the privileged status we're afforded. I wrote something a bit more upbeat, but I've been struggling today and reflecting on my impact in the world while watching the news, seeing that a significant proportion – though it won't be half – of those able and choosing to vote in the US will have chosen candidates (and a proposition) who will work against equalities, who will challenge hard-won rights, and who will increase corporate capture. And we must reflect on that.

We were asked to talk about the right to privacy and how technology might empower or disempower learners. But that right and those technologies are not in a vacuum. Privacy is one right among many, and our technologies sit within a broader social context. Data and algorithms are – to use the title of Ellen Broad's book – made by humans. We make choices in who we include and exclude, in whether we track the impact of our interventions on different groups of learners, in the kinds of classifications we apply – and whether those are the right ones or not – and in who the benefits flow to.

Privacy offers some protections. But privacy is not meant to stand in isolation. And privacy doesn't speak to the choices – to the human decisions – in our design and implementation of technologies. That narrow focus on particular rights in isolation can lead us to "what are we allowed to do?". But our question should be "what will produce the best outcomes, futures worth having, for us?", and the myriad wider issues around that. In some of the work I've done with Kirsty Kitto we've been calling for a shift from focusing on frameworks to promoting practical reasoning around the kinds of dilemmas we face in balancing rights. Sets of ethical principles are useful, but it's how we reason and take action that embodies ethics. Crucial to that is a recognition of the outcomes we're aiming for, of the sorts of futures that support human flourishing, and ensuring we tackle the right challenges.

* How do we foster a learning technology ecosystem that places the right to education as central?
* How do we develop practical reasoning around tensions in rights?
* How do we understand the decision-contexts – social, organisational, and structural – in which data systems are deployed?
* How do we augment human intelligence using data-informed approaches for people?
AI and legal and ethical frameworks offer us tools for thinking. They'll get us so far, but what we need is discussion about what we want, and to do that effectively we need to look at who is impacted, who is represented, and who is part of the conversation.

# Original (and longer) draft {.wp-block-heading}

My thanks to the Institute for Ethical AI in Education, and particularly Tom, for this invitation to act as a catalyst in this working party on ethics in AIED. As a catalyst, my role is "to offer my own opinion and ideas on the topic. With controversy and diverse views welcomed" (lightly paraphrased invitation). In that spirit, I tried to be guided, as though by a YouTube algorithm, to the most extreme of my ideas… although I'm afraid I can't offer any cat videos. Instead, I'll begin by rejecting the premise, and reframing.

# What do we mean by protecting learners' privacy? {.wp-block-heading}

# What are the core rights that protect and support learners, allowing them to benefit optimally from AI in Education? {.wp-block-heading}

* Is learners' data sufficiently protected by frameworks such as GDPR – or do learners need further rights? What safeguards will learners need from surveillance, intrusive assessments, and data being shared with employers (in the case of corporate learning)?

We were asked to consider how legal frameworks protect learners' data and their rights. It's important to remember that underpinning these legal frameworks is respect for human dignity. Protection of (or empowerment for) privacy is a fundamental right because it is essential to autonomy and dignity, and to the limits we set, particularly in the context of power imbalances. These imbalances may exist now, or may arise later; that's why, when considering data, I instruct my students "You're evil, a data science troll… what can you do?" – we do need to consider tyrannical uses.

Legal protections will get you so far, but we also need to consider the ethical purpose to which our data is put, and the social licence of the organisations working towards the (hopefully shared) purpose of learning. To give an example, I have experienced a student refusing to use a large tech provider's services, not just because of privacy concerns, but because they were concerned about the company's work in military applications. Our students have legitimate views about the kind of society they want to live in and support, and they are not only concerned about the uses of their data; they're also concerned about peripheral uses, particularly by commercial entities who engage in applications they disagree with (military, private health, whatever).

* How do we foster a learning technology ecosystem that places the right to education as central? That means technologies that support teachers, understanding of learning processes (and research on this), and applications whose 'legitimate purpose' is primarily education (do we need more data and tech cooperatives in learning?).
* How do we develop practical reasoning around tensions in rights? Our core rights are often in balance, and this is about human decisions: identifying where rights are in tension and selecting data, measures, and outcomes that matter. Data can help us monitor harms, such as wage discrimination or systematic differences in exam scores, but it can also be used for surveillance (a toy sketch of the monitoring side follows this list). We need to build our capacity for having these conversations and sharing them[^3].
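By way of a toy illustration (mine, not something from the report or the summit), a few lines of pandas show the kind of disaggregation I mean: the column names ("learner_group", "exam_score") and the numbers are made up, and the point is only that the same data practices that can enable surveillance can also surface systematic differences in outcomes.

```python
# Illustrative sketch only: disaggregating outcomes by learner group.
# The column names and the data below are hypothetical.
import pandas as pd

def outcome_gaps(results: pd.DataFrame,
                 group_col: str = "learner_group",
                 score_col: str = "exam_score") -> pd.DataFrame:
    """Per-group count, mean, and std of scores, plus each group's gap to the overall mean."""
    overall_mean = results[score_col].mean()
    summary = results.groupby(group_col)[score_col].agg(["count", "mean", "std"])
    summary["gap_vs_overall"] = summary["mean"] - overall_mean
    return summary.sort_values("gap_vs_overall")

# Made-up example data
df = pd.DataFrame({
    "learner_group": ["A", "A", "B", "B", "B", "C", "C"],
    "exam_score":    [62, 70, 55, 58, 61, 74, 69],
})
print(outcome_gaps(df))
```

The technical part is trivial; the human decisions are in which groups we look at, which outcomes we count, and what we then do about the gaps.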

# How can learners be empowered or disempowered by data-driven technologies? {.wp-block-heading}

* Should there be a ban on using biased and/or opaque systems in high-stakes situations? Is this possible?
* In what cases could AI systems threaten learners' autonomy? Where is the line between a helpful nudge and inappropriate manipulation? At what point do intelligent recommendations begin to determine learners' futures?
* To what extent can we ensure that humans remain responsible and accountable for learners' outcomes?

Technologies, in themselves, should not be what empowers or disempowers us. Ellen Broad's book (which, I'll confess, I've not read yet), "Made by Humans", points out that these are human decisions. In the last week I've seen courses on AI ethics which focus solely on algorithmic approaches to reducing bias, but this fundamentally glosses over the wider context in which these decisions are made. Yes, we should talk about the technologies, but it is exculpatory of wider systems to ignore: under-representation in health data; over-representation in prison populations; and homogeneity in HR hiring systems. These feed into our wider context, and yes, it may be appropriate to take technical and legal approaches to manage the data implications of these issues, but it is absolutely fundamental that this does not distract from the wider issues they represent, and that technologies cannot fix those problems.

* How do we understand the decision-contexts – social, organisational, and structural – in which data systems are deployed? We need a deep understanding of the role of data and other forms of evidence in decisions, and of the decision context. A focus on the data/technology is a red herring. The A-level fiasco provides a good example: the focus of our criticism shouldn't be on the algorithm, but on an educational system that systematically produces different outcomes (and on a decision to produce an algorithm to closely reproduce these, in a decision context of… well, a pandemic!).
* How do we augment human intelligence using data-informed approaches for people? Turning away from data certainly isn't the answer, but we need to look at what our desired outcomes are. If we're interested in learning, we should evaluate technologies based on how well they support learning, not on their accuracy ('embracing imperfection', as my colleagues called it); more explainable – but simpler – may be preferable; we should be able to interrogate the evidence for tools (an issue, for example, for remote proctoring systems); and that also means we need to build data literacy across stakeholders[^4].

AI and legal and ethical frameworks offer us tools for thinking. They'll get us so far, but what we need is discussion about what we want, and to do that effectively we need to look at who is impacted, who is represented, and who is part of the conversation.