Kalervo Gulson, University of Sydney; Claire Benn, Australian National University; Kirsty Kitto, University of Technology Sydney; Simon Knight, University of Technology Sydney, and Teresa Swist, University of Sydney
Algorithms are becoming commonplace. They can determine employment
prospects, [financial security]1 and more. The use of algorithms
can be controversial – for example, [robodebt]2, as the Australian
government’s flawed online welfare compliance system came to be known.
Algorithms are increasingly being used to make decisions that have a
lasting impact on our current and future lives. Some of the greatest
impacts of algorithmic decision-making are in education. If you have
anything to do with an Australian school or a university, at some stage
an algorithm will make a decision that matters for you. So what sort of
decisions might involve algorithms? Some decisions will involve the next
question for school students to answer on a test, such as the [online
provision of NAPLAN]3. Some algorithms support [human
decision-making in universities]4, such as identifying students at
risk of failing a subject. Others take the human out of the loop, like
some forms of [online exam supervision]5.

How do algorithms work?

Despite their pervasive impacts on our lives, it is often
difficult to understand how algorithms work, why they have been
designed, and why they are used. As algorithms become a key part of
decision-making in education – and many other aspects of our lives –
people need to know two things:

1. how algorithms work
2. the kinds of trade-offs that are made in decision-making using algorithms.

In research to explore these two issues, we developed [an algorithm
game]6 using participatory methodologies to involve diverse
stakeholders in the research. The process becomes a form of collective
experimentation to encourage new perspectives and insights into an
issue. Our algorithm game is based on the [UK exam controversy]7
in 2020. During COVID-19 lockdowns, an [algorithm was used to determine
grades]8 for students wishing to attend university. The algorithm
predicted grades for some students that were far lower than expected. In
the face of protests, the algorithm was eventually scrapped. [Our
interdisciplinary team]9 co-designed the UK exam algorithm game
over two workshops and multiple meetings this year. Our
workshops included students, data scientists, ethicists and social
scientists. Such interdisciplinary perspectives are vital to understand
the range of social, ethical and technical implications of algorithms in
education.

Algorithms make trade-offs, so transparency is needed

The UK example highlights key issues with using algorithms in society,
including issues of transparency and bias in data. These issues matter
everywhere, including [Australia]10. We designed the algorithm
game to help people develop the tools to have more of a say in shaping
the world algorithms are creating. Algorithm “games” invite people to
play with and learn about the parameters of how an algorithm operates.
Examples include games that show people how algorithms are used in
[criminal sentencing]11, or can help to [predict fire risk in
buildings]12 There is a growing public awareness that algorithms,
especially those used in forms of artificial intelligence, need to be
understood as raising [issues of fairness]13. But while everyone
may have a vernacular understanding of what is fair or unfair, when
algorithms are used numerous trade-offs are involved. In our algorithm
game, we take people through a series of problems where the solution to
a fairness problem simply introduces a new one. For example, the UK
algorithm did not work very well for predicting the grades of students
in schools where smaller numbers of students took certain subjects. This
was unfair for these students. The solution meant the algorithm was not
used for these often [very privileged schools]14. These students
then received grades predicted by their teachers. But these grades were
mostly higher than the algorithm-generated grades received by students
in larger schools, which were more often government comprehensive
schools. So the decision was fair for students in small
schools, but unfair for those in larger schools, whose grades were allocated
by the algorithm. What we try to show in our game is that it is not possible
to have a perfect outcome, and that neither humans nor algorithms will
make a set of choices that are fair for everyone. This means we have to
make decisions about which values matter when we use algorithms.
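The trade-off described above can be sketched in a few lines of code. This is a deliberately simplified toy model, not the actual UK (Ofqual) algorithm: the cohort threshold, the weights and the grades are all invented for illustration.

```python
# Toy model of the cohort-size trade-off (NOT the real Ofqual algorithm).
# All thresholds, weights and numbers here are invented for illustration.

def awarded_grade(teacher_prediction, historical_average, cohort_size,
                  small_cohort_threshold=15):
    """Return a student's awarded grade under a simplified two-path rule:
    small cohorts keep the (often optimistic) teacher prediction, while
    large cohorts are pulled toward the school's historical average."""
    if cohort_size < small_cohort_threshold:
        # Small cohort: the algorithm is bypassed, prediction stands.
        return teacher_prediction
    # Large cohort: standardise against the school's past results.
    return round(0.3 * teacher_prediction + 0.7 * historical_average)

# Two students with identical teacher predictions of 90:
small_school = awarded_grade(teacher_prediction=90,
                             historical_average=70, cohort_size=8)
large_school = awarded_grade(teacher_prediction=90,
                             historical_average=70, cohort_size=200)
print(small_school, large_school)  # prints "90 76"
```

Whatever threshold and weights are chosen, some group is disadvantaged: lowering the threshold pulls more students toward their school's history; raising it lets more optimistic predictions stand. That is the value judgment the game asks players to confront.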
Public must have a say to balance the power of EdTech

While our
algorithm game focuses on the use of an algorithm developed by a
government, algorithms in education are commonly introduced as part of
educational technology. The EdTech industry is [expanding rapidly in
Australia]15. Companies are seeking to dominate all stages of
education: enrolment, learning design, learning experience and lifelong
learning. Alongside these developments, COVID-19 has accelerated the use
of algorithmic decision-making in education and beyond. While these
innovations open up amazing possibilities, algorithms also bring with
them a set of challenges we must face as a society. Examples like the UK
exam algorithm expose us to how such algorithms work and the kinds of
decisions that have to be made when designing them. We are then forced
to answer deep questions of which values we will choose to prioritise
and what [roadmap for research]16 we take forward. Our choices
will shape our future and the future of generations to come. * * *
The following people were also involved in the research underpinning
the algorithm game. From the [Gradient Institute]17 for
responsible AI, Simon O’Callaghan, Alistair Reid and Tiberio Caetano.
And from the [Tech for Social Good]18 group, Vincent Zhang.

Kalervo Gulson, Professor and ARC Future Fellow, Education & Social Work, Education Futures Studio, University of Sydney; Claire Benn, Research Fellow, Humanising Machine Intelligence Grand Challenge, Australian National University; Kirsty Kitto, Associate Professor in Data Science, University of Technology Sydney; Simon Knight, Senior Lecturer and Director, Centre for Research on Education in a Digital Society, University of Technology Sydney, and Teresa Swist, Postdoctoral Research Associate, Education Futures Studio, University of Sydney
This article is republished from [The Conversation]19 under a
Creative Commons license. Read the [original article]20.
Footnotes

- https://www.afr.com/companies/financial-services/banks-warned-using-ai-in-loan-assessments-could-awaken-a-zombie-20210615-p5814i
- https://www.innovationaus.com/robodebt-was-technology-beta-testing-on-most-vulnerable-citizens/
- https://nap.edu.au/online-assessment/research-and-development/tailored-tests
- https://theconversation.com/artificial-intelligence-holds-great-potential-for-both-students-and-teachers-but-only-if-used-wisely-81024
- https://theconversation.com/online-exam-monitoring-is-now-common-in-australian-universities-but-is-it-here-to-stay-159074
- https://www.theverge.com/2020/8/17/21372045/uk-a-level-results-algorithm-biased-coronavirus-covid-19-pandemic-university-applications
- https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/
- https://education-futures-studio.sydney.edu.au/education-futures-studio/workbench/
- https://www.sbs.com.au/news/scott-morrison-warns-high-tech-race-must-consider-ethical-implications-for-human-rights/09268bbc-d7a9-4dd6-81f9-f531a59c887c
- https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/
- https://ffteducationdatalab.org.uk/2020/08/a-level-results-2020-why-independent-schools-have-done-well-out-of-this-years-awarding-process/
- https://www.pwc.com.au/government/government-matters/education-tech-edtech-revolutionise-education-institutions.html
- https://www.nuffieldfoundation.org/static/2019/12/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf
- https://theconversation.com/algorithms-can-decide-your-marks-your-work-prospects-and-your-financial-security-how-do-you-know-theyre-fair-171590