This article was originally published on [The Conversation]1. Read the [original article]2.
Artificial intelligence holds great potential for both students and teachers – but only if used wisely
[Simon Knight]3, [University of Technology Sydney]4 and [Simon Buckingham Shum]5, [University of Technology Sydney]4
[Artificial intelligence]6 (AI) enables Siri to recognise your question, Google to correct your spelling, and tools such as [Kinect]7 to track you as you move around the room. Data big and small have come to education, from the rise of online learning platforms to the spread of standardised assessments. But how can AI help us use that data to improve education?

AI has a long history with education

Researchers in AI in education have been investigating how the two intersect [for several decades]8. While it is tempting to think that the primary dream for AI in education is to reduce marking load – a prospect made real through [automated essay scoring]9 – the breadth of applications goes well beyond this. For example, researchers in AI in education have:
- developed [intelligent tutoring systems]10 that use student test responses to personalise how they navigate through material and assessments, targeting the skills they need to develop;
- investigated [automatic detection of affect]11 – including whether students are bored or confused – and used that to adapt materials they use; and
- built conversational agents or chatbots that can engage in discussions with students, even to [support student-to-student collaboration]12.
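The first of these ideas – using student test responses to decide what a learner should work on next – can be sketched in a few lines of code. This is a minimal, hypothetical illustration; the skill names, update rule and thresholds are assumptions for the sake of the example, not how any real tutoring system works.

```python
# Sketch of skill-based item selection, the core loop behind (far more
# sophisticated) intelligent tutoring systems. All names and thresholds
# here are illustrative assumptions.

def update_mastery(mastery, skill, correct, rate=0.3):
    """Nudge the estimated mastery of `skill` toward 1.0 on a correct
    answer, or toward 0.0 on an incorrect one."""
    target = 1.0 if correct else 0.0
    mastery[skill] = mastery[skill] + rate * (target - mastery[skill])

def next_skill(mastery, threshold=0.8):
    """Pick the weakest skill still below the mastery threshold, so the
    next question targets what the student most needs to develop."""
    weak = {s: m for s, m in mastery.items() if m < threshold}
    if not weak:
        return None  # everything mastered; move on to new material
    return min(weak, key=weak.get)

mastery = {"fractions": 0.5, "decimals": 0.5, "percentages": 0.5}
update_mastery(mastery, "fractions", correct=True)   # 0.5 -> 0.65
update_mastery(mastery, "decimals", correct=False)   # 0.5 -> 0.35
print(next_skill(mastery))  # "decimals": the weakest unmastered skill
```

Real systems replace this crude running average with probabilistic student models, but the principle – adapt the next item to the inferred state of the learner – is the same.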
Artificial intelligence or intelligence amplification?

These are new approaches to learning that rely heavily on students engaging with new kinds of technology. But researchers in AI, and in related fields such as [learning analytics]13, are also thinking about how AI can provide more effective feedback to students and teachers.

[One perspective]14 is that researchers should worry less about making AI ever more intelligent, and instead explore the potential that relatively “stupid” (automated) tutors might have to amplify human intelligence. So, rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on [intelligence amplification]15 – or, going back to its intellectual roots, [intelligence augmentation]16. This is the use of technology – including AI – to give people the information they need to make better decisions and learn more effectively.

This approach combines the computing sciences with the human sciences, and takes seriously the need for technology to be integrated into everyday life. Keeping people in the loop is particularly important when the stakes are high and AI is far from perfect.

So, for instance, rather than focusing on [automating the grading of student essays]9, some researchers are exploring how intelligent feedback can help students better [assess their own writing]17. And while some are considering whether they can [replace nurses with robots]18, we are seeking to design better feedback that helps nurses become [high-performance nursing teams]19.
Impacts on what we teach

But for the use of AI to be sustainable, education also needs a second kind of change: in what we teach. To be active citizens, students need a sound understanding of AI, and a critical approach to assessing the implications of the “datafication” of our lives – from the use of Facebook data to [influence voting]20, to Google DeepMind’s [access to medical data]21.

Students also need the skills to manage this complexity, to work collaboratively and to innovate in a changing environment. These are qualities that could [perhaps be amplified]22 through effective use of AI. The potential is not only for education to become more efficient, but for us to rethink how we teach: to keep [revolution in sight]23, alongside evolution.

Another response to AI’s perceived threat is to harness the technologies that will automate some forms of work, and use them to [cultivate those higher-order qualities]24 that make humans distinctive from machines.
Algorithmic accountability
Amid [growing concerns]25 about the pervasive role of algorithms in society, we must understand what “[algorithmic accountability]26” means in education. Consider, for example, the potential for “predictive analytics” to flexi-price degrees based on a course-completion risk rating built from data on online study habits. Or the possibility of embedding existing human biases [into university offers]27, or of educational chatbots that seek to discern your needs. If AI delivers benefits only to students who have access to specific technologies, it has the potential to marginalise some groups.

Significant work is under way to clarify how [ethics and privacy principles]28 can underpin the use of AI and data analytics in education. Intelligence amplification helps counteract these concerns by keeping people in the loop.

A further concern is AI’s potential to de-skill teachers or make them redundant, which could fuel a two-tier system offering differing levels of educational support.

What does the future hold?

The future of learning with AI, and other technologies, should be targeted not only at learning subject content, but also at cultivating curiosity, creativity and resilience. The ethical development of such innovations will require both teachers and students to have a robust understanding of how to work with data and AI, to support their participation in society and across the professions.
[Simon Knight]3, Lecturer in Learning Analytics, [University of Technology Sydney]4 and [Simon Buckingham Shum]5, Professor of Learning Informatics, [University of Technology Sydney]4
Footnotes
- https://theconversation.com/artificial-intelligence-holds-great-potential-for-both-students-and-teachers-but-only-if-used-wisely-81024
- https://theconversation.com/profiles/simon-knight-207447
- http://theconversation.com/institutions/university-of-technology-sydney-936
- https://theconversation.com/profiles/simon-buckingham-shum-391142
- https://theconversation.com/the-future-of-artificial-intelligence-two-experts-disagree-79904
- https://utscic.edu.au/projects/uts-projects/collaboration-analytics/
- https://books.google.com.au/books?hl=en&lr=&id=GEK93NUHdXYC&oi=fnd&pg=PA383&dq=conversational+agents+to+support+collaboration+%22artificial+intelligence%22&ots=Rsn6bn8y9Z&sig=n352V2BotxK68Z2VjMNXpHhKY5A#v=onepage&q=tutorial%20dialogue%20as%20adaptive%20collaborative%20learning%20support&f=false
- http://radix.www.upenn.edu/learninganalytics/ryanbaker/STS-Baker-IJAIED-v15.pdf
- https://theconversation.com/rise-of-the-humans-intelligence-amplification-will-make-us-as-smart-as-the-machines-44767
- https://link.springer.com/article/10.1007/s40593-016-0121-0/fulltext.html
- https://cs.stanford.edu/people/eroberts/cs201/projects/2010-11/ComputersMakingDecisions/robotic-nurses/index.html
- http://theconversation.com/can-facebook-influence-an-election-result-65541
- https://www.theguardian.com/commentisfree/2017/jul/05/sensitive-health-information-deepmind-google
- https://www.researchgate.net/profile/Ido_Roll/publication/295681662_Evolution_and_Revolution_in_Artificial_Intelligence_in_Education/links/56db21ca08aebabdb412e15b.pdf
- https://www.youtube.com/watch?list=PLOF7tBP24lAdx-EE9OVY-ogbAIljBhicq&v=F13g_RsjMBA
- https://utscic.edu.au/algorithmic-accountability-learning-analytics/
- http://www.theguardian.com/news/datablog/2013/aug/14/problem-with-algorithms-magnifying-misbehaviour