REVIEW™ Software – A tool and a strategy to improve assessment

I wrote this with Darrall Thompson recently for a Pearson ACODE award (for which REVIEW was ‘Highly Commended for Innovation in Technology Enhanced Learning’, although other very worthy projects took the prize), and think it’s worth sharing! Darrall also put together a fantastic video for the application.

1. Student Success

Single-mark or grade
indicators are commonplace in describing student performance, leading to
a tendency for both students and staff to focus on this single
indicator, rather than a more nuanced evaluation of a student’s knowledge
and attributes (Thompson, 2006). Moreover, such assessments cannot
provide feedback regarding the development of knowledge and other
attributes across disciplinary boundaries and years of study. The REVIEW
software is an assessment tool designed to bring both summative and
formative feedback together, over time, and across disciplinary
boundaries. The tool has been developed to enhance learning through three modes of action:

1. Providing a self-assessment space that encourages students to reflect on and articulate their perception of their own achievements, which they can then compare to tutor assessments; written formative feedback is targeted at the criteria where the gap between the self-assessment and the tutor assessment is largest (a sketch of this gap logic is given at the end of this section).
2. Making explicit the association between assessments (including exams), graduate attributes, the marks given, and specific feedback (such that two identical ‘grades’ can be seen to be composed from multiple different criterion-level assessments).
3. Through ‘2’, acting as a change agent in developing and shifting assessment tasks and criteria towards constructive alignment between individual assessments – perhaps most notably examinations – and higher-level graduate attributes.

Led by
researchers at the University of Technology Sydney (UTS), the tool has
been evaluated against these objectives over a period of 12 years. Early
evaluations (Kamvounias & Thompson, 2008; Taylor et al., 2009; Thompson,
Treleaven, Kamvounias, Beem, & Hill, 2008) indicated that (1) based on
student feedback surveys, students reported generally positive experiences in using the tool, specifically that it enhanced the clarity of assessment expectations, and (2) based on instructor reflections and
analysis of unit outline changes, the tool was a driver for change in
developing explicit assessment criteria and constructive alignment
between assessments and graduate attributes. Perhaps most significantly,
analysis of four semesters of REVIEW self-assessment data indicates enhanced student learning through calibration: students’ self-assessments become progressively more aligned with tutor judgements over the semesters (Boud, Lawson, & Thompson, 2013), a
finding replicated over a shorter period, with varied cohorts, elsewhere
(Carroll, 2013). In addition, “There are early signs in student feedback
that the visual display of criteria linked to attribute categories and
sub-categories is useful in charting progress and presenting to
employers in interview contexts. Employers take these charts seriously
because they are derived from actual official assessment criteria from a
broad range of subjects over time” (Thompson, Forthcoming, p. 19).
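To make the first mode of action above more concrete, here is a small sketch in Python – with entirely hypothetical criteria, marks, and cohort data, illustrating the logic rather than REVIEW’s actual implementation – of how per-criterion gaps between a student’s self-assessment and a tutor’s assessment might be computed, how the largest gap could be used to prioritise written feedback, and how calibration over semesters (of the kind analysed by Boud, Lawson, & Thompson, 2013) might show up as a shrinking mean absolute gap:

```python
from statistics import mean

# Hypothetical self- and tutor-assessments for one student, as percentages
# per criterion (criterion names are illustrative only).
self_assessment = {"argument": 80, "evidence": 65, "structure": 70, "referencing": 90}
tutor_assessment = {"argument": 72, "evidence": 65, "structure": 55, "referencing": 85}

def criterion_gaps(self_marks, tutor_marks):
    """Gap (self minus tutor) for every criterion assessed by both parties."""
    return {c: self_marks[c] - tutor_marks[c] for c in self_marks if c in tutor_marks}

gaps = criterion_gaps(self_assessment, tutor_assessment)

# Written formative feedback is targeted at the criterion where the student's
# self-perception diverges most from the tutor's judgement.
target = max(gaps, key=lambda c: abs(gaps[c]))
print(f"Feedback priority: {target} (gap of {gaps[target]:+d} points)")

# Cohort-level calibration: if self-assessments are becoming better aligned
# with tutor judgements, the mean absolute gap should shrink over semesters.
semester_abs_gaps = {          # hypothetical per-student absolute gaps
    "semester 1": [18, 22, 15, 25],
    "semester 2": [14, 16, 12, 20],
    "semester 3": [9, 11, 8, 13],
}
for semester, abs_gaps in semester_abs_gaps.items():
    print(f"{semester}: mean absolute gap {mean(abs_gaps):.1f}")
```

The point of the sketch is simply that the gap, rather than the overall mark, drives where feedback is targeted and how calibration can be tracked.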
2. Scale of TEL Impact

REVIEW’s impact is seen across whole cohorts of
students, across multiple disciplines and institutions. From its initial deployment at UTS, the REVIEW software has been adopted by: The
University of New South Wales (UNSW); Queensland University of
Technology (QUT); The University of Sydney; and – in developing pilot
work – schools in both the UK and Australia. This external adoption
forms a part of REVIEW’s impact at scale, demonstrating its reproducible
effect on enhancing student learning, and (as we discuss further below)
providing a sustainable model for its continued development. Within
these institutional contexts, the tool has growing adoption amongst
academics. Indeed, its impact can be seen in the – largely organic –
growth, with coordinators finding the tool helpful for engaging their tutors in a unified approach to student feedback. For example, at UNSW, a 2011 trial of REVIEW in four courses has expanded to 160 courses using the software each semester across three faculties; “[i]t was found that the use of Review improved learning, teaching and assessment administration and reporting outcomes” (Carroll, 2016). REVIEW facilitates a ‘bottom-up’ approach to assessment
innovation (Cathcart, Kerr, Fletcher, & Mack, 2008; Thompson, 2009).
That is, rather than academics developing individual approaches, or being required to align their existing unit outlines and activities within them to prescribed graduate attributes, they use a facilitative tool to
make explicit the aims underlying their assessment tasks. This process
often leads to more scenario-based questions to test the application of
knowledge in examinations (Thompson, Forthcoming). Because of its mode
of action, instructor (or departmental) adoption has an impact on all
students enrolled in their courses – as such, all students in classes
which use REVIEW are impacted by the increased focus on making
assessment criteria explicit, articulating relationships between
assessment criteria and graduate attributes, and drawing constructive
alignment between these factors. Moreover, our experience indicates that
students do choose to engage with the self-review components of
REVIEW (generally over 2/3 of students, an uptake we intend to
investigate more formally). As a result of the benefits of the
self-reflection process – highlighted above – attempts have been made to
incentivize student engagement further, by providing reward or penalty
for engagement and by developing engaging materials that articulate the
purpose of the self-assessment process. “In my experience, the most
successful method has been an introduction by the unit coordinator in
combination with tutors who genuinely value the student gradings and
demonstrate this feature by marking a piece of work in a large lecture
context. Involving students in this live marking activity engages both
them and the tutors in further understanding the criteria…” (Thompson,
Forthcoming, p. 12).

3. Capability Building & Organisation

The
REVIEW tool is explicitly targeted at building capabilities – both of students, and of the academic staff and tutors who work with them to develop their graduate attributes. As such, REVIEW builds capability in criterion-based assessment, and in understanding how these criteria – applied by both students and assessors – contribute towards high-level graduate attributes. The system foregrounds these attributes, facilitating change that favours constructive alignment between assessment tasks and these goals. The system has won adoption through
its ease of use and range of visual feedback, alongside – for
instructors and administrators – a range of reports offering value for
course mapping, the benchmarking of sets of tutor assessments (e.g. to
explore discrepancies in tutor marking), accreditation and assurance
purposes, and monitoring changes in subjects over different deliveries.
The reports are then used as discussion tools, to support professional
development between tutors and instructors (Thompson, Forthcoming, p.
16). In addition, the software has facilitated course reviews by providing reports on the mapping of assessment criteria to graduate
attributes. These reports can, for example, reveal that some Course
Outcomes are not in fact mapped to assessments, again opening discussion
around assessment and outcome designs (Thompson, Forthcoming, p. 19).
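As a rough illustration of the kind of mapping check behind such a report – the data model below is made up for the example, not REVIEW’s internal schema – the following sketch flags Course Intended Learning Outcomes that no assessment criterion has been linked to:

```python
# Hypothetical mapping data: each assessment task lists its criteria and the
# Course Intended Learning Outcomes (CILOs) each criterion has been linked to.
course_outcomes = {"CILO1", "CILO2", "CILO3", "CILO4"}

assessment_tasks = {
    "Essay 1":    {"argument": ["CILO1"], "referencing": ["CILO2"]},
    "Project":    {"teamwork": ["CILO2"], "prototype": ["CILO1"]},
    "Final exam": {"applied scenario": ["CILO1", "CILO2"]},
}

# Collect every outcome that at least one criterion, in any task, maps to.
covered = {cilo
           for criteria in assessment_tasks.values()
           for linked in criteria.values()
           for cilo in linked}

unmapped = course_outcomes - covered
if unmapped:
    print("Course outcomes with no linked assessment criteria:", sorted(unmapped))
```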
Some impetus for use of REVIEW has come in one Faculty from the mandate
that graduate attribute development be reported on by course teams, with
REVIEW validated as a system to provide such evidence. Moreover, though,
engagement goes beyond ‘box ticking’. The software facilitates and
enhances an approach to criterion-based and self-assessment, but its
implementation has been developed with a set of resources to guide
academics in creating discipline-specific language to describe intended
learning outcomes and their application to assessment tasks and
criteria. It is thus a key facilitator of formative assessment both as
an agent for change, and in terms of its scaffolding capabilities –
emphasising criterion-assessment, and targeting feedback at those areas
in which a student’s self-assessment is least accurate. A key
facilitative feature in the software has been the ‘visual grammar’ which
threads through course documentation and the REVIEW software. In DAB (the UTS Faculty of Design, Architecture and Building), a memorable acronym, colour set, and symbol have been developed to foreground each attribute category to staff and students: CAPRI. CAPRI comprises
the graduate attributes in the faculty: Communication and Groupwork;
Attitudes and Values; Practical and Professional; Research and Critique;
Innovation and Creativity. These attributes are then foregrounded in
REVIEW, which is used to collect marks in the background from the
day-to-day marking of assessment criteria linked to both Course Intended
Learning Outcomes and the five CAPRI categories.
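As a sketch of the kind of structure this implies – with hypothetical criteria, outcomes, and marks rather than REVIEW’s actual data model – each criterion marked in day-to-day grading carries links to a Course Intended Learning Outcome and a CAPRI category, so criterion-level marks can be rolled up into the attribute profile that students (and employers) see charted over time:

```python
from collections import defaultdict
from dataclasses import dataclass

# The five CAPRI graduate attribute categories used in DAB.
CAPRI = ["Communication and Groupwork", "Attitudes and Values",
         "Practical and Professional", "Research and Critique",
         "Innovation and Creativity"]

@dataclass
class CriterionMark:
    criterion: str   # criterion wording in the assessment task (hypothetical)
    cilo: str        # linked Course Intended Learning Outcome (hypothetical)
    capri: str       # linked CAPRI category
    mark: float      # percentage awarded in day-to-day marking (hypothetical)

marks = [
    CriterionMark("presents design rationale", "CILO1", "Communication and Groupwork", 78),
    CriterionMark("ethical material choices", "CILO3", "Attitudes and Values", 64),
    CriterionMark("model-making technique", "CILO2", "Practical and Professional", 85),
    CriterionMark("precedent analysis", "CILO4", "Research and Critique", 71),
    CriterionMark("novel spatial concept", "CILO2", "Innovation and Creativity", 90),
]

# Roll criterion-level marks up into a per-category attribute profile,
# the kind of summary that can be charted for a student over time.
by_category = defaultdict(list)
for m in marks:
    by_category[m.capri].append(m.mark)

for category in CAPRI:
    scores = by_category.get(category, [])
    average = sum(scores) / len(scores) if scores else None
    print(f"{category}: {average if average is not None else 'no marks yet'}")
```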
4. Sustainability

Top-down directives about graduate attribute integration often involve
onerous documentation, alienating busy academics while having minimal
impact at the student level. For improvement in feedback to occur,
instructors need to be given timesaving strategies and support. Software
such as REVIEW must be integrated into the main university system to
save time in assessment and reporting processes. The timesaving aspects
and ease of use of REVIEW, together with its perceived value to staff and students, caused it to spread by osmosis, leading to its
commercialization by the University of Technology Sydney in 2011.
University technology divisions require highly secure systems that do
not compromise their existing services. Web-based systems such as REVIEW can be hosted internally by each university or externally by a provider. The developer’s recommendation is for REVIEW to be
externally hosted and undergo rigorous penetration testing with every
upgrade release. However, an internally hosted option is available. The
configuration of the system and Application Programming Interface (API)
integration is essential for broad adoption, together with policy
approvals by faculty boards, heads of school, and course directors.
REVIEW features are continually upgraded through a collaborative
funding model that enables universities that require a particular
feature to pay for it to be included. For example, the Assurance of
Learning reporting system illustrated in Figure 9 was funded by the
University of New South Wales (UNSW) because of their requirement for
Business School accreditation by the AACSB (Association to Advance
Collegiate Schools of Business), and EQUIS (European Quality Improvement
System). They have used this module in REVIEW extensively for their
successful and continuing accreditation processes and maintain that
previous methods of collecting and compiling data for these reports were
onerous and time-consuming at the most highly pressured times of the
year. REVIEW has automated this process with a level of granularity that
has assured its adoption across a number of faculties. The collaborative
funding model is a progressive format that enables such Assurance of
Learning and other modules to be available for any other user of REVIEW
free of charge. Shared or individually funded features are specified,
and costs are then estimated by the software developers in Sydney.
Larger modules and smaller features alike are delivered through ongoing upgrade releases. There is a REVIEW Users Group (RUG) jointly
run by UNSW and UTS as both an academic and technical forum for ideas,
feature requests, and upgrade presentations.

5. Reproducibility

The
REVIEW tool has been adopted across multiple disciplinary and
institutional settings. The tool provides flexibility in terms of the
specific functions that are deployed in each setting, and how they are
expressed. For example, REVIEW can be used in disciplinary contexts
requiring accreditation by professional and educational bodies. In
business faculties at three Universities (UTS, UNSW, QUT), an ‘Assurance
of Learning (AOL)’ module has been introduced for this purpose. Multiple
institutions have adopted REVIEW and shared their practices, customisations,
and ‘wish lists’ for features (see ‘sustainability’). The development of
REVIEW features has been driven by users, and is testament to the value
academics see in its use. A key set of resources has been developed
across this work, to support both students and staff in use of REVIEW,
and their understanding of criterion-assessment, peer and self-review,
and graduate attributes. As part of the commercialisation process (see
‘sustainability’), the original REVIEW code was converted from Flash to HTML5 by a small external developer. This development was funded using
a collaborative model across institutions, allowing the development of
modules (such as the UNSW Assurance of Learning module) that other
institutions now have available to them. This model has thus seen a
sustainable and reproducible means to achieve enterprise level
implementation of the REVIEW tool. The commercial website (see
‘sustainability’) gives some guidance to instructors, although further
funding could support the transition of resources to ‘open license’
materials to be shared through a key repository. Similarly, REVIEW
continues to be researched and developed, to build its capabilities and
ensure that it can be adopted across contexts. Further funding would
support this work; for example, a schools pilot is currently being
planned in both the UK and Australia. This pilot affords potential for
new research and development avenues, while also requiring a different
kind of support to the materials already developed. We are also actively
planning a project to investigate the quality of the qualitative
feedback that students receive, and the quality of their own
reflections, when using REVIEW. That research will extend REVIEW to
support staff and students in identifying and giving high quality
feedback – particularly important given the pedagogic value of students
giving feedback in peer-assessment contexts.

References

Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38(8), 941–956.

Carroll, D. (2013). Benefits for students from achieving accuracy in criteria-based self-assessment. Presented at ASCILITE 2013, Sydney. Retrieved from https://www.researchgate.net/profile/Danny_Carroll/publication/264041914_Benefits_for_students_from_achieving_accuracy_in_criteria-based_self-_assessment/links/0a85e53c9f80a21617000000.pdf

Carroll, D. (2016, April). Meaningfully embedding program (degree) learning goals in course work. Presented at Transforming Assessment. Retrieved from http://transformingassessment.com/events_6_april_2016.php

Cathcart, A., Kerr, G. F., Fletcher, M., & Mack, J. (2008). Engaging staff and students with graduate attributes across diverse curricular landscapes. Presented at the ATN Assessment Conference, University of South Australia, Adelaide. Retrieved from http://www.unisa.edu.au/ATNAssessment08/

Kamvounias, P., & Thompson, D. G. (2008). Assessing graduate attributes in the business law curriculum. Retrieved from https://opus.lib.uts.edu.au/handle/10453/10516

Taylor, T., Thompson, D., Clements, L., Simpson, L., Paltridge, A., Fletcher, M., … Lawson, R. (2009). Facilitating staff and student engagement with graduate attribute development, assessment and standards in business faculties. Deputy Vice-Chancellor (Academic) – Papers. Retrieved from http://ro.uow.edu.au/asdpapers/527

Thompson, D. (Forthcoming). Marks should not be the focus of assessment — but how can change be achieved? Journal of Learning Analytics.

Thompson, D. (2006). E-assessment: The demise of exams and the rise of generic attribute assessment for improved student learning. In T. S. Roberts (Ed.), Self, Peer and Group Assessment in E-Learning. United States of America: Idea Group Inc. Retrieved from http://www.igi-global.com/chapter/self-peer-group-assessment-learning/28808

Thompson, D. (2009). Successful engagement in graduate attribute assessment using software. Campus-Wide Information Systems, 26(5), 400–412. http://doi.org/10.1108/10650740911004813

Thompson, D., Treleaven, L., Kamvounias, P., Beem, B., & Hill, E. (2008). Integrating graduate attributes with assessment criteria in business education: Using an online assessment system. Journal of University Teaching and Learning Practice, 5(1), 35.