
How consensus grading can help build a generation of critical thinkers

Instead of punitive testing and high-stakes exams, consensus grading helps students learn how to critique their own work. James Thompson encourages a real-time reflective approach to assessment

29 Nov 2023
[Illustration: a pawn looks in the mirror and sees a queen]

Created in partnership with the University of Adelaide


Good assessment is critical to effective education design, yet assessment practices continue to contradict many key education objectives. Within the health professions, these tensions frequently surface when assessing student practice.

We want students to prioritise long-term learning, to engage with critical feedback and to be capable of applying their skills and knowledge in a range of ways. Yet we administer heavily standardised tests and high-stakes exams that judge work on a single outcome, use deficiency-based marking to hunt for faults, and exclude students from decisions about the quality of their own work. We want students to appreciate the complex demands of human care, yet we use reductionist testing approaches that lack nuance and fail to represent authentic contexts. We know that key learning moments are often linked to the mistakes we make, yet assessment rarely tolerates them.

These challenges helped fuel the design of student consensus grading, which drew inspiration from the debriefing practices of paramedic instructors: they would first seek to hear from students and to understand their practice rationales before offering any judgement or guidance of their own.

This approach is now widely used across a range of health disciplines and typically features three phases of assessment. Instructors first observe student practice much as they would in a traditional assessment, applying the same expectations and standards; the difference is that the assessor withholds all feedback and judgement at this stage.

Students are then required to lead a critical appraisal of their own work. Using the holistic criteria linked to their specific discipline, they self-appraise their performance and share the rationales and justifications for their actions. This introduces a new metacognitive element to testing. Unlike conventional reflective tasks, this appraisal happens in real time and without the bias created by already knowing how someone else has judged the work.

Having heard the student’s interpretation of events, assessors can then complement their observations with insight into the student’s understanding. Only then do assessors share their critique, beginning with an overall appraisal of the work, which is often evaluated as a separate grading component. Next, they critique the student’s self-evaluation, awarding marks where they agree. As a result, students who make mistakes but can correctly identify them are rewarded – an innovative feature of this approach. After a decade of using consensus marking, here is what we recommend.
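To make the mark arithmetic concrete, here is a minimal sketch in Python of how such a grade might be combined. The article does not publish a formula, so the 70/30 weighting, the function and the example findings below are hypothetical assumptions, offered for illustration only.

    # Illustrative sketch only: the article describes an overall appraisal
    # component plus marks awarded where assessor and student agree, but it
    # does not prescribe weights. Everything below is a hypothetical example.

    def consensus_grade(assessor_appraisal, student_findings, assessor_findings,
                        appraisal_weight=0.7, agreement_weight=0.3):
        """Combine the assessor's overall appraisal (0-1) with an agreement
        score that rewards students for correctly identifying issues in
        their own performance."""
        if not assessor_findings:
            agreement = 1.0  # nothing to agree on: full agreement marks
        else:
            # Proportion of the assessor's findings the student also named.
            shared = set(student_findings) & set(assessor_findings)
            agreement = len(shared) / len(assessor_findings)
        return appraisal_weight * assessor_appraisal + agreement_weight * agreement

    # A student whose practice had flaws but who correctly named both issues
    # in self-appraisal still earns the full agreement component.
    grade = consensus_grade(
        assessor_appraisal=0.6,
        student_findings={"delayed analgesia", "incomplete handover"},
        assessor_findings={"delayed analgesia", "incomplete handover"},
    )
    print(round(grade, 2))  # 0.72

In this sketch, honestly naming one’s own errors lifts the agreement component rather than lowering the grade, mirroring the reward the article describes.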

Think about who your test is created for

Academics can fail to prioritise students’ interests in assessment because of the pressure to evidence defensible standards and satisfy internal and external authorities. Consider how your test benefits the future professional interests of your students.

Keep things real

Even high-fidelity simulated learning is let down when assessment is not authentic. Student work should be judged using the same expectations and standards that students will see applied to their future practice in the real world. Avoid excessively detailed assessment rubrics, which are redundant beyond the classroom.

Decisions should fit the work, not the work fit the decision

Approach each assessment ready to consider a range of acceptable possibilities. Don’t try to predict every outcome with rigid rubrics that force complex humanistic care into a predetermined, quantified box.

Experts make mistakes, too

Even the best professionals make mistakes. What makes them experts is not an absence of problems in their practice but their ability to identify and respond to them. Assessment should reward rather than penalise these essential professional attributes. Promoting unrealistic standards encourages students to conceal mistakes or to contest penalties when faults are found; owning and responding to mistakes is a far healthier and more realistic approach. Students are very good at self-critique if you create the right learning climate.

Hold a mirror to your reflective practice

Donald Schön, who developed the concept of reflective learning, considered reflection and practice to be tightly bound. Avoid token reflective tasks that, in effect, ask students to validate how an assessor has already judged their work and to submit their response long after the event.

Test often and programmatically. Meaningful decisions about a student’s capabilities should be separated from the outcome of a single test. Refocus on what a student does as a result of a test and how it affects their performance trends over time.

Have a strategy to support change

Expect resistance when asking colleagues to surrender control of assessment decisions to their students. Colleagues and assessors need to see how the approach can empower all parties and improve the quality of their assessments.

Consensus grading offers a way of capturing reflection in real time. When students contribute to decisions about their own grades, they exercise the same skills they will later need in professional roles. Requiring little change to retrofit existing practical assessments, the approach better informs academic decision-making about student abilities and reduces the role of chance in results. The rich, student-led discourse generated by each event counters perceptions of punitive assessment and addresses questions of fairness. When assessment is configured in this way, testing can accommodate a range of justifiable outcomes.

James Thompson is a senior lecturer in health and science practice at the University of Adelaide.
