What is Structured Peer Assessment?

Getting students to mark each other's work has two huge advantages: it is brilliantly formative and it saves you a ton of time. But how accurate is it? And how do you validate that it really is helping students learn?

Structured Peer Assessment (SPA) is our patented system for delivering accurate, demonstrably formative peer assessment. Here is what it looks like from the student perspective.

The theory behind SPA comes from Alastair Pollitt’s seminal 2004 paper Let’s stop marking exams (pdf) which introduced the concept of Comparative Judgement (CJ) and led to a number of initiatives.

The core idea behind CJ is this: instead of trying to mark one piece of work against a strict set of criteria – a scoring rubric – CJ presents the marker with pairs of answers and simply asks “which is better?” Multiple markers work in a team, so that each answer is assessed by several different people, against multiple other answers. By processing all these comparisons through an appropriate algorithm, those data can be converted into a rank order for all answers, and thus into grades.
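To make the "appropriate algorithm" concrete, here is a minimal sketch of one standard way to turn pairwise "which is better?" judgements into a rank order: a Bradley-Terry model fitted with a simple iterative update. This is not the SPA implementation, and the judgement data below are invented for illustration.

```python
from collections import defaultdict

def bradley_terry_rank(comparisons, iterations=100):
    """comparisons: list of (winner, loser) pairs from "which is better?"
    judgements. Returns the answers sorted best-first by fitted strength."""
    answers = {a for pair in comparisons for a in pair}
    strength = {a: 1.0 for a in answers}
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(iterations):
        new = {}
        for a in answers:
            # Sum 1/(s_a + s_opponent) over every comparison involving a
            denom = 0.0
            for w, l in comparisons:
                if a in (w, l):
                    other = l if a == w else w
                    denom += 1.0 / (strength[a] + strength[other])
            new[a] = wins[a] / denom if denom else strength[a]
        # Rescale so the strengths don't drift between iterations
        total = sum(new.values())
        strength = {a: s * len(new) / total for a, s in new.items()}
    return sorted(answers, key=strength.get, reverse=True)

# Hypothetical judgements: each tuple is (preferred answer, other answer).
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "C"),
              ("B", "C"), ("C", "D"), ("B", "D"), ("A", "D")]
print(bradley_terry_rank(judgements))  # → ['A', 'B', 'C', 'D']
```

Note that each answer appears in several comparisons made against several different opponents; that overlap is what lets the model place every answer on one common scale.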

The big benefit is that you escape from the straitjacket of highly prescriptive rubrics that tend to reward rote learning of certain key words and phrases, and instead you can choose to reward such attributes as clear logic or narrative flair.

And the downside? Well, accuracy requires that each student’s answer be seen 30 times. Some of those decisions can be made almost instantly but some require considerable thought. Whilst it has been claimed that CJ is quicker than conventional marking, I and others have yet to be convinced.

But… if the students are doing the marking, that doesn’t matter! In fact it is a benefit. More time on task = more learning. We actually ask students to spend as long on the marking task as they did on the writing task.

This was the insight that led to SPA, but we didn't stop there. We added an extra element: students must also state their reasons for preferring one answer over another. This does two things: it forces the student to think through their rationale, and it provides a trove of formative feedback for the original writer of the answer.

And to encourage students to write thoughtful, positive feedback, we gamify it by allowing students to reward each other with ‘badge points’ for particularly good explanations.

From all this, you actually get three useful sets of marking data:

  • All the answers are ranked according to the collective decision of the students.
  • Students' accuracy as markers is also ranked. Technically, we take a statistical measure of conformance to consensus.
  • We also report a combined average of the two.
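One simple way to picture "conformance to consensus" is the fraction of a student's judgements that agree with the final collective ranking. The actual SPA statistic is not specified here, so treat this as an illustrative stand-in; the names and data are made up.

```python
def conformance(judgements, consensus_rank):
    """judgements: list of (preferred, other) pairs made by one student.
    consensus_rank: the collective rank order, best answer first.
    Returns the share of judgements consistent with the consensus order."""
    position = {answer: i for i, answer in enumerate(consensus_rank)}
    agree = sum(1 for preferred, other in judgements
                if position[preferred] < position[other])
    return agree / len(judgements)

consensus = ["A", "B", "C", "D"]                     # collective rank, best first
alices_calls = [("A", "B"), ("C", "B"), ("A", "D")]  # one call against consensus
print(round(conformance(alices_calls, consensus), 2))  # → 0.67
```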

Depending on the summative function of the assessment, any one of these three can be converted to grades simply by deciding where the grade boundaries should go.

More importantly, SPA gives you a strongly formative assessment tool that really puts the students in charge of their own learning.