Below is an analysis of a pair match (AKA Drag and Drop) question from the Computing Baseline (High level). This is the author’s view, using data from about 1,500 responses. The teacher’s view shows only the data from one assignment, but the principle is the same. You may need to click on the screenshots to see them full size.
It is very simple. Students have to drag four items into order. Like all good pair match questions, the wording is kept short and concise, so we can be reasonably confident that this tests students’ understanding of the concept.
The analytics show the selection rate for every possible pair. It’s easy to see a lot of orange and assume the students are doing OK. Not so. Let’s look at just the top four pairs:
Think of this as just a single multiple-choice question: “Which is Step 4?” The correct answer is “Evaluation” and 58% of students got it right.
Is that good or bad? Well, 25% of randomly dragging-and-dropping monkeys would have selected “Evaluation” too. Halfway between 25% and 100% is 62.5%, so if 62.5% of students had selected “Evaluation” you could have concluded that half your students actually knew the answer, or could work it out. But here only 58% did, so in fact fewer than half the students knew that the last step was “Evaluation”.
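The reasoning above is the standard correction for guessing: if a fraction p of students truly know the answer and the rest pick uniformly among the four options, the observed correct rate is p + (1 − p)/4, which we can invert. A minimal sketch (the function name is my own, not from the article):

```python
def knowledge_fraction(observed: float, options: int = 4) -> float:
    """Estimate the fraction of students who actually knew the answer,
    assuming everyone else guessed uniformly among `options` choices."""
    guess = 1 / options
    # Invert observed = p + (1 - p) * guess to solve for p.
    return (observed - guess) / (1 - guess)

print(round(knowledge_fraction(0.625), 3))  # → 0.5: the "half the class knew it" point
print(round(knowledge_fraction(0.58), 3))   # → 0.44: fewer than half knew Step 4
```

Plugging in the 58% observed for “Evaluation” gives roughly 44%, which is why the article concludes that fewer than half the students actually knew the last step.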
Now look at the four pairs for Step 3 and you can see that only 36% got it right. It’s a similar story for Steps 1 and 2.
It is pretty clear that if we look across the whole cohort, there is some dawning awareness that Evaluation is the end-point of the journey, but beyond that, they are pretty clueless.
The simple rule for Pair Match analysis is first to treat each group of pairs as a single multiple-choice question. Only once you have done that should you synthesise the per-pair results into an understanding of what is going on for the class.
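That rule can be sketched as a short loop: score each drop target as its own multiple-choice question, then correct for guessing. The Step 3 and Step 4 rates below are the figures from the article; the Step 1 and Step 2 rates are hypothetical placeholders for illustration only.

```python
OPTIONS = 4
GUESS = 1 / OPTIONS

# Observed correct rates per drop target. Steps 1-2 are made-up
# illustrative values; Steps 3-4 are the rates quoted in the article.
observed = {"Step 1": 0.34, "Step 2": 0.35, "Step 3": 0.36, "Step 4": 0.58}

for step, rate in observed.items():
    # Correct for guessing; clamp at zero for rates below chance.
    knew = max(0.0, (rate - GUESS) / (1 - GUESS))
    print(f"{step}: {rate:.0%} correct -> ~{knew:.0%} actually knew it")
```

Run over the whole question like this, the picture matches the article’s conclusion: only Step 4 shows any real knowledge, and even there it is under half the class.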