If you’ve ever opened a brand new textbook to find an erratum slip, you will already be familiar with one of the core problems of book publishing. Once printed, your content is set in stone. On the web, that problem goes away: find an error tonight, and it’s fixed by morning.
But what about things that aren’t so much errors as subtle weaknesses? The challenge isn’t so much to change them as to spot them in the first place. Educational publishers have long relied on customers to flag the gross infelicities that slip through our (usually excellent) quality control procedures. But subtly misleading wording? Generally, it’s been down to the teacher to cope with the confusion it generates.
Now, this too is starting to change. Tools are emerging that enable us to see exactly where we have inadvertently induced confusion instead of clarity. Yesterday, Miranda and I spent a productive couple of hours applying them to our bestselling Online ICT Assessment for KS3. The results were interesting.
If you’re already a Yacapaca user, you’ll be familiar with the Analyse screen that allows you to see aggregated results of a particular assessment across a specific set or class. It’s very useful for spotting areas that need extra reinforcement, or where a class is carrying a specific misconception that needs addressing.
Assessment authors have access to a similar screen, but one that shows results across the entire national cohort (for data protection reasons they can’t select individual schools or teachers, btw). A good question would have results looking something like this:
- Key: 66%
- Distractor 1: 17%
- Distractor 2: 8%
- Distractor 3: 9%
- Timeout: 0%
Two thirds of the cohort are getting it right, so we’ve pitched it at the right level for the students. Distractor 1, at 17%, represents a common misconception, while at 8% and 9%, distractors 2 and 3 are still plausible enough that students can’t simply eliminate them and guess the correct answer.
Now take a look at this question:
What did Tim do to make the first text example below look like the second example?
(there’s an image of two pieces of text; the upper one is large and bold)
- Key: changed the size and made it plain text: 36%
- Distractor 1: made it italic and changed the size: 4%
- Distractor 2: changed the font and made it plain text: 9%
- Distractor 3: changed the size and made it bold: 50%
- Timeout: 0%
Clearly, with more students choosing Distractor 3 than the Key, there’s something seriously wrong here. But what? The clue is in the pattern of responses. All but a few students know that Distractors 1 and 2 are incorrect, so they clearly do know the subject area. The Key and Distractor 3 are near mirror images of each other, and that is the giveaway. The text of the question is logically correct, but as students’ brains try to correlate ‘first’, ‘second’ and ‘below’ with a picture of two pieces of text, it’s a wonder they don’t go into a trance.
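The check we ran by eye is easy to automate. Here’s a minimal sketch (a hypothetical helper, not Yacapaca’s actual tooling) that flags any question where a distractor outperforms the key — exactly the red flag above, where students seem to know the material but misread the question:

```python
def flag_suspect_questions(questions):
    """Flag questions whose response pattern suggests confusing wording.

    `questions` maps a question name to a dict of option labels and the
    fraction of the cohort choosing each option; "key" is the correct answer.
    """
    flagged = []
    for name, responses in questions.items():
        key_share = responses["key"]
        # Largest share captured by any single distractor.
        top_distractor = max(v for k, v in responses.items() if k != "key")
        # If a distractor beats the key, the wording (not the subject
        # knowledge) is the likely culprit.
        if top_distractor > key_share:
            flagged.append(name)
    return flagged

# Illustrative data from the two examples above (fractions of the cohort).
results = {
    "good_question": {"key": 0.66, "d1": 0.17, "d2": 0.08, "d3": 0.09},
    "tims_text": {"key": 0.36, "d1": 0.04, "d2": 0.09, "d3": 0.50},
}
print(flag_suspect_questions(results))  # -> ['tims_text']
```

A real version would also want thresholds for keys that scrape by with, say, under 40%, but even this crude rule would have caught the question above.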
Now that we’ve found the problem, we can solve it. Simply numbering the examples and removing the confusing reference to ‘below’ would probably be enough. In fact, we’ve decided to expunge this question, along with a couple of other offenders, from the bank altogether when we introduce a new set in a couple of weeks.
So far, we’ve only applied this philosophy to multiple choice, but you can imagine it being progressively extended across all electronic resources as we develop the means to do so. Rather than watching textbooks fall steadily out of date, you should expect your teaching resources to get steadily better with age.