I believe I can now truthfully claim that Yacapaca is the only true formative assessment service for schools.
Here’s how our new peer-written formative assessment process works:
1. After approximately one question in ten, students are presented with the screen below. They are asked to write a feedback statement for other students, in 150 characters or less – the size of an SMS or a tweet. They only see this screen if they got the answer right, and it is not compulsory – there is always the option to skip this activity.
The requirement to write for other students seems to trigger a directness and clarity that you often don’t get when the same content is written for the teacher.
2. Very weak and inappropriate answers are filtered out by the software, then the peer feedback statements are presented to students in sets of three, linked to the specific answer they chose. This only happens on questions where we have gathered enough feedback statements.
Voting for the best option pushes the students into evaluative thinking – the top level of Bloom’s Taxonomy. Knowing that the answers come from other students, and not from authority, is also hugely important. If you look at the example below, it is quite clear that a rote learning approach is not going to get you to the ‘right’ answer, both because there isn’t one, and because none of these is perfect. (Which would you choose?)
Each action wins the student motivation points which can go towards a better avatar, or even a real-world prize. They don’t currently count towards the final grade from the quiz.
The process is continuous. When a new formative feedback statement is submitted, the weakest old statement is ‘voted off’ to make space for the newcomer. The statements just get better and better over time.
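One plausible reading of this rotation is a fixed-size pool per answer, where "weakest" means fewest votes. This is only a sketch under those assumptions; the real pool size and eviction rule are not stated, and the `Statement` and `submit` names are hypothetical.

```python
from dataclasses import dataclass

POOL_SIZE = 3  # assumed; matches the sets of three shown to students

@dataclass
class Statement:
    text: str
    votes: int = 0  # votes won in the "best option" rounds

def submit(pool: list[Statement], newcomer: Statement) -> list[Statement]:
    """Add a new statement; if the pool is full, evict the least-voted one."""
    if len(pool) < POOL_SIZE:
        return pool + [newcomer]
    weakest = min(pool, key=lambda s: s.votes)
    return [s for s in pool if s is not weakest] + [newcomer]
```

Because every submission can displace the current low scorer, the pool ratchets upward in quality over time, which is the "better and better" effect described above.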
I have been monitoring the results of this closely, and I am chuffed to bits with how well it is going. The quality of statements written ranges, as you would expect, from the blitheringly chaotic to the truly sublime. Students are in general better than many teachers at writing statements that are genuinely formative, rather than trying to spoon-feed their peers with the right answer.
The algorithmic filters successfully catch 99% of the silly responses. More importantly, students turn out to be extremely reliable in selecting feedback statements that are succinct and appropriate. I have yet to see a wholly incorrect statement win a round.
We still have plenty of work to do, adding support for more question types and variants. There is also the question of how to report the results of this work in terms of grades. And I would love to design an experiment that tested how much formative feedback actually boosts results when it comes to exam time.
If you have feedback on how well peer formative feedback is (or isn’t) working in your school, please let me know in the comments.