If you look in your gradebook for any given student set, you will see that all the results are reported in the grade scheme you chose for that set. It’s easy to take that for granted and not think about how it is achieved – at least I hope it is, because we have worked hard to make the enormous complexity of that task invisible to casual users.
Although teachers very rarely challenge the accuracy of Yacapaca results, I do occasionally get asked about their authority. Who decides, and by what right? The answer may surprise you: you do, by consensus with 100,000 other Yacapaca members.
Here’s how it all works:
In order to convert a quiz percentage result to a grade, we need to know how difficult the quiz was – and quizzes vary. A lot. Our early research showed that whilst the author of any given quiz will certainly have an intended level in mind, they are often mistaken about the real difficulty level. We need to rely on hard data.
If we already know the attainment level of each student who takes the quiz, we can derive the quiz difficulty via statistical regression. But as we derive student ability from their quiz results in the first place, this quickly becomes circular.
To break that circle, we have a source of data that matches grades to students – Quick Assignments (QAs) and Offline Assessments (OAs). When a teacher manually enters a grade for a student who has also completed a quiz, that provides us with an equivalence. Just one equivalence is not much use, but we have diligently collected tens of thousands of them, all neatly tagged to subject, syllabus, topic, grade scheme and student age.
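One of those tagged equivalence records might look something like the sketch below. The field names and types are purely my own illustration – Yacapaca's actual schema is not published – but they capture the tagging the post describes: subject, syllabus, topic, grade scheme and student age.

```python
from dataclasses import dataclass

# Hypothetical shape of one equivalence record: a teacher-entered grade
# paired with the same student's quiz score. Field names are illustrative,
# not Yacapaca's real schema.
@dataclass(frozen=True)
class Equivalence:
    student_id: str
    quiz_id: str
    quiz_percentage: float   # score the student achieved on the quiz, 0-100
    teacher_grade: str       # grade the teacher entered via a QA or OA
    subject: str
    syllabus: str
    topic: str
    grade_scheme: str
    student_age: int

# One equivalence on its own is not much use...
eq = Equivalence("s-001", "q-042", 72.0, "6", "Computing",
                 "GCSE", "Networks", "GCSE 9-1", 15)

# ...but tens of thousands, grouped by quiz, become calibration data.
```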
On a nightly basis, we use this data to perform a statistical regression analysis, in multiple steps, to derive the difficulty level of each quiz. Having done that, we can report results reliably as grades in your preferred grade scheme.
To reiterate the principle: we do not rely on any one expert to decide the difficulty level of a quiz. Instead, we use consensus calibration, correlating the marking decisions of thousands of teachers, about tens of thousands of students, with each individual quiz.
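To make the principle concrete, here is a deliberately simplified, single-step sketch of consensus calibration: teacher-entered grades (converted to numbers) are regressed against the percentage scores the same students achieved on one quiz, yielding a percentage-to-grade mapping for that quiz. The real nightly process is multi-step and far more sophisticated; every name and number below is illustrative only.

```python
def calibrate(percentages, grades):
    """Least-squares fit of grade ~ slope * percentage + intercept.

    A toy stand-in for the multi-step nightly regression: numeric
    teacher-entered grades are regressed against the quiz percentages
    the same students scored, calibrating that one quiz.
    """
    n = len(percentages)
    mean_x = sum(percentages) / n
    mean_y = sum(grades) / n
    sxx = sum((x - mean_x) ** 2 for x in percentages)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(percentages, grades))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative equivalences for one quiz: percentage scores alongside
# teacher-entered grades mapped to numbers (e.g. a 9-1 style scale).
scores = [35, 50, 62, 71, 80, 92]
grades = [4, 5, 6, 7, 8, 9]

slope, intercept = calibrate(scores, grades)

def predict_grade(pct):
    """Report a future percentage on this quiz as a (numeric) grade."""
    return round(slope * pct + intercept)
```

Note the direction of use: the regression is fitted once from the pooled teacher judgements, then applied to every subsequent result on that quiz, which is what lets a raw percentage be reported in your preferred grade scheme.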
So as an author, what should you do?
To ensure that your quizzes are accurately levelled, do at least two of these, and preferably all three.
- First, calibrate your own student sets by adding some OAs. Set regular (non-quiz) homeworks as QAs, and grade them online. If permitted by your school, follow the trend and switch to using the Yacapaca Gradebook as your standard gradebook. This way, you provide more calibration data about your students for Yacapaca to use when levelling your new quizzes.
- Second, from time to time, assign quizzes you did not author to your students. As those quizzes are already calibrated, this gives you an external reference to support your own levelling judgements.
- Third, encourage other teachers, especially from other schools, to assign the quizzes you have created. This helps you tap into the consensus of the entire profession – or at least the subset of it who use Yacapaca, which is pretty large now. The best way to encourage use is to package your quizzes up into a well-organised course.
I am fairly certain that Yacapaca is unique in providing consensus-based levelling for such an enormous range of assessment content – including assessments that you yourself have written. The great thing about it is that the more you use it, the more accurate it becomes, and the better able you are to demonstrate progression from the Gradebook.