Can you baseline Computer Science?

Assessing Computer Science is a very different challenge from assessing ICT, so a different approach is needed. With ICT, there is standard content that we can expect all students to have covered. Computer Science is far more varied. For example, we can expect every student to have studied at least one programming language – but which one? For this reason, there is no possibility of a single CS baseline test, nor should CS content be included in an ICT baseline test.

Our approach instead has been to create an entire Computer Science section for Yacapaca. This contains a rich mix of content, the results of which can all be reported as National Curriculum levels. Because there are no defined level statements for Computer Science, we use consensus grading: with enough data (more than 50 questions), Yacapaca will tell you the level a notional average teacher would assign to your student.
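Purely as an illustration of the idea (a hedged sketch, not Yacapaca's actual algorithm; every name and number of teachers here is made up), consensus grading can be thought of as pooling several teachers' level judgments per question, then taking the median level of the questions a student answers correctly:

```python
from statistics import median

MIN_QUESTIONS = 50  # the "enough data" threshold mentioned above

def consensus_level(results):
    """Estimate the NC level a notional average teacher would assign.

    results: list of (teacher_levels, answered_correctly) pairs, where
    teacher_levels is the list of levels a sample of teachers assigned
    to that question.
    """
    if len(results) < MIN_QUESTIONS:
        return None  # too little data for a stable estimate
    # Consensus level of each correctly answered question = median of
    # the teachers' judgments for that question.
    correct_levels = [median(levels) for levels, correct in results if correct]
    if not correct_levels:
        return None
    # The student's estimate is the median of those consensus levels.
    return round(median(correct_levels), 1)

# Example: 60 questions, each tagged with three teachers' judgments.
results = [([4, 5, 4], True), ([5, 5, 6], False)] * 30
print(consensus_level(results))  # -> 4.0
```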

Classroom observation at City Academy Norwich


I visited City Academy Norwich (CAN) on Friday to observe some lessons with Matt Wells and Jez Thompson. It was a particular pleasure to meet Jez; I have known him for donkey’s years, but we had never actually met in person.

What struck me about the three lessons I observed was that there was no chalk and talk whatsoever. None. I don’t know if this is a school policy, but if it is then I approve. The research consistently shows that talking at kids doesn’t work; I suspect it actively inhibits their ability to learn by turning them off the whole experience. What I saw at CAN was kids on task, all the time. Some of that was Yacapaca; some was other activities.

This was also my first opportunity to observe a Year 9 Computing lesson. I was amazed and delighted at the range of tasks the students were undertaking, and the level of engagement the tasks generated. Jez had each student working on their own task, individually or in pairs, on a carousel system. I had not seen this done before in such a fine-grained way, and it was extremely effective. Students were focused much more on their own tasks than on what their peers were doing, and as a result were almost completely self-managed.

My actual aim in going was to test out our (very) experimental “Tortoise and Hare” quiz template. And I’m glad I did: results were fairly mixed, and I think I would have missed the nuances had I relied only on third-party reports and log data. T&H has potential, but it’s a long way from ready. We shall iterate, and test again. I suspect there will be several rounds of testing before I am happy with it, so don’t expect to see it in production any time soon.