September’s competition was held over three days this week, and was open to all experienced Yacapaca users. There were three prizes of subscription upgrades (see below). We had a better-than-expected field:
- 107 signups
- 66 entries
- 553 comparisons with comments
- 159 votes
The question I posed was: “Please explain the main problem that Yacapaca solves for you, that you have not been able to solve (or to solve as easily) in other ways.”
There were three winners, each chosen by your votes. All I did was read the results off my admin screen.
Stage 1 winner: Leah Class from Endeavour High School
Leah’s answer had the greatest resonance for the most people. Note it wasn’t the “best” but rather the one most participants agreed with. Here it is:
Yacapaca enables me to provide end of unit tests for my students without having to create them from scratch myself. I find the quizzes very interactive and like the colourful graphics, which are very student friendly. I also like the way different quizzes are contributed by teachers around the country, so there is a variety and different approach to the same subject. I really like the markbooks, which enable me to see what levels my students have achieved; I can then feed this information into my school mark books and show that I have created assessment opportunities for my students. I have not yet created my own end of unit tests, but I should be inspired to do this myself. Unfortunately time is an issue, as always in teaching, and in fact after a very long and busy day and series of meetings I am now online doing this and some preparation for my lessons tomorrow.
Stage 2 winner: Hannah Bowen from The Appleton School
Hannah’s judgements when comparing answers were closest to the overall consensus. You may wonder why there is a prize for that. The reason goes back to why I ran the competition in the first place. I wanted to know which aspects of Yacapaca constitute the core of what Yacapaca “is” in the minds of its users. I needed to know where the consensus lies, whereas other methods are biased towards the opinions of the most eloquent or persistent commenters. Putting a prize on this stage gives everyone an incentive to judge towards the consensus.
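To make the idea concrete, here is a minimal sketch (in Python) of one way “closeness to consensus” could be scored from pairwise comparisons. It is an illustration only, not Yacapaca’s actual algorithm; the judges, answers and verdicts in it are invented.

```python
# Illustration only, not Yacapaca's actual algorithm: judges, answers and
# scoring rule are all invented. Each judge is scored by how often their
# pairwise verdict matches the majority verdict for the same pair of answers.
from collections import Counter, defaultdict

# (judge, answer_a, answer_b, winner) for each comparison made.
comparisons = [
    ("judge1", "A", "B", "A"),
    ("judge2", "A", "B", "A"),
    ("judge3", "A", "B", "B"),
    ("judge1", "B", "C", "C"),
    ("judge2", "B", "C", "B"),
    ("judge3", "B", "C", "C"),
]

# Consensus winner for each unordered pair of answers.
votes = defaultdict(Counter)
for judge, a, b, winner in comparisons:
    votes[frozenset((a, b))][winner] += 1
consensus = {pair: tally.most_common(1)[0][0] for pair, tally in votes.items()}

# Share of each judge's verdicts that agree with the consensus.
matches, totals = Counter(), Counter()
for judge, a, b, winner in comparisons:
    totals[judge] += 1
    matches[judge] += winner == consensus[frozenset((a, b))]

for judge in totals:
    print(f"{judge}: {matches[judge]}/{totals[judge]} verdicts matched the consensus")
```

Rewarding the judges whose verdicts most often match the majority is what gives everyone the incentive described above.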
Stage 3 winner: Derek Roberts from Downham Market High School
Derek’s comments gathered the most votes by quite a margin. When I use this software as a learning tool (typically for CPD), the comments are more valuable than the initial answers to the question, hence they get a prize. Here is one of Derek’s 10 comments, which I think is fairly representative.
I have selected this answer because the author has mentioned the use of the white board charting feature being used to encourage the use of teams and building a sense of responsibility to others and also that the problem solved was to find a way to test in a fun and interactive way.
So, how will Yacapaca development change as a result of this?
I’m still digesting the enormous amount of material generated, but I thought I’d share the first thing to come out of it, which is the tag cloud.
These are keywords drawn from all the answers. Size indicates frequency and colour indicates importance. You can see instantly what Yacapaca is all about: feedback! I’m already brainstorming different ways for Yacapaca to deliver even better feedback in the future. By the way, I’ll give a special mention to the first person to spot the ringer in this tag cloud and give the correct explanation of how it got there. If you know anything about computational linguistics, it’s fairly easy (hint).
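For the curious, here is a minimal sketch of how keyword counts for a cloud like this can be built: tokenise the answers, drop the stopwords, stem what remains, and count. It is illustrative only, with invented answers and a deliberately crude stemmer rather than the actual process behind the cloud above, but the stemming step is relevant to the hint.

```python
# Illustrative sketch only, not the actual process behind the cloud above.
# Invented answers, a tiny stopword list and a deliberately crude stemmer.
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "to", "of", "i", "for", "it", "is", "in", "my", "me"}

def crude_stem(word):
    # Naive suffix stripping; a stand-in for a real stemmer such as Porter's.
    # The stem it returns need not be a word anyone actually wrote
    # (e.g. "assessing" and "assessed" both collapse to "assess").
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tag_cloud_counts(answers):
    counts = Counter()
    for answer in answers:
        for word in re.findall(r"[a-z]+", answer.lower()):
            if word not in STOPWORDS:
                counts[crude_stem(word)] += 1
    return counts

answers = [  # invented examples, not the competition entries
    "Instant feedback on end of unit tests without marking them myself",
    "Feedback and assessment without creating quizzes from scratch",
]
print(tag_cloud_counts(answers).most_common(5))
```

Run on real answers, counts like these become font sizes; a real stemmer is less crude than the one above, but the same principle applies: the stem that ends up in the cloud need not be a word anyone actually typed.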
5 responses to “September competition results”
Nice competition, good idea. Unfortunately I fell at an early hurdle since I simply cannot complete the task by the daily deadline – an extension to 6.00pm might better enable teachers with a busy timetable to join in.
Um, actually, the deadline was midnight each day. Plenty of time!
Dammit. My mistake then – somewhere along the line I picked up the idea it was a 4.00pm deadline. If I’d realised I was in time I’d have kept going – oh well… next time. It’s a great idea.
ringer = basin – presumably because it (the program) tries to pick up root words (e.g. assess could also be from assessing or assessed) and basing was the actual word used.
Spot on, David! The word it tripped over was ‘baseline’ which I presume was not in the corpus at all. Or perhaps the algorithm just picked up my ambivalent feelings about those ICT Baseline tests.