So, four weeks into another phase of home learning, and I'm pleased to write that not much has changed about my practice during lessons, except perhaps the practical aspect. Every lesson still features a retrieval practice task. Pupils still need to think hard every lesson to recall knowledge. And pupils are still applying this knowledge creatively to their own personal projects. In Higher Photography, this has involved quite a few multiple choice tests, partly because of the ease of setting this kind of task remotely and partly because MCQ is the format of summative assessment used by the SQA.
Kahoot, Mentimeter, Quizizz and Microsoft Forms are just some of the ways in which teachers are using MCQ to test recall remotely. For me, Forms has been a real game changer: self-marking, instant feedback for learners on why their response is right or wrong, instant grades imported into the class grade book, the ability to integrate images, and more. Because it's been my go-to, this week I started thinking about how effective this strategy really is in terms of retrieval practice. I was inspired to do some research, and it got me thinking about how we as teachers can make the most of this low stakes, high impact testing. Is using multiple choice quizzing really promoting true recall if we are providing the answers for our students? How likely is it that learners just guess the correct answer? Are typed short answer quizzes more effective in forcing actual retrieval? And if there is evidence to support MCQ, how can we use it to promote deeper thinking amongst students? I am a huge fan of MCQ, but I have definitely become more knowledgeable about how best to use them. I hope this post might be useful to consider as we make up our next multiple choice quiz for students.
There has been a great deal of research into the effectiveness of retrieval practice and the testing effect. Even if you are unaware of the term retrieval practice, you will almost certainly have used quizzes or low stakes testing in your classroom. Both are considered highly successful ways for students to move knowledge from short term to long term memory by working hard to recall the information. This week marked the launch of @KateJones_teach's second book on retrieval practice, which I was so honoured to have been asked to contribute to. Like Kate, I'm so glad that the academic research on this is finally becoming more readily available to teachers in wonderful books such as hers. So, delving further into the strategy I've embedded with pupils, I was keen to understand if there were ways I could improve.
Firstly, I've discovered that part of the reason retrieval practice is so popular with students in my class is the desirable difficulty of the task. The Goldilocks effect: not too easy, not too hard. It's achievable for all of them because it covers key knowledge we have learned together, yet tricky enough to engage and challenge them. When developing questions, I try really hard to consider common misconceptions, challenge common errors and test easily confused knowledge. If it were too easy, there would be no sense of satisfaction. The average score in my recap quizzes is around 60-80%, with some pupils consistently achieving higher. For me, this is important because if all pupils were achieving 100%, my quizzing would be too easy; pupils would soon lose interest because there's no challenge, no sense of accomplishment when they succeed. Equally, if my quizzes were too hard, pupils would lose motivation and switch off because success feels out of reach. Balance is key.
The argument that multiple-choice tests rely primarily upon recognition processes seems, on the surface at least, to be a reasonable critique of multiple-choice testing. Multiple-choice questions do, in fact, expose the correct answer to the learner by presenting it as one of the alternatives, which could obviate the need for retrieval. Not all multiple-choice questions, however, can be answered through recognition processes alone.
Jeri L. Little & Elizabeth Ligon Bjork, "Optimizing multiple-choice tests as tools for learning"
This is where some understanding of how to best compose multiple choice questions is really useful.
Consider the answers to this MCQ.
Which camera control is used to affect depth of field?
A) sausages B) aperture C) sunshine D) chocolate
Now, hopefully this question quite clearly illustrates why MCQ might get a bad press. I'm pretty sure most teachers wouldn't quiz in this way, but it illustrates the point. This particular question is not a good test of recall. There is only one plausible answer alongside several obvious red herrings, so students can simply guess the one term they recognise as having something to do with photography. For me, this is not effective retrieval practice. Instead, using plausible answers forces students to consider their schema around certain knowledge in order to choose the correct answer. Consider this as an alternative:
Which camera control is used to affect depth of field?
A) shutter speed B) aperture C) ISO D) exposure
For me, this is far more effective. All the answers contain knowledge we have covered in relation to photography. Students should recognise all of the terms, so they need to be clear about which one is correct. They need to understand the information in order to select the right answer. Yes, they could still guess, but the question hopefully forces them to think in more depth.
Finally, some of the best multiple choice questions I've used are those whose answers really force learners to think, to recall and to join the dots. By using common errors, misconceptions and easily confused knowledge, I can, as a teacher, really drill down into the learning of my pupils and work out how much they know. Consider this as a question:
A photographer increases the size of the aperture to change the depth of field. Which statement is correct?
A) the photographer has used a lower f number to create a wide depth of field
B) the photographer has used a lower f number to create a shallow depth of field
C) the photographer has used a higher f number to create a wide depth of field
D) the photographer has used a higher f number to create a shallow depth of field.
Without getting into too much of the detail of photography, the correct answer is B). However, to get this right students need to demonstrate that they first know that a lower f number gives a larger aperture, and secondly that the lower f number also gives a shallower depth of field. By using knowledge which they often mix up, I am able to really force them to think. Every one of the possible answers is plausible and uses correct photographic terminology. What's more, there is a deeper level of understanding required in order to reach the correct answer. Yes, these types of question assume that learners know aperture is related to depth of field, but I would hopefully have used an earlier question to determine who knew that. I might use a mix of questions, layering the complexity to really dig into the depth of learning.
In which case, I think it's also important that as teachers we are able to analyse the results. Too often we may be lured by the self-marking aspect of a quizzing tool, but it's valuable for us to go through pupil responses to get a handle on where errors have been made, so that we can identify next steps and clarify things for individuals. Purely recording scores is unlikely to move learners forward, whereas understanding the areas which need more work will hopefully continue to build on the success of retrieval practice for learners.
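If your quizzing tool lets you export responses as a spreadsheet (Microsoft Forms, for example, can export results to Excel, which saves easily as CSV), a few lines of Python can tally which questions, and which distractors, are catching pupils out. This is only a sketch under my own assumptions: the column names ("pupil", "question", "answer", "correct") are hypothetical, so rename them to match whatever your export actually contains.

```python
import csv
from collections import Counter

def error_counts(path):
    """Tally wrong answers per question from an exported quiz CSV.

    Assumed (hypothetical) columns: pupil, question, answer, correct.
    Returns two things: how many pupils got each question wrong, and
    which incorrect options they picked.
    """
    wrong = Counter()   # question -> number of incorrect responses
    chosen = {}         # question -> Counter of distractors picked
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            q = row["question"]
            chosen.setdefault(q, Counter())
            if row["answer"] != row["correct"]:
                wrong[q] += 1
                chosen[q][row["answer"]] += 1
    return wrong, chosen
```

Printing the per-question distractor counts quickly shows whether pupils are confusing, say, aperture with shutter speed, which tells you exactly which misconception to reteach rather than just recording a score.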
I’m sure this isn’t rocket science for many educators out there but I hope this post has been useful in highlighting some ways I’ve found it easy to improve this strategy for young people and my formative assessment of their learning.
Life is like a multiple choice question. Sometimes it’s the choices that confuse you, not the question