Featured article: Andrew C. Butler, “Multiple-Choice Testing in Education: Are the Best Practices for Assessment Also Good for Learning?”, Journal of Applied Research in Memory and Cognition, 2018.

Qstream, at its core, uses questions to measure each learner’s knowledge and to generate long-term retention and deeper understanding, a phenomenon known as the ‘testing effect’ or ‘retrieval practice’. Do the best practices of question construction for assessment align with the best practices for learning? In his excellent article in the Journal of Applied Research in Memory and Cognition, Dr. Andrew C. Butler explores this question and explains how both goals can be achieved through several best practices.

Best Practice #1: Avoid Using Complex Item Types Or Answering Procedures

Many educators use complex multiple-choice (CMC) questions in an attempt to measure higher-order thinking, but the end result is usually a muddle. Learners tend to use strategic guessing and are less likely to engage in the desirable cognitive processing intended by the question creator. One example of a CMC format requires responses of the form ‘A and B but not C’. The research shows that CMC questions have lower reliability, do not offer any greater validity, and are not inherently better than simpler question types at measuring higher-order thinking.

Best Practice #2: Create Items That Require The Engagement Of Specific Cognitive Processes

On my medical board exams, I was presented with multiple-choice questions that were simple in format but engaged higher-order thinking. For example, “Given that the patient has these symptoms and these lab test results, how would you treat them?” To answer this question, I needed to understand the symptoms, interpret the lab results, generate a diagnosis, and then know the best treatment for that condition. It was not easy! As Dr. Butler states, “The cognitive processes induced by each multiple-choice item should be carefully designed… [to] align with the type of cognitive processing that will be required for future performance.”

Best Practice #3: Avoid Using ‘None-Of-The-Above’ And ‘All-Of-The-Above’ As Response Options

These response types turn a simple multiple-choice item into a CMC item that can be gamed by the learner. Finding one wrong answer among the choices eliminates ‘all-of-the-above’ (AOTA) as a correct response; similarly, finding one correct answer among the choices eliminates ‘none-of-the-above’ (NOTA). Qstream’s multiple-correct answer format avoids this pitfall by letting learners submit multiple responses to a single question, which forces them to consider each answer choice rather than jump to a single answer.

Best Practice #4: Use Three Plausible Response Options

This one was news to me. In my research studies at Harvard, I usually used questions with four response options (one correct answer and three incorrect but plausible distractors). While this four-option format is still very effective, Dr. Butler cites research showing that three response options (the correct answer and two lures) provide the best balance between psychometric quality and efficiency of administration. Importantly, question designers should make sure that each response option is plausible, not an easily dismissed alternative that fails to generate meaningful cognitive processing by the learner.

Best Practice #5: Create Multiple-Choice Tests That Are Challenging But Not Too Difficult

Dr. Butler cites research from the 1950s showing that the ideal difficulty level for measuring learning (discrimination) is a bit higher than the midpoint between chance and perfect performance. Thus, for a three-alternative question, chance performance is 33%, perfect performance is 100%, the midpoint is about 67%, and the ideal difficulty is about 77% of learners answering correctly. This target difficulty also works well for learning. In my studies, I tried to construct questions that 60-80% of learners would answer correctly on first presentation. I found that this was high enough for students to feel they were performing well, yet it left a significant amount of material on which Qstream could improve their performance.
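To make the arithmetic explicit (this generalization is mine, not a formula from Dr. Butler’s article), for a question with k equally plausible response options:

chance = 1/k        midpoint = (chance + 1) / 2

For k = 3, chance ≈ 33% and the midpoint ≈ (0.33 + 1) / 2 ≈ 67%; the ideal difficulty Dr. Butler cites sits a bit above that midpoint, at about 77%. For a four-option question, the same arithmetic puts the midpoint at about 63%.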

Bonus Best Practice: Provide Feedback

This is a core component of Qstream’s effectiveness and popularity. Qstream questions are highly engaging because learners want to find out the answer to the question and, specifically, to see whether they answered it correctly. The feedback provided in Qstream is a very tangible and effective reward. Beyond this, the feedback also plays a major role in expanding the learner’s knowledge in the domain of that question. Equally important, feedback prevents learners from embedding incorrect answers in their long-term memory.

Looking for more knowledge reinforcement best practices? I encourage you to read our white paper, Scientifically Proven Strategies for Building Long-Term Knowledge Retention.
