Are all exam questions created equal? Not really: different types of questions test different levels of understanding. In the UGME program, we use a variety of exam questions to assess student learning, broadly classified as multiple-choice questions (MCQs) and short-answer questions (SAQs). Within these broad categories, however, is a range of question types designed to test different levels of cognition. We use these different types of questions at different points, both within courses and across the program.

Based on Bloom’s Taxonomy

Bloom’s taxonomy is a classification system used to define and distinguish different levels of human cognition: thinking, learning, and understanding. The taxonomy was first developed in the 1950s by Benjamin Bloom and later revised by a group of his former students and colleagues. The original version describes six levels of cognitive behaviour that characterize the thinking skills and abilities of learners: knowledge, comprehension, application, analysis, synthesis, and evaluation. Educators have used Bloom’s taxonomy to inform or guide the development of assessments, such as the construction of MCQs. MCQs are widely used for measuring knowledge, comprehension, and application of learning outcomes. Our curriculum uses MCQs in different assessment formats and for different purposes, as described below.

 


You may hear several acronyms and terms related to assessment in our UGME program: RATs, MCQs, SAQs, and Key Features. Here is a brief description of each:

Readiness Assessment Tests (RATs)

RATs used in our curriculum typically consist of 10-15 multiple-choice questions linked directly to the assigned readings (and/or prior lectures). A RAT focuses on the foundational concepts that will be important for the small group learning (SGL) activities that follow. MCQs on a RAT test for knowledge (i.e., recall of information) rather than application of knowledge. Examples of verbs used in a question stem to test knowledge include: define, list, label, recall, select, name, outline, or match.

Multiple-choice questions (MCQs) on midterms and finals

An MCQ has three components: a stem, a lead-in question, and a set of options consisting of one correct answer and typically three distractors (wrong answers). The stem should be directly linked to a learning objective assigned to the course. MCQs used on midterm and final exams often test for comprehension and application of knowledge, going beyond the simple recall typically tested by MCQs on RATs. Some multiple-choice questions may assess simple recall, depending on the learning objectives of the course, but these should be kept to a minimum. Verbs used in a question stem to test comprehension include: predict, estimate, explain, indicate, distinguish, or give examples. Verbs that test application include prompts such as: solve, compute, illustrate, interpret, demonstrate, or compare.

Short-answer Questions (SAQs)

SAQs are typically composed of a case scenario followed by a prompt that requires a written answer, varying in length from one or two words to several sentences. SAQs often test the higher cognitive skills in Bloom’s taxonomy, and final examinations in our curriculum are typically composed of a mix of MCQs and SAQs. Verbs in a question stem that test analysis include: explain, arrange, select, infer, calculate, or distinguish. Verbs such as develop, design, plan, devise, formulate, or generalize test for synthesis, whereas verbs that test evaluation include: argue, assess, estimate, justify, predict, compare, conclude, or defend.

Key Features Questions

Key-feature problems are used by the Medical Council of Canada to assess clinical decision-making skills on the MCCQE Part 1. A key feature is defined as a critical step in the resolution of a clinical problem; key-feature problems consist of a clinical case scenario usually followed by two or three questions, each testing one or more of those critical steps. While knowledge is an important element of effective problem solving, the challenge posed by key-feature problems is the application of knowledge to guide clinical decision-making. For each question, the instructions may require selecting whatever number of responses is appropriate to the clinical task being assessed, and there may be more than one response in the answer key. The development of key-feature problems for clinical decision-making is being piloted in the Clerkship curriculum courses this year.

How do we administer our tests?

Queen’s Undergraduate Medical Education has moved to an electronic exam system called ExamSoft for the administration of midterm and final exams in the preclinical and clerkship curricular courses. Medical students no longer write exams on paper; they complete them on laptops instead. This greatly facilitates marking, and it means we are no longer managing huge volumes of paper or deciphering student handwriting.

References:

  1. http://www.nmmu.ac.za/cyberhunts/bloom.htm
  2. https://www.utexas.edu/academic/ctl/assessment/iar/students/plan/method/exams-mchoice-bloom.php
  3. http://www.profoundlearning.com/index.html
  4. Page, G., Bordage, G., & Allen, T. (1995). Developing key-feature problems and examinations to assess clinical decision-making skills. Academic Medicine, 70(3).
  5. mcc.ca/wp-content/uploads/CDM-Guidelines.pdf
  6. Laura April McEwen, OHSE 2011, MCQ Checklist.