Everything you need to know about exam question types in our curriculum!

Are all exam questions created equal? Not really—different types of questions test different levels of understanding. In the UGME program, we use a variety of exam questions to assess student learning—broadly classified as multiple-choice questions (MCQs) and short-answer questions (SAQs). But within these broad categories is a range of question types designed to test different levels of cognition. We use these different types of questions at different points, both within courses and across the program.

Based on Bloom’s Taxonomy

Bloom’s taxonomy is a classification system used to define and distinguish different levels of human cognition—thinking, learning, and understanding. The taxonomy was first developed in the 1950s by Benjamin Bloom and later revised by other educators. The original version describes six levels of cognitive behaviour that characterize the thinking skills and abilities of learners: knowledge, comprehension, application, analysis, synthesis, and evaluation. Educators have used Bloom’s taxonomy to inform or guide the development of assessments, such as the construction of MCQs. MCQs are widely used for measuring knowledge, comprehension, and application of learning outcomes. Our curriculum uses MCQs in different assessment formats and for different purposes, as described below.


You may hear acronyms and terms about assessment in our UGME program: RATs, MCQs, SAQs, Key Features. Here is a brief description of each:

Readiness Assessment Tests (RATs)

RATs used in our curriculum typically consist of 10-15 multiple-choice questions linked directly to the assigned readings (and/or prior lectures). A RAT focuses on foundational concepts that will be important for the small-group learning (SGL) activities that follow. MCQs on a RAT test mainly for knowledge (i.e., recall of information) and less for application of knowledge. Examples of verbs used in a question stem to test knowledge include: define, list, label, recall, select, name, outline, or match.

Multiple-choice questions (MCQs) on midterms and finals

An MCQ has three components: the stem, the lead-in question, and the options, which consist of one correct answer and typically three distractors (wrong answers). The stem should be directly linked to a learning objective assigned to the course. MCQs used on midterm and final exams often test for comprehension and application of knowledge, going beyond the recall of information typically tested by MCQs on RATs. Some multiple-choice questions may assess simple recall, depending on the learning objectives of the course, but these should be kept to a minimum. Verbs used in the question stem to test comprehension include: predict, estimate, explain, indicate, distinguish, or give examples. Verbs that test application include prompts such as: solve, compute, illustrate, interpret, demonstrate, or compare.

Short-answer Questions (SAQs)

SAQs are typically composed of a case scenario followed by a prompt that requires a written answer, varying in length from one or two words to several sentences. SAQs often test the higher cognitive skills in Bloom’s taxonomy. Final examinations in our curriculum are typically composed of a mix of MCQs and SAQs. To test analysis, verbs in the question stem include: explain, arrange, select, infer, calculate, or distinguish. Verbs such as develop, design, plan, devise, formulate, or generalize test for synthesis, whereas verbs used to test evaluation include: argue, assess, estimate, justify, predict, compare, conclude, or defend.

Key Features Questions

Key-feature problems are used by the Medical Council of Canada to assess clinical decision-making skills on the MCCQE Part I. A key feature is defined as a critical step in the resolution of a clinical problem, and key-feature problems consist of a clinical case scenario usually followed by two or three questions, each testing one or more key features and focusing only on those critical steps. While knowledge is important for effective problem solving, the challenge posed by key-feature problems is the application of knowledge to guide clinical decision-making. For each question, the instructions may require selection of whatever number of responses is appropriate to the clinical task being assessed, and there may be more than one response in the answer key. The development of key-feature problems for clinical decision-making is being piloted in clerkship courses this year.

How do we administer our tests?

Queen’s Undergraduate Medical Education has moved to an electronic exam system called ExamSoft for the administration of midterm and final exams in preclinical and clerkship courses. Medical students no longer write exams on paper; rather, they write them on laptops. This greatly facilitates the marking of exams, and it means we are no longer managing huge volumes of paper or deciphering student handwriting.

References:

  1. http://www.nmmu.ac.za/cyberhunts/bloom.htm
  2. https://www.utexas.edu/academic/ctl/assessment/iar/students/plan/method/exams-mchoice-bloom.php
  3. http://www.profoundlearning.com/index.html
  4. Page, G., Bordage, G., & Allen, T. (1995). Developing key-feature problems and examinations to assess clinical decision-making skills. Academic Medicine, 70(3).
  5. mcc.ca/wp-content/uploads/CDM-Guidelines.pdf
  6. Laura April McEwen, OHSE 2011, MCQ Checklist


What do p- and R-values mean anyhow? Understanding how to interpret multiple-choice test scores using statistics.

Have you ever wondered whether or not your multiple-choice questions (MCQs) are too easy? The answer can be found in the p-value, or item difficulty: the percentage of students who answered the question correctly. The difficulty of an MCQ can range from 0.00 to 1.00; the higher the p-value, the easier the question. What we should be concerned about are high-difficulty questions, those with p-values less than 0.30.
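
To make the calculation concrete, here is a minimal sketch in Python with made-up numbers (not data from our exams): the p-value is simply the proportion of students who chose the keyed answer.

    # Item difficulty (p-value): the proportion of students answering correctly.
    # The counts below are invented for illustration.
    n_correct = 112   # students who chose the keyed (correct) answer
    n_total = 140     # students who wrote the exam

    p_value = n_correct / n_total
    print(f"p = {p_value:.2f}")  # p = 0.80 -> an easy item; p < 0.30 flags a hard one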

Have you ever wondered which questions tripped up students who otherwise performed well on the test overall? The R-value, or item discrimination, looks at the relationship between how well students performed on a question and their total score. Item discrimination indicates how well a question differentiates between students who know the tested material and those who do not. The higher the R-value, the more discriminating the test question. We should review questions with discrimination values (R-values) near or below 0.3, and a negative R-value is a particular red flag: it means students who did poorly on the test overall did better on that question than students who performed well.
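
For those curious about the mechanics, here is a minimal sketch in Python (not part of ExamSoft or any tool we use) that computes both statistics from a 0/1 scored response matrix. It defines the R-value as the point-biserial correlation between each item and the rest-of-test score, which is one common convention in item-analysis software; the data are invented.

    # Toy item analysis: item difficulty (p) and discrimination (R).
    from statistics import mean, pstdev

    def item_difficulty(scores, item):
        """Proportion of students answering `item` correctly (0.00 to 1.00)."""
        return mean(student[item] for student in scores)

    def item_discrimination(scores, item):
        """Point-biserial correlation between an item and the rest of the test."""
        item_scores = [student[item] for student in scores]
        rest_scores = [sum(student) - student[item] for student in scores]
        mi, mr = mean(item_scores), mean(rest_scores)
        si, sr = pstdev(item_scores), pstdev(rest_scores)
        if si == 0 or sr == 0:  # no variation, e.g., every student was correct
            return 0.0
        cov = mean((i - mi) * (r - mr) for i, r in zip(item_scores, rest_scores))
        return cov / (si * sr)

    # Invented data: four students by three items (1 = correct, 0 = incorrect).
    scores = [
        [1, 1, 0],
        [1, 1, 1],
        [1, 0, 0],
        [0, 0, 1],
    ]
    for q in range(3):
        print(f"Q{q + 1}: p = {item_difficulty(scores, q):.2f}, "
              f"R = {item_discrimination(scores, q):+.2f}")

Running this flags Q3 with a negative R-value (about -0.30): the weakest student got it right while one of the strongest got it wrong, exactly the pattern described above.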

Did you Know?

Multiple-choice questions that use words in the stem such as best, most, first, or most correct require higher-level thinking but can confuse students when they are ambiguously worded. Our students have struggled lately with ambiguity in the wording of MCQs on RATs and exams, such as “Which is the most likely….”. They assume “most likely” means “most common”, whereas the most likely answer could be an uncommon situation. It is important to word the question clearly so that students are not confused. So, for example, the question could state, “In light of the clinical information provided above, which diagnosis would you make?”

You can also ask students about “most common”, “most concerning”, or “what is the first test you would perform”, etc., but it is always good to anchor these stems by referring to the data presented previously. The key is then to require them to choose, evaluate, interpret, judge, infer from data, solve problems, and apply principles.

Did you Know?

The Student Assessment Committee has posted several articles, checklists, and PowerPoint slides to assist you with writing multiple-choice questions.

For more guidance on writing high-quality multiple-choice questions, refer to MCQ Guidelines and Writing MCQs in the School of Medicine Faculty and Staff Resources at:

http://meds.queensu.ca/home/faculty_staff_resources/assessment_resources


References

http://ctl.utexas.edu/programs-and-services/scanning/interpreting-results/

http://www.washington.edu/oea/services/scanning_scoring/scoring/item_analysis.html

Queen’s School of Medicine: Faculty and Staff Resources.
http://meds.queensu.ca/home/faculty_staff_resources/assessment_resources
