Clinical Problem Solving: A student and a teacher talk about lessons learned from an online course
By Heather Murray, MD, and Eve Purdy, MD Candidate, 2015
For many medical students, the process of turning a presenting complaint into an appropriate and focused differential diagnosis seems like a big black box. For clinicians who do this many times every day, the process is unconscious, and it is hard to explain to medical student learners how to break it down. Both students and teachers sometimes struggle with how to transition early medical learners into competent diagnosticians.
So, when a clinician (Heather Murray) and a second-year medical student (Eve Purdy) independently stumbled across the link to a Massive Open Online Course (MOOC) on Clinical Problem Solving offered through Coursera, both of us jumped at the opportunity to learn more about diagnostic reasoning. Eve registered with the hope of shedding light on the type of problem solving that she might be faced with in clerkship, while Dr. Murray registered with the intention of improving her teaching around diagnostic reasoning for students.
Though it is difficult to summarize the six-week course in one blog post, there were a few takeaways that we will outline here. These key points might help medical students improve their clinical reasoning, and the same tips might help teachers clarify the process for learners. Much of this approach to clinical reasoning comes from the NEJM article “Educational Strategies to Promote Clinical Reasoning” by Judith Bowen (2006).
1. Organize the way you learn about diseases using Disease Illness Scripts
If you have a structured approach to the way you learn about diseases, then you will be more efficient at recalling that information and comparing diseases effectively. One way to organize information is into “Disease Illness Scripts”. This requires organizing information about the conditions into four broad categories.
- Epidemiology: Who gets the disease? What are the risk factors? Making a mental picture of who you would expect to see with the disease can help.
- Time course: Over what time period does the condition present (acute, chronic, or acute on chronic)? A good way to think about this is where you would expect to see the patient (ER vs. walk-in clinic vs. family doctor).
- Clinical presentation: What are the symptoms and signs?
  - Key features are signs and symptoms that are essential to the diagnosis.
  - Differentiating features are signs and symptoms that make this disease different from diagnoses that present similarly.
  - Excluding features are signs and symptoms that, if present, exclude the disease.
- Mechanism: Describe and understand the underlying disease mechanism.
2. Organize the way you think about patients using Patient Illness Scripts
When thinking about patients try to frame their presentation using the same structure as the disease illness scripts.
- What important risk factors does the patient have? Consider age, relevant medical history, and presentation-specific risk factors (e.g., recent transcontinental air travel in a patient with shortness of breath).
- How long has the patient had the symptoms? Have they changed?
- What symptoms and clinical signs does the patient have? Try to group as many as possible to shorten the list (e.g., group febrile, tachycardic, and hypotensive as “septic”).
3. Compare disease illness scripts and patient illness scripts to generate a tiered differential diagnosis
Generate a differential diagnosis based on the chief complaint. Using the illness scripts, you can easily compare your understanding of each disease on your differential with your patient. Pay close attention to key features, differentiating features and excluding features. The closer a disease illness script is to the patient illness script, the higher that disease should end up on your differential. Your final differential has three tiers:
Tier 1: Diseases that are most likely belong here. The epidemiology, time course and clinical presentation are concordant with the patient illness script.
- Tier 1e: Diseases on tier 1e are diagnoses that may be less likely than tier 1 but if missed will cause immediate and serious harm. These are dangerous diagnoses! The “e” in this tier stands for “emergency” and diseases on this list must be ruled out, even if they are less likely.
Tier 2: Diseases that have some similarities to the patient illness script but aren’t a perfect fit belong here. They are still possible but less likely than tier 1 diagnoses.
Tier 3: Diseases on your original list that do not fit the illness script. They may have excluding features or lack key features.
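For the programmatically inclined, the tiering logic above can be caricatured in a few lines of code. This is a hypothetical sketch, not a clinical tool: the disease name, the feature sets, and the matching rules are all invented for illustration.

```python
# Hypothetical sketch of sorting a differential into tiers by comparing
# a disease illness script against a patient illness script.
# All features and rules below are invented for illustration only.

def assign_tier(disease, patient_findings):
    """Place one disease illness script into a tier of the differential."""
    if disease["excluding"] & patient_findings:
        return "3"           # an excluding feature pushes it to tier 3
    if disease["key"] <= patient_findings:
        return "1"           # all key features present: concordant script
    if disease["dangerous"]:
        return "1e"          # less likely, but must be ruled out
    return "2"               # partial fit: possible but less likely

patient = {"pleuritic chest pain", "tachycardia", "recent long flight"}
pe_script = {"key": {"pleuritic chest pain", "tachycardia"},
             "excluding": set(),
             "dangerous": True}
print(assign_tier(pe_script, patient))  # → 1
```

Real clinical reasoning weighs features rather than simply matching sets, but the sketch captures the ordering idea: excluding features demote, concordant scripts promote, and dangerous diagnoses get flagged even when they are less likely.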
4. Use your tiered differential to determine what tests to order
The tier that a possible diagnosis falls into will help you decide what tests to order to determine the final diagnosis. Think of each tier as a pretest probability.
Tier 1 diagnoses have a “high” pretest probability
- Few or no tests may be needed to convince you that a tier 1 diagnosis is responsible for the patient’s presentation; similarly, you would need very convincing information to take it off your list completely.
- These and Tier 1e diagnoses should drive your initial investigations
Tier 1e diagnoses may have varying pretest probability
- These diseases may or may not be likely, but regardless, tests with high sensitivity are needed to rule them out (remember “SnOUT”: a Sensitive test, when Negative, rules the diagnosis OUT)
Tier 2 diagnoses have a “medium” pretest probability
- Diseases on this tier are tricky. You really have to evaluate the sensitivity, specificity and information given by each test. You may need a few good tests to get from a “medium” pretest probability to a final diagnosis.
Tier 3 diagnoses have a “low” pretest probability
- Even relatively good tests may not move diagnoses from tier 3 up to tier 1, because a positive result may simply be a false positive. Investigating these diagnoses should be a last resort.
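The tier-as-pretest-probability idea maps directly onto Bayesian updating with likelihood ratios: post-test odds = pretest odds × likelihood ratio. The sketch below uses made-up numbers (not real test characteristics) to show why a highly sensitive test that comes back negative can rule out a tier 1e diagnosis even at a modest pretest probability.

```python
# A sketch of pretest-to-post-test probability using likelihood ratios.
# The sensitivity, specificity, and pretest probability below are
# made-up numbers for illustration, not real test characteristics.

def post_test_probability(pretest, sensitivity, specificity, positive):
    """Update a pretest probability given a positive or negative result."""
    if positive:
        lr = sensitivity / (1 - specificity)   # LR+
    else:
        lr = (1 - sensitivity) / specificity   # LR-
    odds = pretest / (1 - pretest) * lr        # post-test odds
    return odds / (1 + odds)                   # back to a probability

# A very sensitive test (0.95) that is negative drives a diagnosis
# from a 10% pretest probability down to about 1% ("SnOUT" in action).
print(round(post_test_probability(0.10, 0.95, 0.50, positive=False), 3))  # → 0.011
```

Note how the same test, when positive, barely moves the needle (its LR+ here is only 1.9), which is exactly why sensitive tests are for ruling out, not ruling in.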
These four tips won’t magically turn a medical student into an expert at clinical reasoning but they might serve to expose the way that experts think. They offer concrete ways for medical students to approach clinical reasoning and a common language for experts to discuss their approach with their learners.
For more information about MOOCs and why explicit discussion of clinical reasoning is important, see these links.
- Many MOOCs are available at Coursera on everything from jazz improvisation to biostatistics to the principles of cardiopulmonary resuscitation.
- “Teaching Clinical Reasoning” by Michelle Lin (@M_Lin) at Academic Life In Emergency Medicine
- “Teaching Clinical Reasoning” by Nadim Lalani (@ERMentor)
- “Thinking about teaching thinking” by Robert Centor (@medrants)
- Lauren Westafer’s (@LWestafer) great medical student thoughts on “Thinking About Thinking” and “Metacognition for the Pragmatist”
- For a review of the Course and thoughts about how it might be applied to Facilitated Group Learning at Queen’s see Eve’s blog posts here and here.
- MOOCs as they relate to Free Open Access Medical Education, “What is a MOOC” by Chris Nickson (@precordialthump)
MedEdPortal: a great resource
MedEdPortal is a repository of online modules and other tools that are rigorously peer-reviewed and suitable for medical and other health professions education. To find out more about this great resource, go to their short video: www.mededportal.org/about
Exam Wrappers: A novel way to review exams
Here’s a new and very interesting tool called “Exam Wrappers” that you can add to your exam review after mid-terms and even finals. It enables students to think more carefully about their studying and learning. It comes from Chapter 2, “Make Exams Worth More Than the Grade,” by Marsha C. Lovett (2013), in the book Using Reflection and Metacognition to Improve Student Learning, edited by Matthew Kaplan et al., Stylus Publishing, Sterling, Virginia.
This is a technique that engages students in reflection, metacognition (learning to learn) and self-regulated learning. Prof. Lovett’s approach was to “build metacognitive practice around exams” and in so doing satisfy the many constraints that challenge metacognition in a curriculum.
What are Exam Wrappers?
Exam wrappers are short activities that direct students to review their performance (and the instructor’s feedback) on an exam, with an eye toward adapting their future learning. Exam wrappers ask students three kinds of questions: How did they prepare for the exam? What kind of errors did they make on the exam? What could they do differently next time?
Prof. Lovett provides examples in Appendices A1 and A2 of her book. Here is a summary of her work on the three questions above:
1. How did you prepare for the exam?
Benefits of this question:
• Challenges students to confront their study process and the implicit or explicit choices they made about their studying
• Prompts students to ask themselves whether they studied enough, or with enough lead time
• Focuses on diverse study methods (reviewing notes, solving practice problems, rereading the textbook), pointing out that there are many approaches they can use next time
2. What kinds of errors did you make?
Benefits of this question:
• Challenges students to move beyond marks: with high marks, they tend to be relieved and move on; with low marks, they may want to leave the “painful event” behind.
• Allows opportunity to analyze in greater depth, e.g. considering level of difficulty of the questions they may have had problems with, looking for patterns in types of errors.
• Gives them a lexicon for self-assessment: e.g. “Did they read the question carefully? Did they have trouble setting up the problem? Did they fail to understand the concepts involved?” Or “Did they make mistakes on the required math, chemistry, physiology, anatomy, etc.?”
3. How should you study for the next exam?
Benefits of this question:
• Ties responses from #1 and 2 together
• “A key goal of the third type of question is to help students see the association between their study choices and their exam performance so they can better predict what study strategies will be effective in the future.” (Lovett, 2013)
• Asks students to attribute their problems from #2 to some specific study errors, or look back at #1 and #2 and ask how they would specifically prepare differently.
Benefits of exam wrappers:
1. Impinge minimally on class time.
2. Are easily completed by students within the time they are willing to invest.
3. Are easily adaptable. (Faculty can add their own concerns in #2, for example, asking about test anxiety or other issues). Can be used with other types of graded assessments.
4. Are repeatable yet flexible. (can add new questions or change questions slightly to keep things “fresh”)
5. Exercise the key metacognitive skills instructors want their students to learn: assess strengths and weaknesses, identify strategies for improvement, and generate adjustments.
Steps for Exam Wrappers
1. Hand back exams.
2. Assign “Wrapper” with questions.
3. Students complete, either during the exam review, for homework, or online (non-graded but required element). Students can also share study techniques with classmates.
4. Instructor collects and reviews wrappers to gain new knowledge of student needs and patterns of behavior (e.g. number of hours spent studying)
5. Hand back wrappers, or remind students about them as they might begin studying for another exam.
6. Repeat for subsequent exams (e.g., you can streamline a wrapper for a later exam).
Thanks to the Tomorrow’s Professor Digest for this idea from Prof. Marsha Lovett.
New resource for electrocardiogram interpretation
Queen’s own Dr. Adrian Baranchuk is the Editor of the newly published Atlas of Advanced Electrocardiogram Interpretation. With Contributing Editors Drs. Hoshiar Abdollah, Damian Redfearn, and Christopher Simpson, one could call this “The Queen’s Atlas of ECG”! The atlas is “a practical guide to recognizing and analysing a wide spectrum of cardiac conditions.” There is free access for the next 25 days at http://asandk.com/ecg/ It’s available for PC and Mac.
The atlas provides:
- Tracings, data, descriptions, interpretations, and tips from the expert contributors
- A straightforward and consistent style that encourages logical, step-wise ECG interpretation, as well as rapid recognition based on the study of repeated patterns
- 100 “real world” tracings with contributions from 100 of the world’s leading cardiologists and electrocardiographers
- Cases divided into 12 chapters covering key disorders and abnormalities
- Bibliographic information to facilitate further reading
- Images from each chapter available to download to your computer for use as teaching and learning aids
A great teaching idea: The 3-2-1 Assignment
Here is a great teaching idea from Dr. Geraldine Van Gyn, professor in the School of Exercise Science at the University of Victoria.
She writes in the e-zine Faculty Focus about the “Purposeful Reading Assignment” or the “3-2-1” assignment.
It goes like this:
Requirement 1: Students read what is assigned, then choose and describe the three most important aspects (concepts, issues, factual information, etc.) of the reading, justifying their choices.
Requirement 2: Students identify two aspects of the reading they don’t understand, and briefly discuss why these confusing aspects interfered with their general understanding of the reading. Although students may identify more than two confusing elements, they must put them in priority order and limit themselves to the two most important ones. Students seldom understand everything in a reading and, knowing that they must complete this part of the assignment, will reflect on their level of understanding of all the reading’s content.
Requirement 3: Students pose a question to the text’s author, the answer to which should go beyond the reading content and does not reflect the areas of confusion in requirement 2. The question reflects students’ curiosity about the topic and reveals what they think are the implications or applications of the reading content. This last requirement lets you know how well students understood the article’s intention.
This would be a great assignment to try in Health Sciences classes. In Meds, perhaps we could modify it so that the students share with their group Requirement 2 and hand in Requirement 3 for feedback. We could use an e-template to complete these and allow faculty to give quick e-feedback.
Prof. Van Gyn reports that in analyzing her mid-term and end-of-term feedback, the purposeful 3-2-1 reading report is the element most frequently cited across all courses as being of greatest benefit to students’ learning (mid-term = 72% of all students, n = 549; end of term = 65% of students, n = 513).
If you’d like to learn more about 3-2-1, just drop me a line.
Van Gyn, Geraldine. It’s The Little Assignment with the Big Impact: Reading, Writing, Critical Reflection, and Meaningful Discussion. Faculty Focus May 6, 2013.
Case Reports Database
Dr. Kanji Nakatsu shared this resource with us recently. It’s a bank of Case Reports, from Biomed Central and supplemented by the Journal of Medical Case Reports. It is searchable and freely accessible. This is a resource for physicians, but may also be used in medical education. “By bringing similar case reports together, through the Cases Database, researchers and clinicians can start to look for new knowledge – new associations, new side effects, new thoughts about disease processes, new understandings about the impact of disease on our patients and our communities.”
Access it by going to
Updated Faculty Resources Community Available
The newly-updated Faculty Resources Community is now available in MEdTech Central. This online resource contains great teaching and assessment ideas, highlights of Curriculum Committee, notes and slides from the retreats, and more.
The resource material available includes refresher instructions on the audio-visual equipment in teaching theatres 132 and 032 (including a map of the numbered student microphones), e-learning resources and links to the small group learning community.
This Faculty Resource Community is open to all faculty at the School of Medicine. For more information, please contact Sheila Pinchin (email@example.com) or Theresa Suart (firstname.lastname@example.org).
Using the IDEAL banks of questions for your assessments
Obtaining IDEAL Consortium Questions
Queen’s School of Medicine has joined the IDEAL Consortium, an international assessment item-sharing collaboration among Schools of Medicine. The Consortium has 27 member schools from 11 countries. Queen’s and UBC are currently the only Canadian members.
The IDEAL Restricted Question Bank contains 20,625 assessment items, including 17,109 MCQs, 539 short-answer questions and 461 OSCE stations. Collectively, members contribute about 4,000 new questions to the restricted and non-restricted question banks annually.
Restricted Bank: Please contact your Curricular Coordinator to request sets of restricted bank questions in your subject area or questions on particular topics. (Zdenka Ko for Year 1, Tara Hartman for Year 2, Jane Gordon for Clerkship Rotations and Candace Trott for “C” courses in clerkship.) Restricted bank questions need to be kept secure, so they can only be used on final examinations. A Word document containing the questions (as well as their answers and “item numbers”) will be couriered to you, or you can request that a secure MEdTech community be created for you to share restricted questions with other faculty members in your course.
To use restricted questions on final exams, simply provide your Curricular Coordinator with the item number of each question and the order in which you would like the questions to appear on the final exam. If you are sharing restricted questions via a secure MEdTech community, you can copy and paste your question selections into a Word document and upload it to the Curriculum Coordinator’s folder in the secure community. It is important that the IDEAL restricted bank questions not be emailed except in password-protected Word files. The restricted questions must not be viewed by students except during the writing of final exams.
You can specify edits to any of the IDEAL items – including OSCE stations. If you edit the items yourself, please highlight your edits so that your Curriculum Coordinator can transfer the edits to the local copy of the IDEAL bank.
The old LXR bank contained many duplicate and triplicate questions, so please let your Curriculum Coordinator know the origin of each exam question (IDEAL? LXR? Original? From a colleague?) We especially need to know, for copyright and item submission reasons, if any questions did not originate at Queen’s. Questions that did not originate at Queen’s will be marked, “Do not submit to IDEAL”, but can be stored in the local copy of the IDEAL bank and used on Queen’s exams.
Unrestricted Bank: Unrestricted bank items can be used in online quizzes, in clicker sessions, on midterms etc. Students can have full access to all unrestricted bank questions. Currently the MEdTech team is creating an interface for the unrestricted bank so that faculty members will have full access to the questions. At present, requests for emailed sets of unrestricted bank question sets can be sent to Catherine Isaacs (email@example.com).
Great Health Care Requires Great Medical Educators
Education is not an industrial process; it is a human one.
In the Dec. 10 edition of The Atlantic Monthly, Richard Gunderman, MD, PhD, examines two sets of components of excellence in medical education: curriculum, instructional methods, and assessment techniques; and the creativity, commitment, and inspiration of medical educators. He focuses on the critical importance of fostering a generation of medical educators through support of medical education. For the article see
What do p and R-values mean anyhow? : Understanding how to interpret multiple-choice test scores using statistics.
Have you ever wondered whether or not your multiple-choice questions (MCQs) are too easy? The answer can be found in the p-value, or item difficulty: the proportion of students who answered the question correctly. The p-value of an MCQ can range from 0.00 to 1.00; the higher the p-value, the easier the question. What we should be concerned with are very difficult questions, those with p-values less than 0.3.
Have you ever wondered which questions tricked students who otherwise performed well on the test overall? The R-value, or item discrimination, looks at the relationship between how well students performed on a question and their total score. Item discrimination indicates how well a question distinguishes students who know the tested material from those who do not. The higher the R-value, the more discriminating the test question. We should review questions with R-values below about 0.3 and remove those with negative values, because a negative R-value means that students who did poorly on the test overall did better on this question than students who performed well.
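To make these definitions concrete, here is a small sketch of how p-values (item difficulty) and R-values (item discrimination, computed here as the correlation between each item and the rest of the test) could be calculated from a 0/1 scoring matrix. The data and the choice of correlation are illustrative assumptions, not necessarily how our exam software computes these statistics.

```python
# Sketch: item difficulty (p) and item discrimination (R) from a 0/1
# scoring matrix. Discrimination is computed as the correlation between
# each item and the rest of the test (total score minus that item);
# exam software may use a slightly different formula.
# The scoring data are invented for illustration.

def item_statistics(scores):
    """scores: one list of 0/1 item scores per student."""
    n = len(scores)
    totals = [sum(row) for row in scores]
    stats = []
    for j in range(len(scores[0])):
        item = [row[j] for row in scores]
        p = sum(item) / n                               # difficulty: proportion correct
        rest = [totals[i] - item[i] for i in range(n)]  # total minus this item
        mi, mr = sum(item) / n, sum(rest) / n
        cov = sum((item[i] - mi) * (rest[i] - mr) for i in range(n)) / n
        vi = sum((x - mi) ** 2 for x in item) / n
        vr = sum((x - mr) ** 2 for x in rest) / n
        r = cov / (vi * vr) ** 0.5 if vi and vr else 0.0
        stats.append((round(p, 2), round(r, 2)))
    return stats

# Four students, three items of increasing difficulty.
exam = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
for p, r in item_statistics(exam):
    print(f"p = {p:.2f}, R = {r:.2f}")
```

In this toy exam every item discriminates positively (higher-scoring students are more likely to get each item right); a negative R on real data would flag a question worth reviewing.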
Did you Know?
Multiple-choice questions that use words in the stem such as best, most, first, or most correct require higher-level thinking but often confuse students because they are ambiguously worded. Our students have struggled lately with ambiguity in the wording of MCQs on RATs and exams, such as “Which is the most likely….”. They assume “most likely” to be “most common”, whereas the most likely answer could be an uncommon situation. It’s important to word the question clearly so that students are not confused. So, for example, the question could state, “In light of the clinical information provided above, which diagnosis would you make?”
You can also ask students about “most common”, “most concerning”, or “what is the first test you would perform” etc. but it is always good to anchor these stems by referring to the data presented previously. Then the key is to require them to choose, evaluate, interpret, judge, infer from data, solve problems, and apply principles.
Did you Know?
The Student Assessment Committee has posted several articles, checklists and PowerPoint slides to assist you with Multiple Choice Questions.
For more guidance on writing high-quality multiple-choice questions, refer to MCQ Guidelines and Writing MCQs in School of Medicine Faculty and Staff Resources at:
Queen’s School of Medicine: Faculty and Staff Resources.