Congratulations to Drs. Ted Ashbury and Heather Murray

Congratulations to Dr. Ted Ashbury (Anesthesia) and Dr. Heather Murray (Emergency), both of whom are very involved in Medical Education!  They have been awarded the Canadian Association for Medical Education (CAME) Certificate of Merit, which promotes, recognizes and rewards faculty committed to medical education in Canadian medical schools.

In Undergraduate Medical Education, Ted developed and is the Course Director for Professional Foundations 2 and 3, pre-clerkship courses that teach the intrinsic (non-medical expert) roles of a physician.  He has served as the Competency Lead for the Professionalism role since that position was created, and was a founding member of the UGME Curriculum Committee.

In Undergraduate Medical Education, Heather developed and is the Course Director of Critical Appraisal, Research and Learning (CARL) and the Critical Enquiry Course in pre-clerkship UGME.  She is also the Competency Lead for the Scholar role from years 1-4 and serves on the UGME Curriculum Committee.

These deserving colleagues will be recognized at the upcoming CAME Annual General Meeting which is held in conjunction with the Canadian Conference on Medical Education (CCME) in Québec, QC on Sunday, April 21, 2013 at 17:30 at the Hilton Hotel Québec. Please join us in congratulating these individuals for their commitment to medical education in Canada.


Many thanks for tremendous work: Farewell but not goodbye

Dr. Stephanie Baxter has moved from her position as Co-Course Director for Neurology and Ophthalmology in Undergraduate Medical Education to serve as the new Residency Program Director for the Department of Ophthalmology.  She has therefore also left her position on the UGME Teaching and Learning Committee, of which she was an inaugural member.

It’s difficult to express all that Stephanie has quietly accomplished in undergraduate medicine: from piloting the extremely successful Ophthalmology Skills Fair to leading a complete course revision, she was one of the first exemplars of balanced teaching methods.  Stephanie served the Teaching and Learning Committee well for five years, representing clinical teaching and supporting its initiatives through her own teaching practice.

Perhaps most telling, however, is Stephanie’s contribution to student learning. She is the recipient of the 2011 Aesculapian Society’s Lectureship Award, and has already made an impact with her work in teaching residents, winning the Garth Taylor Resident Teaching Award of 2012, both attesting to the way Stephanie is able to interact with students to help them learn.

We wish Stephanie well in her work in Post Graduate Medical Education, and hope that our undergraduate students will still have the benefit of her teaching.  Many thanks Stephanie, for all your tremendous work!


What do p- and R-values mean anyhow? Understanding how to interpret multiple-choice test scores using statistics.

Have you ever wondered whether your multiple-choice questions (MCQs) are too easy? The answer lies in the p-value, or item difficulty: the percentage of students who answered the question correctly. The p-value of an MCQ can range from 0.00 to 1.00; the higher the p-value, the easier the question. The questions to watch are the very difficult ones, with p-values less than 0.3.

Have you ever wondered which questions tricked students who otherwise performed well on a test overall? The R-value, or item discrimination, looks at the relationship between how students performed on a question and their total score. It indicates how well a question distinguishes students who know the tested material from those who do not. The higher the R-value, the more discriminating the test question. We should review questions with discrimination values (R-values) near or below 0.3: a low or negative R-value means that students who did poorly on the test did as well as, or better than, the strong performers on that question.
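For readers who like to see the arithmetic, the two statistics above can be computed by hand from a table of right/wrong answers. The sketch below is illustrative only (the function names and sample data are ours, not from any particular scoring service): it computes the p-value as the proportion correct, and the R-value as the correlation between a student's score on one item and their total on the rest of the test.

```python
# Illustrative item-analysis sketch: p-value (difficulty) and R-value
# (discrimination) for one MCQ, from a 0/1 matrix of responses
# (rows = students, columns = questions). Names and data are made up.
from statistics import mean

def item_difficulty(scores, item):
    """p-value: the proportion of students who answered the item correctly."""
    return mean(row[item] for row in scores)

def item_discrimination(scores, item):
    """R-value: correlation between the item score and each student's
    total on the rest of the test (a point-biserial coefficient)."""
    x = [row[item] for row in scores]             # 0/1 on this item
    y = [sum(row) - row[item] for row in scores]  # rest-of-test total
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy) ** 0.5

# Example: four students, three questions of increasing difficulty.
scores = [
    [1, 1, 1],   # strongest student
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],   # weakest student
]
print(item_difficulty(scores, 0))                 # easy item: p = 0.75
print(round(item_discrimination(scores, 1), 2))   # positive, discriminating
```

Notice that in this toy data every item discriminates in the right direction: the students who got an item right also scored higher on the rest of the test, so the R-values come out positive.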

Did you Know?

Multiple-choice questions that use words in the stem such as best, most, first, or most correct require higher-level thinking but often confuse students when they are ambiguously worded. Our students have struggled lately with ambiguity in the wording of MCQs on RATs and exams, such as “Which is the most likely….” They assume “most likely” means “most common,” whereas the most likely answer could be an uncommon situation. It’s important to word the question clearly so that students are not confused. So, for example, the question could state, “In light of the clinical information provided above, which diagnosis would you make?”

You can also ask students about “most common”, “most concerning”, or “what is the first test you would perform” etc. but it is always good to anchor these stems by referring to the data presented previously. Then the key is to require them to choose, evaluate, interpret, judge, infer from data, solve problems, and apply principles.

Did you Know?

The Student Assessment Committee has posted several articles, checklists and PowerPoint slides to assist you with Multiple Choice Questions.

For more guidance on writing high-quality multiple-choice questions refer to MCQ Guidelines and Writing MCQ’s in School of Medicine Faculty and Staff Resources at:

http://meds.queensu.ca/home/faculty_staff_resources/assessment_resources

 

References

http://ctl.utexas.edu/programs-and-services/scanning/interpreting-results/

http://www.washington.edu/oea/services/scanning_scoring/scoring/item_analysis.html

Queen’s School of Medicine: Faculty and Staff Resources.
http://meds.queensu.ca/home/faculty_staff_resources/assessment_resources


Translating students’ comments on course evaluations

Navigating students’ comments could be one of the most challenging aspects of interpreting course evaluations. In an article in Innovative Higher Education, Linda Hodges and Katherine Stanton (2007) suggest using these comments as “windows into the process of student learning and intellectual development” rather than as reviews of “how they have been entertained” by an instructor.

Hodges is Director of the Harold W. McGraw, Jr. Center for Teaching and Learning at Princeton University; Stanton is the center’s assistant director. They point out that sometimes students’ comments stem from “students’ expectations of or prior experiences with college classes” that “entail teachers standing in front of the room ‘telling.’”

For example, is a comment like “I did not learn in this class because the teacher did not teach” evidence of a lack of effective teaching, or evidence that the style of teaching – including lots of team-based work – wasn’t what the student was expecting? Reframing student comments in this light can ultimately help improve teaching, Hodges and Stanton suggest.

“We may see our evaluations less as judgments of our performance and more as insight into our students’ intellectual growth—insight that may engage us in intellectual growth as teachers and scholars.”

Hodges, L.C., and Stanton, K. (2007). “Translating comments on student evaluations into the language of learning” in Innovative Higher Education 31:279-286.

 Permalink: http://resolver.scholarsportal.info/resolve/07425627/v31i0005/279_tcoseitlol

 


Can Students Multitask?

You may have noticed an occasional student referring to his Facebook page, or her iPod or iPad, while also apparently listening to your lecture or working with her/his team-mates in small group learning.  They are multitasking, as part of the “M” generation.  But are they really multitasking?  And is it working for them as successful learners?  Dr. MaryEllen Weimer has collected evidence in her article that students compromise their learning by multitasking, and she suggests we present them with the evidence to help them re-evaluate their approach to learning.  For a synopsis of the research she has collected, go to Faculty Focus.


RATs (Readiness Assessment Tests): To time or not to time?

A very recent study at Regis University School of Pharmacy, which won best poster at the Team Based Learning Cooperative’s Annual Meeting, determined that students preferred timed tests over tests with no time limits.  Students also indicated that they preferred to be told that five minutes remained once fifty percent of the class had completed the iRAT (variable time limit), rather than being informed of the total time allotted to complete the iRAT (defined time limit).  (Richetti et al., 2012)

This has value for us to explore.  The investigators were addressing the problem of “down time” in RATs, where some students have finished and others have not.  Of the 74 students surveyed, 97% indicated that the variable time-limit strategy was useful, and less likely to induce anxiety than the defined time limit.

Why not try this method with your students when you give a RAT?  Carefully observe (and if there is another facilitator with you, use her/his observations) to determine when half the class has completed the RAT.  Some of our faculty ask students to raise a hand when the group or individual has finished a RAT, or to use some other signal, like a green card attached to their group clipboard.  That will enable you to give a “5-minute warning” to the rest of the class.
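The variable time-limit rule above boils down to a simple threshold: announce the warning as soon as the count of finished students reaches half the class. A minimal sketch (the function name and class size are illustrative, not from the study):

```python
# Hypothetical sketch of the variable time-limit rule: once half the
# class has signalled completion of the iRAT, the facilitator announces
# that five minutes remain.
def warn_at_half(finished_count, class_size):
    """Return True once the 5-minute warning should be given."""
    return finished_count >= class_size / 2

# Example: in a class of 30, the warning fires at the 15th raised hand.
for hands in range(1, 31):
    if warn_at_half(hands, 30):
        print(f"Five minutes remaining (announced at {hands} students done).")
        break
```

In practice, of course, the “counter” is just the facilitator watching for raised hands or green cards; the sketch only makes the 50% trigger explicit.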

If you try this, please let us know what your findings are.

By the way, the investigators also found some additional benefits of timing RATs from their survey:

Our survey determined that the timing of iRATs decreases “down-time”, helps students increase their confidence in their ability to perform well on timed exams (e.g. board exams) and provides more time to focus on applications [tasks]. While students reported an increase in anxiety caused by the timing of iRATs, they reported they preferred the timed iRATs over the iRATs that were not timed.

Richetti, C., Brunner, J.M., Fete, M., Luckey, S., & Nelson, M. (2012). “Student Perceptions on the Value of Timing Readiness Assurance Tests.” Poster presented at the TBLC Annual Meeting, St. Petersburg.

 

