Assessment and Evaluation



Queen’s PGME is emerging as a national leader in assessment and evaluation. Here we highlight some of the exciting activities from the past year.

Preparing Assessment and Evaluation Professionals


Our Assessment and Evaluation Intern, Ulemu Luhanga, has been working with Queen’s PGME for the past two years on various projects while conducting her doctoral research in Pediatrics. We are very proud to announce that she has accepted a position with Emory University in Atlanta as an Educational Researcher in Graduate Medical Education. Although we are sad to see her go, the fact that she secured a position before even completing her doctoral studies is a testament to the quality of learning her internship with PGME afforded. We look forward to continued collaboration with her as she establishes herself in her new position.

Ayca Toprak, one of our fourth-year surgery residents, also recently completed her Master’s studies in assessment and evaluation. Ayca’s research focused on building a validity argument for the Surgical Procedure Rubric. Her work is featured in an upcoming article in the American Journal of Surgery.

Most recently, we have entered into a collaborative relationship with members of the newly created Assessment and Evaluation Unit at McGill University. Drawing on our mutual strengths, this promises to be a very productive research and development partnership. Plans are currently underway to establish a national Assessment and Evaluation group with the support of the RCPSC’s Associate Director of Assessment, Farhan Bhanji.

Assessment, Promotion, and Appeals Policy

Anticipating the APA policy implications that the move to CBME would demand, we have closely monitored the emergent needs of our Family Medicine program as it navigated its transition to CBME over the past six years. We also conducted an environmental scan of Canadian PGME assessment policy. Based on our practical experience, our research, and the Royal College’s CBD initiative, we have identified key policy issues associated with CBME and plan to revise our policy to align with them.

Essentially, assessment is the backbone of CBME and must be strategically utilized to monitor ongoing progress and inform high-stakes decision making. Key APA policy provisions required to fully leverage the power of programmatic assessment in CBME include, but are not limited to, the following:

  • Dedicated academic advisors must establish longitudinal relationships with residents.
  • Academic advising meetings must occur on a regularly scheduled basis (e.g., at 4-month intervals) to review overall progress.
  • Learning plan templates must guide the academic review process, wherein residents review, in advance, the assessment information collected over the previous learning cycle, set learning goals for the upcoming cycle, and determine the kind of assessment information required to provide evidence of goal attainment.
  • Academic advisors must be charged with reviewing learning plans in advance of meetings to validate residents’ interpretations of performance patterns and discuss discrepancies, review future learning goals and the required evidence of achievement, and, where necessary and possible, adjust scheduled learning experiences to better align with resident learning needs.
  • Residents must be empowered to take active responsibility for assessment by negotiating the focus of assessment with clinical preceptors based on their learning plans and explicitly assuming responsibility for sharing their own performance information.
  • Academic advisors must be empowered to trigger comprehensive resident file reviews as patterns of low performance or failure to progress emerge.
  • Competency committees must be created and assume responsibility for discerning whether a resident’s performance pattern constitutes a persistent pattern of low performance requiring remediation or an upward trajectory that could be resolved with an enhanced learning program.
  • Competency committees must also assume responsibility for declarations of stage-specific RC EPA achievement and readiness to transition along the four stages of training in the competence continuum, based on recommendations from academic advisors and a comprehensive review of pertinent assessment evidence.

In our policy reform process, it has become abundantly clear that shifts in assessment culture happen over time and are promoted through institutional values and processes expressed in APA policy. Ultimately, we conceptualize policy as a catalyst for change and have successfully leveraged this approach since 2010. Our progress in supporting the development of programmatic assessment systems across our specialty programs is evidence of our success in this regard. Our efforts are ongoing as we navigate the transition to CBME. We are particularly well positioned to bring together our practical experience, assessment research expertise, and direction from the RCPSC, with the potential to inform APA policy guidelines that support others on the path to realizing CBD.

Programmatic Assessment at Queen’s


Over the four years since our last accreditation we have continued to enhance and expand our assessment and evaluation capacity. Our explicit goal has been the development of programmatic assessment systems across all specialty programs. We identify four categories of assessment tools that together contribute to the creation of programmatic assessment systems: Point of Care, Observation, Testing, and Other. A recently conducted scan of programmatic assessment at Queen’s revealed significant uptake across all categories of assessment tools.

Overall, only two programs for which ‘point of care assessments’ apply currently implement no assessments of this type; however, both are in the process of developing these tools. Five programs have one in use, while another two programs use two of these tools. Eleven programs use three or four of these types of assessments; one program has five in use and another has six.

In terms of ‘observational tools’ (e.g., presentation rubrics, OSCEs, simulations, observed histories and physicals, and STACERs), thirteen programs have one or two observational tools in use, eleven have three or four, one program is currently developing such a tool, and only three programs have yet to adopt these. With regard to ‘testing’ (e.g., shelf examinations, short-answer and multiple-choice exams developed in house, and oral examinations), eight programs use at least one form of testing, six programs use two forms, seven use three forms, and two use four forms. Only four programs do not yet have some form of testing in place on a regular basis, one of which is currently being redesigned. In addition, fourteen programs maintain procedural logs and three other programs use consult/referral note assessment.

In summary, fourteen specialty programs have highly developed programmatic assessment systems, with between seven and thirteen assessment practices in use. Eleven other programs have established solid assessment protocols that rely on four to six assessment tools to monitor resident learning; of those, four programs are currently developing additional tools. Only three programs use fewer than four assessment tools, and all three are in the process of developing additional assessment opportunities for their residents.

Technology Enhanced Assessment and Evaluation

Educational leaders continue to work closely with members of the Education Technology Unit to adapt existing functionalities and develop additional ones to support PGME’s move to CBME. A major focus over the past year has been developing the system to replace one45. Several programs are currently piloting the new system, with rollout to all programs scheduled to occur over the next academic year.





Dr. Laura McEwen is the Director, Assessment and Evaluation for Postgraduate Medical Education at Queen's University.  She can be contacted at laura.mcewen@queensu.ca.