“Universities are centres of learning, not teaching”

These were the words, uttered many years ago, by a former professor and teacher in response to some very demurely and deferentially expressed comments about the quality of lectures being provided in a particular medical school course. The message, directed to me and a couple of my classmates, was pretty clear. The university and faculty would provide opportunities to learn, in whatever manner they felt appropriate. It was not for us, as mere students and consumers, to question the methods. The responsibility for our education was ours.

In fact, in recent discussions with a number of my medical school contemporaries who I’m fortunate to meet with regularly, none of us could recall, during our four years of medical school, ever being asked for feedback of any kind about our educational program. If such processes existed, either internal or external to the school, they were largely invisible to the students of that time. This was certainly not unique to our school. For our generation, medical education was very much a “take it or leave it” proposition.  

This is not to say we didn’t get excellent teaching, role modelling and mentorship. We certainly did, and many of us found our inspiration for education in those early experiences. It’s also almost certainly true that many of the teachers of that time quietly observed and responded to the impact of their methods on their learners. However, the culture of the day simply did not provide methods by which the student experience could be collected and analyzed.

This rather parochial approach was not exclusive to medical education. Patients of the past were rarely, if ever, surveyed for feedback about the quality of care they received from institutions or individual physicians. Corporations and businesses largely allowed the public to “vote with their feet”. If the product wasn’t good, people wouldn’t buy it, or would simply walk away.

Clearly, things have changed.

In the business world, “Consumer Satisfaction” is an industry in itself. Successful businesses aggressively seek out customer feedback because they have learned that responding to real or even perceived needs drives future spending. IBM has taken this a step further. Rather than simply asking questions, they are building and offering services that track consumer behaviour and provide that information to service and product providers.

In health care, knowledge of the patient experience is now considered essential to a well-run institution. Hospitals are expected, through accrediting processes, to actively seek out patient perspectives.

The Agency for Healthcare Research and Quality operates within the U.S. Department of Health and Human Services. Its mission is “to produce evidence to make health care safer, higher quality, more accessible, equitable, and affordable”. To quote from their site:

“Understanding patient experience is a key step in moving toward patient-centered care. By looking at various aspects of patient experience, one can assess the extent to which patients are receiving care that is respectful of and responsive to individual patient preferences, needs and values. Evaluating patient experience along with other components such as effectiveness and safety of care is essential to providing a complete picture of health care quality.” (https://www.ahrq.gov/cahps/about-cahps/patient-experience/index.html)

They make an important distinction between patient satisfaction and the patient experience. Satisfaction is a subjective impression of a patient’s interaction with an institution or individual, and is largely based on whether their personal expectations were met. The patient experience relates to gathering information, available only through patient reporting, that is relevant to determining whether certain institutional goals are being achieved.

A person test driving a new automobile, for example, is able to report on both the driving experience (acceleration, braking, ease of handling, visibility, etc.) and their personal satisfaction (enjoyment, comfort, excitement) while driving the car. To those designing and building the car, evaluating the driving experience allows them to determine whether the equipment and concepts they developed are working as expected. Evaluating driver satisfaction determines whether the consumer is getting what they expected from the car, something that may not be apparent to the designers. Both are relevant to success. Both are certainly relevant to the likelihood that the consumer will purchase the car.

In medical education, the value of student feedback is widely appreciated and schools go to considerable effort and expense to collect it. In fact, the systematic collection of feedback is mandated by accreditation standards, and the evidence required to establish compliance with those standards is based largely on student feedback. The distinction between measurements of the student experience and student satisfaction is relevant, both being important goals. Systematic Program Evaluation must encompass both.

At Queen’s, we recognize that many goals of our educational program can only be fully assessed with the perspective of those actually experiencing and living the process. We also recognize that a full picture only emerges if many points of feedback are provided. We have therefore put in place many and varied opportunities for students to provide both their personal perspectives and objective observations.

After each course, students are invited (and expected) to provide feedback consisting of responses to questions exploring pre-determined educational objectives, along with narrative commentary in which they can elaborate or explore other aspects. Those end-of-course evaluations also offer the opportunity to give similar feedback regarding the effectiveness of teaching faculty.

We receive and carefully review the results of course-related examinations undertaken by our students, not only to gauge their learning, but also to assess the effectiveness of the teaching and learning opportunities provided.

We anticipate and review closely the results of external examinations undertaken by our students, such as the Medical Council of Canada Part 1 and 2 examinations, and all National Board of Medical Examiners tests we utilize. These provide valuable comparators to other institutions and, to a limited extent, further feedback about our teaching effectiveness.

The Canadian Graduation Questionnaire is completed annually by all graduating medical students and provides a comprehensive review of all aspects of their educational experience. We review it in great detail, and many aspects of the CGQ are incorporated into the accreditation process.


We have established a Program Evaluation Committee that, for the past few years, has been under the leadership of Dr. John Drover. That group collects, collates and analyzes data from a variety of sources to provide an overarching analysis of our performance relative to our programmatic goals. The PEC recently released a comprehensive report, which has been passed along to the Curriculum Committee for analysis and action. I am very grateful to Dr. Drover, who has generously and effectively provided PEC leadership. He is now passing that responsibility along to Dr. Cherie Jones as she assumes her role as Assistant Dean, Academic Affairs and Programmatic Quality Assurance.

We have also developed a number of more informal ways by which students can provide feedback.

We meet regularly with student leadership and curricular leads to get “on the fly” feedback about courses as they are taught. This often prompts us to make adjustments or provide supplemental content even before the course is completed.

We provide numerous ways in which students can report personal distress or incidents of mistreatment at any point during their medical school experience. These range from direct contact with selected faculty members or our external counselor (who can be contacted directly and is completely segregated from faculty and assessment), to submission of reports that can be embargoed until a mutually agreed-upon time. All of these are outlined in our policies and accessible through a convenient “Red Button” on MedTech.

I have found “Town Halls” to be very valuable sources of feedback on all aspects of the MD program. These are held at least once per term with each class and consist of a few “current events” items I provide, followed by “open mike” time when students are invited to bring forward any commentary or questions they may have about any aspect of the program. The issues that emerge and the dialogue among students in attendance can be highly revealing and have certainly provoked new directions and changes over the years.

Recognizing that not all students are comfortable speaking out, or may not wish to be identified when raising sensitive issues, we established a confidential portal on MedTech a number of years ago. Students are able to provide their commentary in a completely anonymous fashion if they wish. My commitment is to read and consider (but not necessarily act on) all commentary provided, and to respond personally if students choose to identify themselves. To date, I have received almost 500 such submissions, about 70% of which were provided anonymously. The commentary has been thoughtfully provided and has spanned all aspects of our program and learning environment. Importantly, it has often brought to light issues that had not previously emerged in any other way.

In all these ways, student feedback has become a continuing, multi-faceted component of our school and, more broadly, our learning environment. It goes beyond a mechanical, mandated exercise in data collection. It is embedded and cultural. It is what we do. It is who we are.