This is an article I wrote for my Faculty Association’s newsletter. Some of it is specific to my institution, but the overall principles hold for many undergraduate instructors.
All semester long, our students listen to our voices (or watch our hands, in the case of ASL courses). And at the end of the semester, it has long been our tradition to ask students to voice their experience in the course, in the form of a course evaluation questionnaire. More than a decade ago, one of my students took the opportunity to make it clear that listening to me had not been a positive experience for them, by commenting, “She has an annoying voice.” I’ve often wondered what that student’s goal was in writing that: Did they think I should have acquired a different voice to make their experience more pleasant? Would they have learned more or better if my voice were different? Did they want my Chair to discipline me for my voice? The only plausible goal I can think of is that they wanted to hurt me. If so, they achieved some modest success – after all, I still remember their barb all these years later. But I know that my white skin and monolingual anglophone Ontario accent shield me from the hate that many colleagues receive in this medium, anonymized and typed up in a formal report issued by the employer. The bigger question, more important than what that student hoped to achieve, is what we hope to achieve: what is our goal in surveying students at the end of each course?
What do we want to know from student surveys?
Ostensibly, the goal of these surveys is to measure Effective Teaching, or, in the case of Teaching Professors, Excellent Teaching. According to the Tenure & Promotion Policy, “A candidate for re-appointment, tenure and/or promotion must demonstrate that he or she is an effective teacher”, and, “A candidate for permanence must demonstrate that he or she is an excellent teacher”. And SPS B1, Procedures for the Assessment of Teaching, opens by stipulating that “Effective teaching is a condition for […] salary increments based on merit.” Our existing policies require the numerical data from student surveys to be considered in determining whether an instructor’s teaching is effective, which is necessary for making decisions about tenure, permanence, promotion, and merit.
Who could argue with the goal of ensuring effective teaching? McMaster’s world-class reputation for creativity, innovation and excellence rests in no small part on the quality of our teaching, and it is clear from the immense effort we have poured into redesigning courses for pandemic virtual learning conditions that McMaster faculty are deeply invested in providing high-quality learning experiences to our students.
What can we know from student surveys?
The problem is that the numerical scores from student surveys bear very little relationship to teaching effectiveness. A large and growing literature has repeatedly found that these surveys are strongly biased by factors such as the instructor’s race, accent, and perceived gender and by students’ grade expectations. So convincing is the evidence that a 2018 decision by arbitrator William Kaplan established the precedent that “averages [of survey responses] establish nothing relevant or useful about teaching effectiveness”, a ruling that required Ryerson University to stop using survey results for promotion or tenure decisions. And in the time since that ruling, pandemic teaching and learning conditions have introduced even more factors that affect students’ learning experiences, such as the quality of their instructor’s home internet connection or their dislike of remote proctoring.
According to our policies, we value effective and excellent teaching. According to the university’s vision statement, we value “excellence, inclusion and community”. But over the years we’ve built a system that calculates a faculty member’s annual merit and makes high-stakes decisions about tenure, permanence and promotion using a number that not only fails to measure effectiveness or excellence but actively works against inclusion and community by reinforcing existing hierarchies of race and gender. The system also does real harm to equity-seeking members of our community by subjecting them to anonymous hateful comments that are irrelevant to their teaching.
If we claim to offer excellent teaching, we have a responsibility to listen to what students say about their learning. If we claim to offer excellent teaching, we have a responsibility to avoid relying on invalid, biased data as evidence.
What has MUFA done about it?
At the time of the Kaplan ruling, MUFA began collaborating with scholars at the MacPherson Institute to develop innovative and equitable ways to observe teaching effectiveness. A 2019 report made initial recommendations, and an ongoing committee is developing an evidence base that will inform a redesigned system.
In the absence of an unbiased, valid tool for observing effective teaching, the members of the MUFA executive thought it imperative to attempt to mitigate the biasing effects of the current surveys. To that end, we negotiated a revision to the so-called “summative question” on the surveys. Instead of asking students their opinion of the instructor’s effectiveness, that question now asks, “Overall for this course, how would you describe your learning experience?” Furthermore, under pandemic circumstances, we negotiated an agreement that these scores should not be used at all in the assessment of merit for 2020. We hope these temporary changes go some way to reducing the biases of the student survey process, but they were only ever meant to be short-term measures.
What still needs to be done?
Excellent teaching is too complex to be tracked by a single number that gets compared across instructors. As members at a recent MUFA Council meeting pointed out, simply replacing student survey numbers with peer scores or with student focus groups is unlikely to eliminate the bias. If our goal is excellence, or even something more achievable like effectiveness, we need complex, high-quality ways of assessing progress towards that goal. It will take time to achieve this, and we’re working on it.
Once we’ve developed a new process, we’ll need to update all our policies to make sure they’re consistent with each other. We should also build in a regular schedule for reviewing the process, to make sure we don’t inadvertently regress to overly simple, biased metrics over time.
One MUFA Council member asked whether we had taken any steps to redress historical consequences of using biased data. The MUFA executive have not discussed this specifically with respect to student survey data, but we are continuing to investigate equity issues in our members’ compensation. The 2015 salary adjustment for women faculty was one outcome of this ongoing process.
While all that work is going on, we can and should strive for continuous improvement of our teaching. And here is where listening to our students is vital. Students are the only ones who can provide first-hand data on their experiences of our courses. We could design the most rigorous, comprehensive courses in the world, and they still might not support our students’ learning. Within a semester, listening to our students might prompt us to move due dates, reweight assessments, or return to a topic that we thought we had finished. By listening to our students at the end of a semester, we can make improvements to the course for the next cohort of students, like converting a timed test to a take-home assignment, or reducing the number of low-stakes quizzes. It was listening to my students saying, “I don’t have the textbook. Is it required? Is it on reserve? It’s awfully expensive,” that led me to create an OER (open educational resource) that is freely available to all learners everywhere.
If we claim to offer excellent teaching, we have a responsibility to listen to what students say about their learning. If we claim to offer excellent teaching, we have a responsibility to avoid relying on invalid, biased data as evidence. MUFA is working on ways to do both of these. We welcome your contributions to this work. To get involved with the work of the MUFA Executive, please contact mufa@mcmaster.ca.