Listening to Students’ Voices

This is an article I wrote for my Faculty Association's newsletter. Some of it is specific to my institution, but the overall principles hold for many undergraduate instructors.


All semester long, our students listen to our voices (or watch our hands, in the case of ASL courses). And at the end of the semester, it has long been our tradition to ask students to voice their experience in the course, in the form of a course evaluation questionnaire. More than a decade ago, one of my students took the opportunity to make it clear that listening to me had not been a positive experience for them, by commenting, “She has an annoying voice.” I’ve often wondered what that student’s goal was in writing that: Did they think I should have acquired a different voice to make their experience more pleasant? Would they have learned more or better if my voice were different? Did they want my Chair to discipline me for my voice? The only plausible goal I can think of is that they wanted to hurt me. If so, they achieved some modest success – after all, I still remember their barb all these years later. But I know that my white skin and monolingual anglophone Ontario accent shield me from the hate that many colleagues receive in this medium, anonymized and typed up in a formal report issued by the employer. The bigger question, more important than what that student hoped to achieve, is what we hope to achieve: what is our goal in surveying students at the end of each course? 

What do we want to know from student surveys?

Ostensibly, the goal of these surveys is to measure Effective Teaching, or, in the case of Teaching Professors, Excellent Teaching. According to the Tenure & Promotion Policy, "A candidate for re-appointment, tenure and/or promotion must demonstrate that he or she is an effective teacher," and, "A candidate for permanence must demonstrate that he or she is an excellent teacher". And SPS B1 Procedures for the Assessment of Teaching opens by stipulating that "Effective teaching is a condition for […] salary increments based on merit." Our existing policies require the numerical data from student surveys to be considered in determining whether an instructor's teaching is effective, which is necessary for making decisions about tenure, permanence, promotion, and merit.

Who could argue with the goal of ensuring effective teaching? McMaster’s world-class reputation for creativity, innovation and excellence rests in no small part on the quality of our teaching, and it is clear from the immense efforts we have invested in redesigning courses for pandemic virtual learning conditions that McMaster faculty are deeply invested in providing high-quality learning experiences to our students. 

What can we know from student surveys?

The problem is, the numerical scores from student surveys have very little relationship to teaching effectiveness. A large and growing literature has repeatedly found that these surveys are strongly biased by factors such as the instructor’s race, accent, and perceived gender and by students’ grade expectations. So convincing is the evidence that a 2018 decision by arbitrator William Kaplan established the precedent that “averages [of survey responses] establish nothing relevant or useful about teaching effectiveness”, a ruling that required Ryerson University to stop using survey results for promotion or tenure decisions. And in the time since that ruling, pandemic teaching and learning conditions have introduced even more factors that affect students’ learning experiences, such as the quality of their instructor’s home internet connection or their dislike of remote proctoring.

According to our policies, we value effective and excellent teaching. According to the university’s vision statement, we value “excellence, inclusion and community”. But over the years we’ve built a system that calculates a faculty member’s annual merit and makes high-stakes decisions about tenure, permanence and promotion using a number that not only does not measure effectiveness or excellence, but actively works against promoting inclusion and community by reinforcing existing hierarchies of race and gender. The system also does real harm to equity-seeking members of our community by subjecting them to anonymous hateful comments that are irrelevant to their teaching.

If we claim to offer excellent teaching, we have a responsibility to listen to what students say about their learning. If we claim to offer excellent teaching, we have a responsibility to avoid relying on invalid, biased data as evidence.

What has MUFA done about it? 

At the time of the Kaplan ruling, MUFA began collaborating with scholars at the MacPherson Institute to develop innovative and equitable ways to observe teaching effectiveness. A 2019 report made initial recommendations, and an ongoing committee is developing an evidence base that will inform a redesigned system. 

In the absence of an unbiased, valid tool for observing effective teaching, the members of the MUFA executive thought it imperative to attempt to mitigate the biasing effects of the current surveys. To that end, we negotiated a revision to the so-called "summative question" on the surveys. Instead of asking students their opinion of the instructor's effectiveness, that question now asks, "Overall for this course, how would you describe your learning experience?" Furthermore, under pandemic circumstances, we negotiated an agreement that these scores should not be used at all in the assessment of merit for 2020. We hope these temporary changes go some way to reducing the biases of the student survey process, but they were only ever meant to be short-term measures.

What still needs to be done?

Excellent teaching is too complex to be tracked by a single number that gets compared across instructors. As members at a recent MUFA Council meeting pointed out, simply replacing student survey numbers with peer scores or with student focus groups is unlikely to eliminate the bias. If our goal is excellence, or even something more achievable like effectiveness, we need complex, high-quality ways of assessing progress towards that goal. It will take time to achieve this, and we’re working on it. 

Once we’ve developed a new process, we’ll need to update all our policies to make sure they’re consistent with each other. We should also build in a regular schedule for reviewing the process, to make sure we don’t inadvertently regress to overly simple, biased metrics over time. 

One MUFA Council member asked whether we had taken any steps to redress historical consequences of using biased data. The MUFA executive have not discussed this specifically with respect to student survey data, but we are continuing to investigate equity issues in our members’ compensation. The 2015 salary adjustment for women faculty was one outcome of this ongoing process. 

While all that work is going on, we can and should strive for continuous improvement of our teaching. And here is where listening to our students is vital. Students are the only ones who can provide first-hand data on their experiences of our courses. We could design the most rigorous, comprehensive courses in the world, and they still might not support our students' learning. Within a semester, listening to our students might prompt us to move due dates, reweight assessments, or return to a topic that we thought we had finished. By listening to our students at the end of a semester, we can make improvements to the course for the next cohort of students, like converting a timed test to a take-home assignment, or reducing the number of low-stakes quizzes. It was listening to my students saying, "I don't have the textbook. Is it required? Is it on reserve? It's awfully expensive," that led me to create an OER that is freely available to all learners everywhere.

If we claim to offer excellent teaching, we have a responsibility to listen to what students say about their learning. If we claim to offer excellent teaching, we have a responsibility to avoid relying on invalid, biased data as evidence. MUFA is working on ways to do both of these. We welcome your contributions to this work. To get involved with the work of the MUFA Executive, please contact mufa@mcmaster.ca.

Researching Team-Based Learning

Since 2017 was my first time teaching in the Active Learning Classroom (ALC), I wanted to investigate students' experiences in the class, so that I could make evidence-informed decisions about future versions of the course. I partnered with an Honours Thesis student, Nadia Bachar, to conduct a SoTL project on students' experiences. Our research project was approved by the McMaster Research Ethics Board, and I had no access to the identities of students who participated in the research.

At the beginning of the semester we conducted an anonymous survey to gather accounts of students’ experiences of teamwork in other courses. We learned that students enjoyed collaborating with each other, but resented it when team members didn’t pull their weight, and often struggled with the logistics of arranging team meetings around everyone’s course, work, and commuting schedules.

Late in the semester, Nadia conducted two focus groups to pursue her questions of interest. At the end of the semester, we administered a second anonymous survey to supplement the data from the standard course evaluations. Nadia used qualitative data analysis software to code and interpret the data from the two surveys, her focus groups, and the course evaluations.

Nadia presented her findings in a poster at the department’s Student Research Day. Overall, she found that the ALC created the conditions for team activities, and that these team activities supported students’ learning. Students reported that they learned from each other in the team assignments and projects, and that they forged new friendships and learning communities that they carried over to other courses as well.

 

Practicum Courses

Students in Linguistics and CogSciL are always eager to enrol in one of the practicum courses in their fourth year. In Ling 4SL3, students are placed with a Speech-Language Pathologist, and in Ling 4TE3, they're matched with an ESL teacher. They spend at least 36 hours shadowing, assisting, learning, and reflecting on their practical experience in the clinic or in the classroom. Since the practicum courses began, students have always benefited from the practical experience, but when I took over the courses, I also wanted to give them a structured way to reflect on that experience. The Learning Portfolio in PebblePad provided the perfect venue for students to complete weekly reflections and then pull their experience together into a final portfolio.

The home page for each course shows the core organization of the courses, with information for students and for supervisors:

SLP Practicum Home Page

TESL Practicum Home Page

While every student has a different experience, depending on their placement, each course is organized around the same basic structure:

  • Students complete weekly reflections guided by a template.
  • Twice during the semester, they exchange reflections with another student and comment on each other's reflections.
  • Partway through the semester, they meet with their supervisor for a formative assessment that does not count towards their final grade. This assessment helps students gauge how they're progressing towards their goals, and offers an opportunity to refine or update those goals for the rest of the semester. The structured reflection for the week of the assessment uses a different template from the weekly reflections.
  • At the end of the semester, the supervisor completes a summative assessment (TESL Assessment | SLP Assessment) and each student completes a learning portfolio documenting their experience.

 

Collaborative Writing

The Society for Teaching and Learning in Higher Education (STLHE) includes a Special Interest Group for the Scholarship of Teaching and Learning (SoTL). In 2016, the SoTL Canada SIG offered members the opportunity to take part in collaborative writing groups on selected SoTL topics. I was selected to be a member of the writing group on the topic of leadership in SoTL.

The writing group had a couple of meetings over Skype during the weeks leading up to the STLHE annual meeting in June, to discuss members' interests and refine our topic. We then spent the weekend after the STLHE meeting together, reading, writing, and discussing our paper. We left the weekend with a set of assigned tasks that each of us would contribute to the paper.

This was my first experience collaborating with scholars whom I hadn't known before. It was challenging in many ways, as we worked to discover each other's strengths and negotiated our respective responsibilities. In the final weeks before the paper was due, I spent a good deal of time weaving my colleagues' various contributions into a coherent whole, which the team recognized by designating me co-first author with our group's facilitator, Janice Miller-Young.

Miller-Young, J. E., Anderson, C., Kiceniuk, D., Mooney, J., Riddell, J., Schmidt Hanbidge, A., Ward, V., Wideman, M. A., & Chick, N. (2017). Leading Up in the Scholarship of Teaching and Learning. The Canadian Journal for the Scholarship of Teaching and Learning, 8(2). https://doi.org/10.5206/cjsotl-rcacea.2017.2.4