Seeing through student evaluations of teaching (SETs)

Session Type

Poster

Room

The Great Hall, Somerville House (room 3326)

Start Date

17-7-2025 4:00 PM

End Date

17-7-2025 6:00 PM

Keywords

SET, bias, nonresponse, student feedback, teaching effectiveness, course evaluations

Primary Threads

Teaching and Learning Science

Abstract

Student evaluations of teaching (SETs) are widely used in academia as a formative assessment of teaching and as a data source for evaluation and institutional quality assurance efforts (Spooren et al., 2013). However, online SETs often suffer from low response rates and are subject to various biases, raising questions about their validity and reliability as measures of teaching effectiveness (Spooren et al., 2013; Kreitzer & Sweet-Cushman, 2022).

When we launched a new blended format of an introductory statistics for life sciences course last year, we sought additional feedback from our students on the quality of their learning experience at the end of the term to supplement the feedback we would later receive through SETs. Our end-of-term student survey was deemed exempt from research ethics review by the University of Toronto Social Sciences, Humanities, and Education Research Ethics Board.

Four questions on our end-of-term student survey were identical to items that appeared on the SET that term. Since the survey questions were included on a reflection activity that counted toward course participation, it had a higher response rate than the SET (86% versus 34%). We observed differences in student responses for the four questions, although the magnitude and direction of these differences varied by question.

This poster will compare results for each question between the end-of-term survey and the SET. Although we do not attempt to explore the validity of SETs as a measure of teaching effectiveness here, the discrepancies in feedback collected from students in two different ways offer an interesting glimpse into the nature of SET bias, prompting a discussion of sustainable approaches to evaluating teaching effectiveness.

Elements of Engagement

Participants will have the opportunity to ask questions and share their own experiences with SETs, student feedback on teaching, and other measures of teaching effectiveness.
