Proposal Title

Examining Exams: What Makes For A Good Question?

Session Type

Presentation

Room

PAB 106

Start Date

July 10, 2013, 1:15 PM

Keywords

first year chemistry, item response theory

Primary Threads

Evaluation of Learning

Abstract

Perhaps the most important task confronting educators is the accurate assessment of student learning. All efforts at reform or innovation must ultimately address the question “Has student learning improved?” In addition, the movement toward individualized learning must be able to answer the question “Has THIS student’s learning improved?” These questions require that we, as educators, develop the tools and the expertise to critically evaluate our assessment instruments. Large first-year science classes permit the application of learning-measurement theories at the leading edge of cognitive and educational science. At the University of Guelph, approximately 2,400 first-year chemistry students complete several multiple-choice exams throughout the year, principally in two courses. When a student’s grade is calculated at the end of the year, what should that grade mean? A careful study of exam questions using modern learning-measurement theories begins to inform the process of exam creation and interpretation. Classical Test Theory (CTT) is commonly used to analyze the performance of both students and exam questions, but this older theory has several well-known weaknesses. Item Response Theory (IRT) is better suited to characterizing the performance of both exam questions and students. I will present an analysis of exam questions used at Guelph over the past several years to identify the features that lead to better measures of student learning. Future pedagogical changes will be more readily assessed with these measurement tools.
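The abstract does not specify which IRT model the analysis uses; as an illustration of the general idea, the widely used two-parameter logistic (2PL) model expresses the probability that a student answers an item correctly as a function of the student's ability and two item parameters, discrimination and difficulty. The function name and parameter values below are illustrative assumptions, not taken from the talk:

```python
import math

def item_response_2pl(theta, a, b):
    """Probability that a student of ability `theta` answers an item
    correctly under the two-parameter logistic (2PL) IRT model.

    a: discrimination -- how sharply the item separates ability levels
    b: difficulty -- the ability at which P(correct) = 0.5
    (Illustrative sketch; parameter values below are assumed, not from the talk.)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Compare a sharply discriminating item (a = 2.0) with a weakly
# discriminating one (a = 0.5), both of average difficulty (b = 0):
for theta in (-1.0, 0.0, 1.0):
    sharp = item_response_2pl(theta, 2.0, 0.0)
    flat = item_response_2pl(theta, 0.5, 0.0)
    print(f"theta={theta:+.1f}  sharp item P={sharp:.2f}  flat item P={flat:.2f}")
```

Unlike the single difficulty index of CTT, fitting these per-item curves shows how informative each question is across the ability range, which is one way an IRT analysis can identify "good" exam questions.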
