Proposal Title

Are we offering science students sufficient authentic assessments?

Session Type

Presentation

Room

Somerville House, room 3317

Start Date

July 12, 2023, 1:00 PM

End Date

July 12, 2023, 1:20 PM

Keywords

authentic assessment, realism, cognitive challenge, evaluative judgement, feedback, curriculum, BSc program, large classes, remote teaching, rubric

Primary Threads

Evaluation of Learning

Abstract

Authentic assessments are commonly promoted to assess higher-order thinking, encourage deep learning, and strengthen ties between classroom content and real-world problems. However, while many institutions highlight a commitment to authentic assessment in strategic documents, a clear definition of what constitutes authenticity, and objective measures of the patterns and prevalence of these assessments, are lacking. In a multi-year project, we compiled an inventory of all assessments across a complete BSc curriculum at a large Canadian comprehensive university and documented their authenticity to better understand students' assessment experiences and to facilitate discussion among instructors. Based on Villarroel et al.'s (2018) core dimensions of authenticity (realism, cognitive challenge, and evaluative judgement), we developed a rubric-style tool that scores individual assessments as low, moderate, or high on each dimension. The tool has been applied to more than 1,000 assessments from face-to-face and remote settings, uncovering patterns in authenticity by class size, year level, and assessment type. The prevalence of authentic assessment in our BSc program was low (<2%), with evaluative judgement the weakest dimension across contexts. Small fourth-year courses were more authentic than large, early-year core courses, and assignments were consistently more authentic than tests. Curriculum-level authenticity did not change from face-to-face to remote settings, although nearly equal numbers of courses increased and decreased in authenticity. This work presents a tangible tool and process for critically reviewing individual assessments or complete curricula, and offers a representative data set for comparison. We will share practical strategies that participants can consider at the course, curriculum, or institutional level to promote authenticity, along with an open call for collaboration.

Elements of Engagement

Basic polling features (online and/or in-room) will be used throughout to understand the range of participants' experiences. Key findings will be presented using PowerPoint slides. The Authentic Assessment Tool will be distributed as a handout, with key findings noted on the reverse (text and figure). A link to access the materials electronically will also be provided. The presentation will take 12-15 minutes, with 5-8 minutes for interaction and questions throughout. A call for collaboration will be facilitated by email sign-up, with follow-up after the session.
