Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2018 Informa UK Limited, trading as Taylor & Francis Group. Student evaluations of teaching (SETs) have been used to evaluate higher education teaching performance for decades. Reporting SET results often involves the extraction of an average for some set of course metrics, which facilitates the comparison of teaching teams across different organisational units. Here, we draw attention to ongoing problems with the naive application of this approach. Firstly, a specific average value may arise from data that demonstrate very different patterns of student satisfaction. Furthermore, the use of distance measures (e.g. an average) for ordinal data can be contested, and finally, issues of multiplicity increasingly plague approaches using hypothesis testing. It is time to advance the methodology of the field. We demonstrate how multinomial distributions and hierarchical Bayesian methods can be used to contextualise the SET scores of a course to different organisational units and student cohorts, and then show how this approach can be used to extract sensible information about how a distribution is changing.
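The abstract describes treating SET responses as multinomial counts rather than reducing them to an average. As a brief illustration only (not the authors' implementation; the response counts and the flat Dirichlet prior below are invented for the example), the Python sketch shows how two very different 5-point response patterns can share the same mean, and how a conjugate Dirichlet-multinomial update gives a posterior over the full response distribution instead of a single score:

```python
# Minimal sketch, assuming hypothetical 5-point SET response counts.
# (a) Two different satisfaction patterns can share the same average.
# (b) A Dirichlet-multinomial update yields a posterior over the whole
#     response distribution, the building block of a hierarchical model.
import numpy as np

ratings = np.arange(1, 6)
course_a = np.array([0, 5, 40, 5, 0])    # most students choose "3"
course_b = np.array([25, 0, 0, 0, 25])   # polarised: only "1"s and "5"s

for name, counts in [("A", course_a), ("B", course_b)]:
    mean = (ratings * counts).sum() / counts.sum()
    print(f"Course {name}: counts={counts.tolist()}, mean={mean:.2f}")
# Both courses report a mean of 3.00 despite opposite satisfaction patterns.

prior_alpha = np.ones(5)                 # flat Dirichlet prior (illustrative)
rng = np.random.default_rng(0)

def posterior_samples(counts, n=4000):
    """Sample category probabilities from the Dirichlet posterior."""
    return rng.dirichlet(prior_alpha + counts, size=n)

for name, counts in [("A", course_a), ("B", course_b)]:
    p = posterior_samples(counts)
    top = p[:, 4]                        # posterior share of "5" ratings
    print(f"Course {name}: P(rating=5) ~ {top.mean():.2f} "
          f"(95% interval {np.quantile(top, [0.025, 0.975]).round(2)})")
```

Extending this to the hierarchical setting described in the paper would involve sharing the Dirichlet parameters across organisational units; the sketch above only shows the single-course case.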
Author(s): Kitto K, Williams C, Alderman L
Publication type: Article
Publication status: Published
Journal: Assessment and Evaluation in Higher Education
Year: 2019
Volume: 44
Issue: 3
Pages: 338-360
Online publication date: 06/12/2018
Acceptance date: 28/06/2018
ISSN (print): 0260-2938
ISSN (electronic): 1469-297X
Publisher: Routledge
URL: https://doi.org/10.1080/02602938.2018.1506909
DOI: 10.1080/02602938.2018.1506909