Medical students are typically asked to complete course evaluations, but little is known about how students decide to rate courses. This study examined the student feedback process by exploring the dimensionality of a course evaluation tool and relating the resulting factors to the overall rating of a course.
During the 2007-2008 academic year, all first- and second-year students were asked to provide feedback on various aspects of curricular content, delivery, and assessment for seven courses taught in the first two years of a clinical presentation curriculum. The authors examined the structure of the evaluation instrument using principal component factor analysis and used multiple linear regression to study the relationship between factors and overall course ratings.
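The two-step analysis described above, extracting components from item-level evaluation responses and then regressing overall ratings on the component scores, can be sketched as follows. This is a minimal illustration on simulated data, not the authors' actual analysis: the number of items, the four-factor structure, and all data values are invented for the example.

```python
# Hypothetical sketch: principal component extraction from simulated
# evaluation-item responses, followed by a linear regression of overall
# course ratings on the resulting component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_students, n_items = 200, 12

# Simulate item responses driven by four latent factors (the study
# identified four factors; everything here is illustrative).
latent = rng.normal(size=(n_students, 4))
loadings = rng.normal(size=(4, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))

# Simulate an overall rating driven mostly by one latent factor,
# mirroring the finding that a single factor dominated.
overall = latent[:, 0] + 0.2 * latent[:, 1] + rng.normal(scale=0.3, size=n_students)

# Step 1: extract four components from the item responses.
pca = PCA(n_components=4)
scores = pca.fit_transform(items)

# Step 2: regress overall ratings on the component scores and
# report the proportion of variance explained (R-squared).
reg = LinearRegression().fit(scores, overall)
r_squared = reg.score(scores, overall)
print(round(r_squared, 2))
```

In this sketch the R-squared from step 2 plays the role of the "variance in overall course ratings accounted for by the factors" reported in the results; the regression coefficients would indicate which component is most strongly associated with the overall rating.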
Four stable and reliable factors were identified (assessment of students, small-group learning, basic science teaching, and teaching diagnostic approaches) that together accounted for about 50% of the total variance in overall course ratings. The assessment-of-students factor showed the strongest association with overall ratings and, for second-year students, was the only factor so associated.
Of the four factors, assessment of students was by far the strongest predictor of overall course ratings, and this association strengthened over time. These results are consistent with the "peak-end rule" and "negativity dominance" described for the rating of emotional experiences.