Anya Kamenetz in NPR (image LA Johnson/NPR):
Recently, a number of faculty members have been publishing research showing that the comment-card approach may not be the best way to measure the central function of higher education.
Philip Stark is the chairman of the statistics department at the University of California, Berkeley. “I've been teaching at Berkeley since 1988, and the reliance on teaching evaluations has always bothered me,” he says.
Stark is the co-author of “An Evaluation of Course Evaluations,” a new paper that explains some of the reasons why.
For one thing, there's response rate. Fewer than half of students complete these questionnaires in some classes. And, Stark says, there's sampling bias: Very happy or very unhappy students are more motivated to fill out these surveys.
Then there's the problem of averaging the results. Say one professor gets “satisfactory” across the board, while her colleague is polarizing: Perhaps he's really great with high performers and not too good with low performers. Are these two really equivalent?
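A quick numerical sketch of that averaging problem (the numbers are invented for illustration and are not from Stark's paper): two instructors can have identical mean ratings on a 1-to-5 scale while their rating distributions tell very different stories.

```python
import statistics

# Hypothetical 1-5 student ratings (illustrative numbers only).
prof_a = [3, 3, 3, 3, 3, 3, 3, 3]  # uniformly "satisfactory"
prof_b = [5, 5, 5, 5, 1, 1, 1, 1]  # polarizing: great for some students, poor for others

for name, ratings in [("A", prof_a), ("B", prof_b)]:
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    print(f"Prof {name}: mean = {mean:.1f}, sd = {sd:.2f}")

# Both means come out to 3.0, but the spread differs sharply;
# reporting only the average erases that difference.
```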
Finally, there's the simple fact that faculty interactions with students and the student experience in general vary widely across disciplines and types of class. Whether they're in an upper-division seminar, a studio or lab, or a large lecture course, students are usually asked to fill out the same survey.
Stark says his paper is unlikely to surprise most faculty members: “I think that there's general agreement that student evaluations of teaching don't mean what they claim to mean.” But, he says, “there's fear of the unknown and inertia around the current system.”
Michele Pellizzari, an economics professor at the University of Geneva in Switzerland, has a more serious claim: that course evaluations may in fact measure, and thus motivate, the opposite of good teaching.
More here.