Student Evaluations

The focus on Moneylaw is usually scholarship, and for good reason. At least with scholarship there are things to count, whether downloads, citations, or lines on a resume. There is little discussion of teaching, although, but for the teaching function, we would not have jobs. The lack of discussion, I think, can be traced to the absence of any reliable way to measure teaching effectiveness. It is a credit (although not much of one) to contributors and commentators on Moneylaw that no one has suggested that student evaluations are measures of teaching effectiveness. It is common knowledge that high evaluations can be the result of excellent teaching but can also be the result of easy teaching, funny teaching, weenie roasts and, I suspect, some other things that have not occurred to me.

Could this issue be addressed? I have not kept up with the literature, but I recall efforts in undergraduate math courses to determine the relationship between student evaluations and teaching effectiveness. The teachers all used the same book and gave the same exam. In one study, after controlling for as many variables as possible, there was a negative correlation between exam scores and the students’ evaluations of the teachers. When I taught economics, we did the same thing in a principles course. We taught the same chapters and gave a common exam. In our case there was no correlation, although there was a great range in how the students ranked the teachers.

Suppose a school had four contracts sections. The professors could agree on the same book and the same coverage and devise a common exam that they all agreed to grade. (The grade for the experiment could be the average of the four professors’ grades.) With a total of 400 students, you would have 400 evaluations of the teachers and 400 final exam scores. LSAT, GPA, etc., could all be factored out so the focus would be on how much the students “learned” and what the students thought of the professor.
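The design described above amounts to a partial-correlation study: remove the part of exam performance explained by credentials, then see whether what remains tracks the evaluations. A minimal sketch of that analysis, using invented data purely for illustration (no claim about what real numbers would show):

```python
# Sketch of the proposed analysis: partial out LSAT and GPA, then
# correlate the residual exam performance with teacher evaluations.
# All data below are randomly generated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 400  # 100 students in each of four hypothetical sections

lsat = rng.normal(155, 8, n)
gpa = rng.normal(3.3, 0.4, n)
exam = 0.5 * lsat + 10 * gpa + rng.normal(0, 5, n)  # exam driven partly by credentials
evals = rng.uniform(1, 5, n)                        # evaluations on a 1-5 scale

# Regress exam scores on LSAT and GPA; keep the residuals, i.e. the part
# of performance that entering credentials do not explain.
X = np.column_stack([np.ones(n), lsat, gpa])
beta, *_ = np.linalg.lstsq(X, exam, rcond=None)
residuals = exam - X @ beta

# Correlate what remains with the evaluations.
r = np.corrcoef(residuals, evals)[0, 1]
print(f"partial correlation of exam residuals and evaluations: {r:.3f}")
```

A positive correlation here would suggest evaluations track learning; a near-zero or negative one would echo the studies recalled above.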

This would be terribly time-consuming, and there may be some statistical issues to iron out. Plus, I am not convinced that the exam – as opposed to what happens five years from now – is a great indicator of teaching effectiveness. Still, isn’t it about time that someone in our profession took a close look at student evaluations to determine if they tell us anything useful and, perhaps, to determine whether they are actually a disincentive to teaching effectively?