Anyone who has ever been a college student is probably familiar with the once-a-semester ritual of filling out student evaluations for the professors teaching their classes. As a student, I was typical in not taking them very seriously. When I did make an effort to do more than merely fill in the right bubble on the scan form, it was either because I had a beef with the professor or because I really, really enjoyed the class.
As a college professor receiving student evaluations, I’ve discovered that most students behave similarly. One college I taught at switched from a paper-and-pencil evaluation form filled out during class to an online system where students were asked to evaluate their courses by logging onto a web site outside of class. Not only did the number of students who evaluated their courses drop significantly, but the results also skewed toward both extremes. Students who really liked or really hated a class were likely to fill out the evaluation; not many others bothered.
So it comes as little surprise to me that student evaluations turn out not to be very effective methods of evaluating the quality of education students are receiving. A study by Scott E. Carrell of UC Davis and James E. West of the USAF Academy, entitled “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors,” examines exactly this question.
The researchers studied students at the Air Force Academy, where students are randomly assigned to professors for introductory courses such as calculus. They compared student ratings of the professors with the grades earned in those courses, and then tracked the same students through subsequent follow-on courses.
As the authors report: “Results show that there are statistically significant and sizable differences in student achievement across introductory course professors in both contemporaneous and follow-on course achievement. However, our results indicate that professors who excel at promoting contemporaneous student achievement, on average, harm the subsequent performance of their students in more advanced classes. Academic rank, teaching experience, and terminal degree status of professors are negatively correlated with contemporaneous value added, but positively correlated with follow-on course value-added. Hence, students of less experienced instructors who do not possess a Ph.D. perform significantly better in the contemporaneous course, but perform worse in the follow-on related curriculum.”
In other words, there was a significant correlation between students rating their professors very highly and those same students struggling in follow-up courses. The professors who better prepared their students for future challenges were rated more poorly. This certainly raises the question of whether student evaluations can be considered reliable assessments of teaching quality.
This really comes as little surprise to me. We tend to enjoy studying things we do well in, and we’re inclined to be more positive about something that’s fun. Savvy teachers use this by trying to make their lessons fun. Still, there’s a line that can easily get crossed. Instead of helping students enjoy the material by achieving their academic goals, sometimes we teachers end up simply dumbing things down and lowering our standards to make the course more enjoyable. Sure, the students will feel the satisfaction of completing academic goals, but the target has been moved so close that they aren’t prepared for the next class (or the real world), which doesn’t provide such a gentle safety net for failure.