Kennesaw State University

Student Feedback on Teaching: Why Mean Ratings May Not Tell the Full Story

Dec 16, 2016 | by Thomas Pusateri | Kennesaw State University

Since 2010, KSU has been collecting student feedback on teaching using an online system. This system has saved considerably on paper and staff processing time compared to the paper forms used in the past. However, I am aware that faculty have expressed some concerns about the appropriate interpretation and use of student feedback. I would like to address one of these concerns, which is the interpretation and use of mean (average) ratings on the items on the form.

Let’s first look at the pattern of data for all courses taught at KSU over a two-year period[1] during which KSU emailed invitations for students to provide feedback on over 20,000 courses, the equivalent of over 500,000 rating forms. During this period, students completed over 175,000 of these forms, for an overall response rate of 35%.

All feedback forms included five common items, one of which was phrased in the following way: “The instructor was effective in helping me learn.” Students responded to this item on a 4-point scale:  Strongly Agree (4 points), Agree (3 points), Disagree (2 points), and Strongly Disagree (1 point); students could also indicate “No response” or leave the item blank. On this item, the mean (average) rating was 3.50 (halfway between Strongly Agree and Agree) and the distribution of responses was the following:

Strongly Agree:  64%

Agree:  26%

Disagree:  6%

Strongly Disagree: 4%

No Response:  2%

(Note:  Percentages have been rounded. The sum of percentages may not total exactly 100%.)

These data indicate that 90% of students Agreed or Strongly Agreed that their instructor was effective in helping them learn. This should allay the fears of faculty who are concerned that students will only provide feedback online if they are unhappy with the course. In addition, there was no systematic difference in the mean ratings of courses with low (<=35%) or high (>=50%) response rates[2]. For additional research that challenges common misconceptions about student ratings, I encourage you to read Benton and Ryalls (2016)[3].

Unfortunately, we don’t live in Lake Wobegon where all teachers receive above average ratings. Faculty who receive mean ratings below 3.50 are not necessarily perceived by students as ineffective teachers. To illustrate this point, let’s look at how students responded to the same item (“The instructor was effective in helping me learn.”) in three sections of the same 1000-level general education course, each of which had a large enrollment.

Professor A:  46 of 131 students responded (35% response rate)

Mean (average) = 3.50

Strongly Agree:  26 students (57%)

Agree:  17 students (37%)

Disagree:  3 students (7%)

Strongly Disagree:  0 students (0%)

No response:  0 students (0%)

Note that Professor A received an average rating of 3.50, which matches the overall average obtained across all faculty during the review period. Professor A also received a majority of Strongly Agree responses, and 94% of students Agreed or Strongly Agreed that this professor was an effective teacher. There were still a few students (7%) who gave the professor a rating of Disagree.

Let’s now look at the patterns of responses for two other professors who taught the same course, each of whom received a mean rating of 3.20.

Professor B:  16 of 83 students responded (19% response rate)

Mean (average) = 3.20

Strongly Agree:  4 students (27%)

Agree:  10 students (67%)

Disagree:  1 student (7%)

Strongly Disagree:  0 students (0%)

No response:  0 students (0%)

If we look only at the mean (average) rating of Professor A and Professor B, we may agree that Professor A is perceived by students as a “more effective” teacher, but is Professor B a “poor” teacher? I don’t think so. Notice that Professor B received the same percentage of Agree and Strongly Agree ratings (94%) as Professor A; it just happened that Professor A received a larger percentage of Strongly Agree responses than Professor B. Professor B is perceived by the overwhelming majority of students as an effective teacher, but there is certainly room for Professor B to develop over time into an even more effective teacher. And Professor A may have room to improve as well. 

Here's the pattern of responses for Professor C:

Professor C:  53 of 124 students responded (43% response rate)

Mean (average) = 3.20

Strongly Agree:  23 students (44%)

Agree:  19 students (37%)

Disagree:  3 students (6%)

Strongly Disagree:  5 students (10%)

No response:  2 students (4%)

Notice that both Professors B and C received a mean rating of 3.20, but the patterns of responses that contributed to those averages are noticeably different. The plurality of students (44%) Strongly Agreed that Professor C was an effective teacher, but there was a substantial minority (16%) who Disagreed or Strongly Disagreed with this statement, and a lower percentage (81%) of students Agreed or Strongly Agreed that Professor C was an effective teacher compared to Professor B.
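To see how the same mean can arise from two quite different distributions, here is a minimal Python sketch using the counts copied from the tables above. It assumes, as the reported averages imply, that "No response" entries are excluded when the mean is computed:

```python
# Rating counts for Professors B and C, copied from the tables above.
# Each pair is (scale value, number of students); "No response" entries
# are excluded, which reproduces the reported means of 3.20.
prof_b = [(4, 4), (3, 10), (2, 1), (1, 0)]   # Strongly Agree .. Strongly Disagree
prof_c = [(4, 23), (3, 19), (2, 3), (1, 5)]

def mean_rating(counts):
    """Weighted mean of the 4-point scale over all students who chose a rating."""
    n = sum(k for _, k in counts)
    return sum(score * k for score, k in counts) / n

print(round(mean_rating(prof_b), 2))  # 3.2
print(round(mean_rating(prof_c), 2))  # 3.2
```

The two calls print the same mean even though Professor C drew five Strongly Disagree responses and Professor B drew none, which is exactly why the distribution, not just the average, is worth examining.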

So, who’s the more effective teacher in this course, Professor B or Professor C? That’s not the way I would phrase the question. I perceive both teachers as “effective,” but both have the potential to be even more effective. My question for both professors would be the following: “How might I use student feedback from this course to become an even more effective teacher?”

These professors might benefit from examining a report that all faculty can generate by following the instructions starting on Page 11 of the manual available at the following link:  http://digitalmeasures.kennesaw.edu/course-response/track-responses.php.  By following these instructions, the professors will receive a report in the form of an Excel worksheet where they can view the unique (but still anonymous) pattern of each student’s responses, one student per row in the worksheet. 

This report can be particularly useful for examining patterns of student comments. For example, Professor C could sort the data in this worksheet to compare the comments of those students who Agreed/Strongly Agreed to those students who Disagreed/Strongly Disagreed with the “instructor effectiveness” item. This might help Professor C consider strategies for addressing specific concerns expressed by students who gave low ratings. 
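The same sort-and-compare step could also be done outside of Excel. Here is a rough sketch in Python; the column names and sample rows are hypothetical, invented for illustration, since the actual report's layout may differ:

```python
# Hypothetical sketch of Professor C's comparison: split student comments
# by their rating on the "instructor effectiveness" item. The keys
# "effectiveness" and "comment" and the sample rows below are made up;
# the real worksheet's columns may be named differently.
rows = [
    {"effectiveness": "Strongly Agree", "comment": "Examples in class were helpful."},
    {"effectiveness": "Disagree", "comment": "The professor is disorganized."},
    {"effectiveness": "Agree", "comment": "Fair exams."},
    {"effectiveness": "Strongly Disagree", "comment": "Lectures were hard to follow."},
]

agreed = [r["comment"] for r in rows
          if r["effectiveness"] in ("Agree", "Strongly Agree")]
disagreed = [r["comment"] for r in rows
             if r["effectiveness"] in ("Disagree", "Strongly Disagree")]

# Reading the two lists side by side highlights the concerns specific to
# students who gave low ratings.
print(len(agreed), len(disagreed))  # 2 2
```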

Professors A and B may also benefit from viewing the information in this report. For example, a professor might consider a comment like “The professor is disorganized” differently if the student who made the comment rated the professor Strongly Agree (as opposed to Strongly Disagree) on the “instructor effectiveness” item. For additional suggestions on how to interpret student comments, I highly recommend the online resources available from Syracuse University’s Office of Institutional Research & Assessment[4].

Of course, the feedback from only one course can tell only a part of the story. We would get a more complete picture of each professor if we examined data across classes and semesters to determine whether the pattern of responses is consistent across time, across sections, and across courses taught by the professor. However, I hope these examples persuasively demonstrate that mean ratings might not tell the full story. Examining the patterns of responses may provide more diagnostic information about a teacher’s effectiveness.

If you are uncertain how to generate or interpret the report described above, feel free to contact me for a confidential consultation.

[1] Spring 2012 through Spring 2014.  For more information, read the report available at http://digitalmeasures.kennesaw.edu/course-response/reports.php

[2] For more information about this finding, refer to the report in Footnote 1.

[3] Benton, S. L., & Ryalls, K. R. (2016). Challenging misconceptions about student ratings of instruction. IDEA Paper #58. Manhattan, KS: Kansas State University, Center for Faculty Evaluation and Development. Retrieved from: http://www.ideaedu.org/Portals/0/Uploads/Documents/IDEA%20Papers/IDEA%20Papers/PaperIDEA_58.pdf

[4] Interpreting and Using Student Ratings of Teaching Effectiveness: http://oira.syr.edu/wp-content/uploads/2014/10/Interpret.pdf and Student Ratings of Teaching Effectiveness: Creating an Action Plan: http://oira.syr.edu/wp-content/uploads/2014/12/Action.pdf