6.7
Advisory Committee Policy Regarding Teaching Evaluations 2019-2020

[Approved by Academic Council May 22, 2019]

Advisory has discussed in detail how to consider the new-form teaching evaluations in tenure/promotion cases in a consistent manner. We can provide an interim guideline for summarizing these evaluation data and for mentoring junior faculty.

As always, Advisory reads every evaluation, with special attention to the student comments. We look for patterns in these comments; a single negative evaluation, or a small cluster of negative evaluations within a particular course offering, often receives little attention.

With regard to numerical data, it is not possible to compare the old and new forms directly, because there is no direct correspondence between the old 4 categories and the new 9-point response scale. The new forms spread the answers along a continuum, so they can show the distribution in a more nuanced way. Here is how Advisory has decided to evaluate the numbers. First, we limit our attention primarily to 2 questions:

3. Please rate the overall quality of the course
4. Please rate the overall quality of the teaching

Second, we look at the numerical data for these 2 questions in the following ways:

a. We consider the total proportion of student responses that are ≥ 6 (i.e., 6, 7, 8, or 9).
This number conveys excellence in teaching in a manner similar to the old metric of "total percentage in the top 2 categories (good and outstanding)".

b. We also consider the total proportion of student responses that are ≥ 7 (i.e., 7, 8, or 9).
This number conveys information similar to the old percentage outstanding, and we use it to distinguish exceptionally strong teaching. We do not consider it useful to distinguish further (for instance, ≥ 8, or 9 alone), because our impression from close reading of the evaluations is that individual students may interpret the 9-point scale somewhat differently, so we are reluctant to over-interpret finer gradations.
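The two summary proportions described in (a) and (b) above amount to a simple calculation over a course offering's responses. The following sketch is purely illustrative and not part of the policy; the function name and sample data are hypothetical.

```python
# Illustrative sketch: computing the summary proportions in (a) and (b)
# from one course offering's responses on the 9-point scale.

def proportion_at_or_above(responses, threshold):
    """Fraction of responses at or above `threshold` on the 9-point scale."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r >= threshold) / len(responses)

# Hypothetical responses to "Please rate the overall quality of the teaching"
responses = [9, 8, 8, 7, 6, 6, 5, 9, 7, 4]

prop_6_up = proportion_at_or_above(responses, 6)  # (a): rough analogue of "good + outstanding"
prop_7_up = proportion_at_or_above(responses, 7)  # (b): rough analogue of "outstanding"
print(prop_6_up, prop_7_up)  # → 0.8 0.6
```

As the policy notes, these numbers are read in context and tracked for trends, not tested against a fixed threshold.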

It is important to note that there is no universal threshold used by Advisory for a satisfactory proportion of responses under (a) and (b). Advisory is well aware that student ratings for courses may vary depending on class size, division, and whether a course is required or elective, and numerical data are considered in context. Rather than testing a faculty member's summary ratings against a rigid threshold, Advisory examines patterns of student response as articulated in individual comments, and assesses trends in the numerical results (for instance, by comparing a faculty member's later vs. earlier offerings of a course, or a candidate's overall ratings for Quality of Teaching before and after reappointment or tenure).

Faculty who wish to know how their evaluations compare with others in their department and/or division may ask their Chair to request the departmental and divisional statistics (means and medians) for Quality of Teaching and Quality of Courses from the Office of Institutional Research.