Senate Document Number   0310F


Date of Senate Approval   09/09/10


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Statement of Faculty Senate Action:


Sense of the Senate Resolution


The Faculty Senate adopts FWDC's response to the Senate mandate regarding electronic student rating of instruction.




FWDC 2:   Response to the Faculty Senate mandate regarding electronic student rating of instruction




In the March 2010 Faculty Senate meeting, the FWDC was charged to address these issues regarding Student Rating of Instruction:


A.  The percentage that defines a low response rate.

B.  The procedure that an instructor uses to choose the courses that are to be evaluated.

C.  An incentive system that encourages students to fill out the electronic evaluations.


FWDC will present this at the first Senate meeting in the 2010/11 academic year.




A.  Because we are implementing a new technology for the student rating of faculty, it is difficult to determine a minimum or low response rate. The majority of available data suggests that there will be an initial, significant decline in response percentage. With paper surveys, the average response rate for UNC-Asheville classes was 83.8% in the 2008–09 academic year. There is evidence to suggest that we may initially see that response rate cut in half. According to the April 7, 2010 issue of The Boston Globe, for example, Northeastern University experienced a steep decline in response rate in the first year of electronic SRI implementation, falling from 80% to 54%. However, there is also evidence to suggest that the percentage will rise over time and likely reach (or even eclipse) the paper rate. Harvard University, for example, began using an electronic rating instrument in 2005 and, after providing the incentive of students receiving their grades early, has seen its response rate rise to 96%, as opposed to the 65% rate earlier in the process. Given the evidence available, the FWDC recommends considering a response rate in the range of 40% to 60% as low and, in addition, notes that department chairs and program directors, in consultation with their faculty, should have the final say in determining a class evaluation's validity with respect to response rate. Such consultation is especially vital in the case of adjuncts, lecturers, and untenured assistant professors, whose retention depends significantly on their teaching effectiveness. It is also worth noting that other studies have shown that although response rate percentages may initially decline, the deviation from the average scores an instructor received via paper evaluation was statistically insignificant, with one study noting a standard deviation of only .09 on a scale of 1–5.


B.  The FWDC notes that the default setting is that all of an instructor's courses will be listed for evaluation, and that Chairs should actively select courses for evaluation, per our current practice. However, the FWDC suggests that the change in format (paper to electronic) does not call for a change in policy.


C.  Research has shown that incentivizing electronic surveys can substantially boost response rates. The FWDC suggests that any or all of the following be employed:


- Allow at least a two-week window of time for the evaluations to be completed.

- Send reminder emails to students who have yet to complete the survey.

- Give students who have completed the course evaluation immediate access to their grades on OnePort once grades have been posted.

- Encourage faculty to take class time to explain both the change from paper to paperless evaluation and the importance of evaluations in general.

- In the case of junior faculty, or those up for PTR or promotion, consider booking space in a computer lab or computer-integrated classroom to facilitate a greater response rate.