Some Resources on Student Evaluation of Teaching
Note: I inquired on two email lists -- the POD [Professional and Organizational Development Network in Higher Education] list and STLHE-L, the list of the Society for Teaching and Learning in Higher Education -- for suggestions of "an article or two which would offer a good overview of what's currently known about student evaluations." The following items were suggested; in each case I've identified the suggester, and added their comments if they made any. I've also added some items I found in a quick search on Academic Search Elite. I invite anyone to comment further on any of these -- or suggest resources I've missed -- by sending me an email at hunt@stu.ca; I'll add new items, or new comments, to those already here.
You might want to bookmark this and check back; I'm hoping that it will continue to be a work in progress, and profit from continuing advice and counsel from both those marvellous electronic communities.
Last updated 8 December 2002.
"For quick review, the most recent collections are the New Directions issues . . ." -- Mike Theall <mtheall@ysu.edu>, on POD
The most current book to hand a colleague. It provides both background and a tested, usable process for all aspects of evaluation. -- Mike Theall <mtheall@ysu.edu>, on POD
-- David Dunne, STLHE-L
-- Elaine Blakemore, STLHE-L
I like the overview by Damron, who has also scanned the literature of the last 15 years and published his (updated) opinion on the internet. -- Jon Radue, STLHE-L
This is an excellent and easy read. -- Erhan Erkut, Ph.D. [on STLHE-L]
. . . a thorough survey of the literature (as of 1987) -- Erhan Erkut, Ph.D. [on STLHE-L]
"The faculty rating experiment at the University of Washington." -- Instructional Development Centre. Queens University
-- David Jacques, on STLHE-L
Feldman (this guy does such wonderful work!) also teased apart the meaning of these global satisfaction ratings . . . Presuming that publication of peer-reviewed articles is an indication that you know your subject, Feldman found that "Teacher's knowledge of subject" was the 9th most important instructional dimension in both student achievement and ratings of satisfaction. This doesn't mean that one can sweep the street corner for warm bodies -- whether those of not-quite-dead white males, or those who may qualify under affirmative action -- and get good results in a classroom teaching something they know nothing about. This is a result from a data base in which apparently nearly all the people had some reasonable mastery of their subjects. -- Ed Nuhfer, on POD [This comment on Feldman's work was brought to my attention by Mike Chejlava, also on POD]
This article compared student scores on a standardized final among courses taught by several faculty members over several years. The findings were that even though the student evaluations of the faculty and the sizes of the lecture sections varied, neither factor had any significant effect on student scores on the standardized test. The test used was the American Chemical Society test for General Chemistry, which, while not perfect in testing learning, is the best that we have in the field. -- Michael Chejlava
Reports a study of which factors of teaching students feel are the most important. At the end of the conclusion he wrote: "It is also significant that many of the higher rated items [by the students] tend to be those which lead to the accumulation of facts and that the lowest rated ones (items z, bb, and aa) are those which approximate the problem solving situations which one finds upon entering the 'real world.' When we ask our students what helps them learn best, are we actually asking them what helps them memorize, not think?" -- Michael Chejlava
Abstract: Presents a brief review of research on student written evaluations of the teaching performance of college and university instructors. Historical background; Arguments against the use of student evaluations as a valid indicator of teaching effectiveness; Discussion of student and faculty reaction to the use of student ratings. -- Academic Search Elite [full text on line; many references]
Abstract: Responds to the comments by Richard Redding, James Friedrich, Dave Buck and J. Scott Armstrong on student ratings. Shared form of argument of the comments; Analyses of the four premises and conclusion of the reasoning underlying the argument; Belief of Armstrong and Buck on the failure of student ratings as measures of instructional quality; Problem with objective achievement measures; Lack of items that assess students' study behavior. -- Academic Search Elite
Abstract: Presents a reaction to the article 'Validity Concerns and Usefulness of Student Ratings of Instruction,' by Anthony G. Greenwald, which appeared in the November 1997 issue of the 'American Psychologist.' Alleged failure of the article to provide direct evidence on the usefulness of student ratings; Discussion on the question of the relation of teacher ratings to learning; Conclusion on the role of teacher ratings in teachers' interest in helping people learn. -- Academic Search Elite
Abstract: This paper examines the validity of student evaluation of teaching (SET) in universities. Recent research demonstrates that evaluations can be influenced by factors other than teaching ability such as student characteristics and the physical environment. In this study, it was predicted that students' perception of the lecturer would significantly predict teaching effectiveness ratings. Using an 11-item student rating scale (N = 199), a two-factor confirmatory factor model of teaching effectiveness was specified and estimated using LISREL8; the factors were 'lecturer ability' and 'module attributes'. This initial model was extended to include a factor relating to the students' ratings of the lecturer's charisma. The model was an acceptable description of the data. The charisma factor explained 69% and 37% of the variation in the 'lecturer ability' and 'module attributes' factors respectively. These findings suggest that student ratings do not wholly reflect actual teaching effectiveness. It is argued that a central trait exists which influences a student's evaluation of the lecturer. -- Academic Search Elite; abstract from author
Abstract: Presents information on a study which examined the reliability of teacher education students' evaluations of faculty teaching effectiveness. Methodology; Results of the study; Discussion. -- Academic Search Elite; full text on line
Abstract: The literature abounds with psychometric studies of course evaluation measures and articles debating the merits of student ratings of instruction, but little research has focused on faculty perceptions of this procedure. In the present study faculty perceptions are explored at a teachers' college where evaluation is carried out annually on a sample of courses. The sample includes 101 instructors who completed the research questionnaire. Faculty attitudes reflected a broad range of responses towards validity of student ratings, and their usefulness for improving instruction. Although overall attitudes were mildly positive, few instructors reported changing instruction as a result of student ratings. Moreover, few supported sending evaluation results directly to college administrators or publishing them for student consumption. -- Academic Search Elite; abstract from author
Includes many useful references.