Using IDEA: Reflections after five years

May 7, 2015 / By Dr. Ann Johnson, Director of the Center for Faculty Development

We've been using the IDEA system for managing the student feedback process for five years now, and the Committee on Teaching Evaluation is analyzing a recent faculty survey gathering your impressions. I've been keeping an eye on expert recommendations regarding the use of student feedback and have pulled together some observations and advice, hoping they might help inform ongoing conversations about uses and misuses of IDEA.

What’s in a name?

Specialists in the field of Faculty Development are not fully aligned on what systematic student feedback should be called – Student Evaluation of Teaching? Student Satisfaction Ratings? Student Ratings of Instruction? I prefer the last of these and use SRI as shorthand, following the practice of our IDEA colleagues and others. Why? Students are not in a position to evaluate teaching, but they can provide ratings based on their experiences in the classroom.

The importance of keeping a focus on student learning

The faculty selected IDEA five years ago in part because of its emphasis on making student learning front and center when evaluating success in the classroom. Student perception of progress toward learning goals makes up 50% of the overall score for each course. However, emphasis on learning goals is not uniform across our seven schools and colleges. Some require faculty to report the overall score (or some version of it), while others de-emphasize learning goals in favor of the “excellent course” and “excellent teacher” ratings. This sends a mixed message to faculty and ignores the strongest feature of the IDEA system – its emphasis on helping faculty identify and strengthen student learning.

Comparing faculty

Respected scholar Nira Hativa, who has conducted and synthesized research on SRIs, points out that mean ratings vary substantially across disciplines, so disciplinary comparisons should not be made when using SRIs for evaluative purposes (Hativa, 2013, pp. 79-81). This issue won’t arise in our smaller schools/colleges (e.g., Social Work, Engineering), but it may be relevant for the larger units.

How much should SRIs be weighted?

Our Faculty Handbook specifies that IDEA data should count for 30 to 50% of the overall teaching evaluation, with individual units deciding on the weighting within those parameters. Most units use 50% or something close to it. Is this reasonable? If you have non-IDEA data that is strong and reliable (experts recommend a portfolio of teaching materials as well as peer observation data), you can reduce reliance on SRI data to something closer to 30%. Within Faculty Development, we are currently working on strategies for strengthening peer evaluation of teaching practices. As for teaching materials, several units allow a teaching portfolio, but its role in the overall evaluation is often vague. For faculty unhappy with the weight given to SRI data, strengthening these two other avenues for evaluation is the way to go.
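To make the weighting arithmetic concrete, here is a minimal sketch (in Python) of how a composite teaching evaluation might combine these sources of evidence. The component names, scores, and scales below are hypothetical illustrations only; the Handbook sets just the 30–50% range for SRI data, and each unit decides everything else.

```python
# Hypothetical composite teaching evaluation combining three sources of evidence.
# Scores are illustrative (imagine each normalized to a 0-5 scale); actual
# components, scales, and weights are determined by each unit.
components = {
    "sri": 4.1,        # IDEA/SRI summary score
    "peer": 4.4,       # peer observation of teaching
    "portfolio": 4.3,  # teaching materials / portfolio review
}

def composite(scores, sri_weight):
    """Weight SRI data at sri_weight; split the remainder evenly across the other evidence."""
    other_weight = (1 - sri_weight) / (len(scores) - 1)
    return sum(
        (sri_weight if name == "sri" else other_weight) * value
        for name, value in scores.items()
    )

# Handbook range: SRI data counts for 30-50% of the overall teaching evaluation.
print(round(composite(components, 0.50), 2))  # heavier reliance on SRI data
print(round(composite(components, 0.30), 2))  # stronger peer/portfolio evidence, lighter SRI weight
```

The point of the sketch is simply that lowering the SRI weight only makes sense when the other two components carry real, well-documented evidence.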

Adjusted scores

I hear regularly from faculty who are unhappy about having their ratings adjusted in a downward direction. IDEA uses student motivation, student work habits, and class size to make adjustments; if you have a large class with unmotivated students who also report poor work habits, your scores will likely be adjusted upward, but the reverse conditions will send your ratings downward. It’s an attempt to “level the playing field,” and it benefits instructors teaching difficult-to-reach students. However, if highly motivated students also report strong learning gains, that means the instructor is doing well and should be allowed to use the unadjusted “raw” score. IDEA recommends “using the unadjusted score if the average progress rating is high (for example, 4.2 or higher).” As they point out: “instructors should not be penalized for having success with a class of highly motivated students with good work habits” (from the IDEA report “Using IDEA Results for Administrative Decision-Making”).
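For those who like to see the decision rule spelled out, here is a minimal illustrative sketch of that reporting guideline. The 4.2 threshold comes from IDEA’s recommendation quoted above; the function name and inputs are hypothetical and not part of the IDEA system itself.

```python
def score_to_report(raw_score, adjusted_score, avg_progress_rating, threshold=4.2):
    """Report the unadjusted ("raw") score when the class's average progress
    rating is already high, so an instructor is not penalized for succeeding
    with highly motivated students who have good work habits."""
    if avg_progress_rating >= threshold:
        return raw_score
    return adjusted_score

# Example: a motivated class reporting strong progress -> report the raw score.
print(score_to_report(raw_score=4.5, adjusted_score=4.1, avg_progress_rating=4.4))  # 4.5
```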

Within UST, some units allow faculty to choose whether they report raw or adjusted scores while others require the adjusted scores. If you are required to report the adjusted score, you can contextualize it in your year-end narrative.

Another implication for faculty: if you have a class with unmotivated students reporting poor work habits, yet these students actually report strong learning gains, then you are a rock star and should definitely highlight this state of affairs in your year-end teaching narrative. Motivation and work habits are reported at the bottom of page 2 of your feedback form.  

SRIs and experimentation

An idea that circulates among Faculty Development professionals from time to time is allowing an instructor to designate a course as “experimental” and withhold SRI data for that one course for reporting purposes. Faculty (particularly pre-tenure faculty) often describe anxiety about trying out something new in a course (a service-learning project or an innovative pedagogy): fear that it won’t go well and SRI ratings will be poor. SRIs should not be a disincentive for innovation in the classroom. Right now faculty are required to report student ratings for every class taught, but adjusting this requirement might be an option to consider if standards for IDEA implementation come under review.

SRIs and student bias

Some faculty have concerns about the possibility of gender and race/ethnicity bias producing disadvantages for women faculty and faculty of color. Research on gender bias in SRI data is complex, often reporting interaction effects, but it also confirms that women can be at a disadvantage, particularly when they are a minority in their discipline. Research on race/ethnicity bias is more recent, and some studies confirm bias, especially (again) in disciplines where faculty of color are underrepresented. Some experts discount these recent studies because they don’t (yet) report large sample sizes or replication. More research is needed, but the available evidence suggests that we ought to take the possibility seriously, continue to track the research, and listen closely to faculty experience in this area.

The IDEA forms are not designed to detect bias, but if you have concerns, use available evidence to contextualize your scores; student comments sometimes (unfortunately) reveal bias in ways more obvious than ratings. The possibility of bias deserves close attention, and chairs and deans should be aware of the subtle ways that race and gender bias can shape students' perceptions of their classroom experience.

General implications for faculty

As you prepare your year-end teaching reflection/narrative, take control of the interpretation of your IDEA data. Learn how to interpret it (we offer workshops every year). Use it to inform your own self-evaluation and plans for the following semester, and to contextualize your teaching performance for those evaluating it. Finally, as faculty governance groups consider the role of IDEA, attention to appropriate weighting of data, sensitivity to bias, and focus on rewarding innovation will ensure a system that really works for faculty and promotes excellence for all.