There’s chronic and growing disenchantment with the quality of university-based teacher education programs and their ability to adequately prepare the nation’s teachers. The discontent reached new heights with Arthur Levine’s groundbreaking 2006 report, which found that “current teacher education programs are largely ill-equipped to prepare current and future teachers for new realities.” Five years later, Cory Koedel conducted an eye-opening study on grade inflation, which found that “students who take education classes at universities receive significantly higher grades than students who take classes in every other academic discipline.”
Enough, said the National Council on Teacher Quality (NCTQ), which decided around the same time to cast much-needed light on the caliber of teacher training in American universities. After a comprehensive analysis of over 1,000 programs, including in-depth reviews of university syllabi and other programmatic materials, it issued the first edition of its highly visible and contentious Teacher Prep Review in 2013, ranking teacher education programs.
This new CALDER study, conducted by Dan Goldhaber and the aforementioned Cory Koedel, examines whether teacher education programs were responsive to these publicly released evaluation ratings. Specifically, would they respond to an “information experiment” designed to change their practices in ways that would, in turn, raise their public rating? The study first investigates whether teacher-ed programs changed in response to their ratings, and second, whether a customized “nudge” explaining how a program could raise its particular rating in the future would actually prompt it to act.
The study focuses on elementary education programs with published ratings from 2013 through 2016. On the descriptive front, the authors find that ratings appear to be linked to program characteristics: private institutions tend to be rated lower, for example, while institutions with higher tuition and entrance exam scores tend to be rated higher. They also find that, over those years, ratings improved for 26 percent of programs, declined for 14 percent, and stayed the same for 61 percent.
Now for the experiment. The researchers assigned each program a specific recommendation that would boost its particular rating. For instance, they recommended that programs raise the minimum grade point average for admission to 3.0, or observe and provide written feedback to student teachers at least five times, both of which are positively rated in the Teacher Prep metric and would boost scores. These recommendations were intended to be “low-hanging fruit,” doable in fairly short order, as opposed, for example, to revamping a program’s academic curriculum, which would take longer. The analysts then randomly assigned half of the programs within each recommendation group (i.e., those receiving the same recommendation) to the treatment condition, in which the program administrator and university president received a customized letter via email explaining the recommendation and how acting on it would improve the program’s rating; a minimal sketch of that randomization scheme appears below. The letters were emailed during the last week of July 2013, close to when the inaugural program ratings would appear in U.S. News & World Report.
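To make the design concrete, here is a minimal sketch of that within-group randomization in Python. The program names, recommendation labels, and data layout are hypothetical illustrations, not details drawn from the study:

```python
import random

# Hypothetical roster: each program is tagged with the single
# recommendation it was assigned (labels are illustrative).
programs = [
    {"name": "Program A", "recommendation": "raise_min_gpa_to_3.0"},
    {"name": "Program B", "recommendation": "raise_min_gpa_to_3.0"},
    {"name": "Program C", "recommendation": "observe_student_teachers_5x"},
    {"name": "Program D", "recommendation": "observe_student_teachers_5x"},
]

def assign_treatment(programs, seed=2013):
    """Randomly assign half of each recommendation group to treatment.

    Randomization happens *within* each recommendation group, so
    treated and control programs received the same recommendation on
    paper; only the treated ones got the customized letter.
    """
    rng = random.Random(seed)
    groups = {}
    for p in programs:
        groups.setdefault(p["recommendation"], []).append(p)

    for group in groups.values():
        rng.shuffle(group)
        half = len(group) // 2
        for i, p in enumerate(group):
            p["treated"] = i < half  # first half treated, rest control
    return programs

for p in assign_treatment(programs):
    print(p["name"], p["recommendation"],
          "treated" if p["treated"] else "control")
```

The key design choice is the stratification: because treatment and control programs within a group share the same recommendation, any difference in later ratings can be attributed to the letter itself.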
The key finding was that treated programs actually had slightly lower ratings from 2013 to 2016 than those in the control group: a decrease of 0.13–0.15 rating points, about 22 percent of a standard deviation. The authors try to make sense of this head-scratching result. They hypothesize that their recommendations may not have been so feasible after all: raising the minimum GPA could mean losing students, and as of 2013 just 9.4 percent of undergraduate programs had a 3.0 minimum. They also discuss the hostility toward the ratings within the larger teacher education community and posit that their extra “touch” may have inflamed existing animosity. Finally, they suggest that the experiment may have been initiated too early, since prior research has shown that “nudge interventions” are quite sensitive to timing. Regardless, it is a curious finding, since the broader literature shows that post-secondary institutions are quite responsive to public rankings such as the annual college rankings by Barron’s and U.S. News & World Report.
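For scale, a quick back-of-the-envelope check on that effect size (the 0.14 midpoint and the implied standard deviation below are inferred from the figures quoted above, not taken from the paper’s tables):

```python
# Illustrative arithmetic only: infer the SD of ratings implied by
# the reported effect (a 0.13-0.15 point drop ~ 22 percent of an SD).
effect_points = 0.14   # midpoint of the reported 0.13-0.15 decrease
share_of_sd = 0.22     # "about 22 percent of a standard deviation"

implied_sd = effect_points / share_of_sd
print(f"Implied SD of program ratings: {implied_sd:.2f} points")  # ~0.64
```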
But let’s not forget the silver lining from the descriptive part of the analysis: about a quarter of the programs improved their ratings after the report was released. Cue NCTQ President Kate Walsh: “While we cannot definitively assert that we caused these improvements, we think it is highly likely that the Teacher Prep Review played a substantial role in moving the ball yards—not inches—toward the goal.”
We couldn’t agree more.
SOURCE: Dan Goldhaber and Cory Koedel, “Public Accountability and Nudges: The Effect of an Information Intervention on the Responsiveness of Teacher Education Programs to External Ratings,” CALDER (March 2018).