Teach For America, its coffers fattened with $50 million in federal i3 scale-up grant money, embarked upon a major expansion effort in 2010. It aimed to place 13,500 first- and second-year teachers in fifty-two regions across the country by the 2014–2015 school year—an ambitious 80 percent expansion of its teaching corps in just four years. As part of the deal, TFA contracted with Mathematica Policy Research to evaluate the expansion.
A handful of previous studies have found TFA teachers to be more effective than conventionally trained and hired teachers in math and about as effective in reading. The big question was whether putting its growth on steroids would compromise TFA’s recruitment and selection standards or its overall effectiveness.
Mathematica found little reason to be concerned about TFA losing a step. The elementary school teachers recruited in the first and second years of the i3 scale-up were “as effective as other teachers in the same high-poverty schools in teaching both reading and math.” Corps members in lower elementary grades “had a positive, statistically significant effect on student reading achievement,” but no other subgroup of TFA teachers showed a measurable impact. Of interest (mostly to TFA itself), the study found “some evidence that corps members’ satisfaction with the program declined,” perhaps a hint of growing pains.
Love ’em or hate ’em, Teach For America remains the closest thing education reform has to a household brand. Matt DiCarlo of the Shanker Institute summarized reaction to the findings well, calling it “one of those half disturbing, half amusing instances in which the results seemed to confirm the pre-existing beliefs of the beholder.” Just so. If you’re one of TFA’s legion of detractors, who already accuse the organization of being full of smartypantses with fancy degrees from prestigious colleges and a few weeks of training, you can say that the TFAers offered no improvement over comparison teachers with an average of fourteen years’ experience. If you’re a fan of TFA, you can ask where the value lies in education school, traditional certification, and all that experience if they can be matched by smartypants TFAers with fancy degrees from prestigious colleges and a few weeks of training. Another possibility, beyond the scope of Mathematica’s work and too depressing to consider, is that outcomes in high-poverty schools are so poor that it doesn’t take much to get up to speed and produce the same sad, desultory results.
Mathematica’s researchers came neither to bury nor praise TFA. The study “provides a snapshot of TFA’s effectiveness at the elementary school level in the second year of the i3 scale-up,” they conclude. TFA’s effectiveness “could either increase or decrease as the program expands further and adapts to its new, larger scale.” Opinions about the program, meanwhile, are unlikely to change.
SOURCE: Melissa A. Clark et al., “Impacts of the Teach For America Investing in Innovation Scale-Up,” Mathematica Policy Research (March 2015).