Citing insurmountable data challenges, the authors of the Council of the Great City Schools' most recent evaluation of the School Improvement Grant (SIG) program argue that policymakers are left "without a clear and unambiguous picture of whether this major investment in turning around the nation's lowest-performing schools worked as intended." The view may be opaque, but what we can see isn't pretty.
According to the report, between the 2009–2010 and 2012–2013 school years, SIG grantees at the elementary and middle school levels saw a cumulative increase in proficiency of only a few percentage points in most grades and subjects relative to comparison groups. That is a disappointing result, considering that some schools saw per-student funding increases of as much as 58 percent under the program. And while SIG's restart and closure models were used so infrequently that little can be said about their effectiveness, the report finds no statistically significant differences between the rates of improvement at transformation schools and turnaround schools. That finding suggests it doesn't much matter which one-size-fits-all improvement model the federal government prescribes; implementation is what counts.
Unfortunately, SIG's implementation was deeply troubled, as the report's authors document through roughly fifty interviews with superintendents, program directors, principals, and teachers. Unsurprisingly, SIG grantees experienced difficulties with "the removal and recruitment of staff, community and union resistance to school changes or closures, the ability to secure and retain sufficient resources to launch and sustain the turnaround efforts, and conflicting demands from various stakeholders." Governance, too, was (as always!) a barrier to top-down reform: incoherent state and district initiatives caused confusion and frustration, compounded by the rushed timeline and the abrupt rise and fall of programmatic funding.
So, is the idea of school turnaround dead and buried? Not quite. Unmentioned in the report is the (criminally underreported) fact that, because state data systems were still being built during SIG's planning phase, thirty-nine states used student achievement, rather than student growth, to identify eligible schools. This means that (much as with NCLB) the schools SIG identified as "consistently low-performing" may not have been those most in need of reform. Moreover, the report was unable to examine student growth under SIG, so grantees may have made more progress than proficiency rates alone can indicate.
A more interesting report might have compared the results from these thirty-nine states to those of the eleven states that used student growth in the identification process. Until that report is written, however, defenders of "school improvement" will still have a (rather shaky) leg upon which to stand.
SOURCE: "School Improvement Grants: Progress Report from America's Great City Schools," Council of the Great City Schools (February 2015).