Sixteen states and the District of Columbia have already submitted plans to the U.S. Department of Education to meet their obligations under the Every Student Succeeds Act, and the remaining thirty-four will do likewise in September. These publicly available documents describe, among other things, how the state intends to hold its schools accountable, including, in most cases, how it will calculate annual summative school ratings.
Unfortunately, many of the first batch of plans overemphasize “status measures” that are correlated with pupil demographics and/or prior achievement. They rely mainly on test-based academic “proficiency” and (for high schools) graduation rates. While a certain amount of weight should be placed on such indicators, overweighting them in summative grading systems will cause almost all high-poverty campuses to fail for the simple, beyond-their-control reason that the pupils they enroll tend to enter school behind their more prosperous age-mates.
Growth measures, on the other hand, are more poverty-neutral gauges of school performance. They look at the trajectory of achievement over time, regardless of where students start the year. Such metrics should be the primary component of annual school ratings. What policymakers should care most about when evaluating schools is whether they’re giving their students an upward boost.
Consider Fordham’s home state of Ohio. A bill recently passed by House lawmakers and now under consideration in the Senate includes language that would place more weight on student growth measures when calculating ratings for charter-school authorizers, a key part of the state’s multifaceted effort to clean up its long-troubled charter sector. The provision requires that 60 percent of the academic portion of authorizer evaluations be based on student growth measures (a.k.a. value added), instead of 20 percent as under current policy.
The provision, however, shouldn’t apply to authorizers alone; it should be applied to Ohio’s district schools as well as to charters themselves. Figure 1 shows how each of these weighting systems (20 versus 60 percent weight on growth) is likely to play out for the Columbus school district when the state begins to assign its schools overall A–F grades in 2017–18. Columbus is Ohio’s largest school district and educates primarily low-income and/or minority students.
Figure 1: Projected summative A–F grades for Columbus district schools based on current and alternative weights
Based on my calculations, the top horizontal bar indicates that under Ohio’s current weights (i.e., 20 percent growth and 80 percent status measures), the vast majority of schools operated by Columbus will fail: a whopping 94 percent will receive a D or F rating, and none will earn an A. This is what happens to high-poverty schools when summative ratings rely too heavily on measures that correlate with pupil demographics.
What would the Columbus ratings look like if state lawmakers placed 60 percent weight on growth? The lower bar in Figure 1 shows the grade distribution under this scenario. The percentage receiving D’s or F’s falls to 76 percent, still a large majority but less than under the present calculus. Once again, no school gets an A. But, importantly, 24 percent of them receive B or C ratings, a more believable picture of school performance in the city. Because growth measures don’t sentence high-poverty schools to low grades (any school can demonstrate solid growth), it’s within the district’s control to increase the number of its decent-to-high-performing schools.
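The effect of shifting weight from status to growth can be sketched with a toy calculation. The component scores, weights, and letter-grade cutoffs below are hypothetical illustrations, not Ohio’s actual report card formula; the point is only that a school with strong growth but low proficiency fares very differently under the two weighting schemes.

```python
# Illustrative sketch of combining a growth score and a status score
# (each on a hypothetical 0-100 scale) into a summative letter grade.
# Weights and cutoffs are invented for illustration, not Ohio's formula.

def summative_grade(growth, status, growth_weight):
    """Weighted composite of growth and status, mapped to a letter grade."""
    composite = growth_weight * growth + (1 - growth_weight) * status
    # Hypothetical grade cutoffs for illustration only.
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if composite >= cutoff:
            return letter
    return "F"

# A hypothetical high-poverty school: strong growth, low proficiency.
growth_score, status_score = 85, 55

print(summative_grade(growth_score, status_score, 0.2))  # status-heavy weighting -> D
print(summative_grade(growth_score, status_score, 0.6))  # growth-heavy weighting -> C
```

Under the status-heavy weights the school's low proficiency drags its composite down to a D; under the growth-heavy weights its solid gains lift it to a C, mirroring the shift visible in Figure 1.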
* * *
School rating systems that over-rely on status measures have significant problems. Assigning virtually all high-poverty schools D’s and F’s is not helpful to parents or policymakers. Consider the perspective of a Columbus parent seeking a good school for her child: policymakers hoping to encourage families to weigh academic performance in their decisions have done them few favors by rating more than nine in ten schools a D or F. Persistently low ratings for high-poverty schools, even those in which students are making solid gains, can also demoralize educators. Such a system can lead to disinvestment in relatively effective schools while over-identifying “failing” ones, triggering invasive interventions and potentially harming schools whose students are making significant educational progress. It can also fail to flag the truly abysmal schools most in need of help (or possibly even closure). These are serious unintended consequences of a system that is biased against schools serving primarily disadvantaged pupils.
Fortunately, there is a smarter way forward: a rating system that places more emphasis on growth over time. This approach will help families everywhere, not just in Columbus or Ohio, discern school quality. It will establish more productive incentives for educators to help all their students, low and high achievers alike, make academic progress. Mindful of these advantages, states like Colorado and New Mexico are placing greater emphasis on growth. Now it’s time for the rest of the country to do the same.