There is little dispute that information about the academic gains students make (or don’t) is a valuable addition to pure student proficiency data. But there is little agreement about how best to calculate growth and how to use it to inform things like teacher evaluations and school rating systems. The latter is important – especially now in Ohio. While many local educators believe Governor Kasich’s plan to overhaul how Ohio’s districts are graded gives too little weight to academic progress (and too much to achievement), the truth is that the limits of our current value-added system indicate that the governor’s formula is just right, for now.
Under the governor’s initial version of Senate Bill 316, Ohio would move to an A to F school rating system with ratings calculated based on four factors: 1) student achievement on state tests and graduation rates, 2) a school performance index based on state test results, 3) student academic progress, and 4) the performance of student subgroups.
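To make the weighting debate concrete, here is a minimal sketch of how such a composite letter grade might be computed, assuming the four factors are weighted equally at 25 percent each with standard 90/80/70/60 cutoffs; the bill’s actual weights and cutoffs may differ, and the function below is purely illustrative:

```python
# Illustrative only: assumes four equally weighted factors and
# standard 90/80/70/60 letter cutoffs; SB 316's actual formula may differ.

LETTER_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def composite_rating(achievement, performance_index, progress, subgroups):
    """Each argument is a 0-100 score for one of the four factors."""
    score = 0.25 * (achievement + performance_index + progress + subgroups)
    for cutoff, letter in LETTER_CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

# Example: strong achievement but weak growth still averages out to a B.
print(composite_rating(92, 88, 65, 80))  # 81.25 -> "B"
```

Under equal weighting, a school’s grade shifts only when a factor moves enough to budge the 25-percent-weighted average – which is exactly why the weight assigned to growth matters so much.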
Matt Cohen, chief researcher for the state education department, testified last month to the Senate education committee that feedback from the field indicates educators want growth (aka “value-added” in Ohio) to count for more than 25 percent. Bill Sims, CEO of the Ohio Alliance for Public Charter Schools, suggested that value-added data account for half of a school’s rating – or that ratings be “bumped up” one level if a school exceeds the state’s value-added expectations. Columbus City Schools Superintendent Gene Harris made a similar suggestion during her testimony.
But considering how few students the state actually has value-added data for, counting growth as half of a district’s rating, or even an individual school’s rating, goes too far.
Ohio’s current value-added system measures student progress only in grades four through eight, and only in English language arts and mathematics. Just 36 percent of Ohio public school students are enrolled in grades four through eight – meaning the state has value-added data for, at best, a bit more than one-third of all students. Student mobility among school districts further impedes the ability to calculate gains; it’s quite plausible that the state has growth data for even fewer than one-third of students. And Ohio has no value-added data at all for science, social studies, writing, or any other subject. (The state does have achievement test data for more than half of all students, including in science and social studies.)
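A rough back-of-the-envelope calculation shows why coverage likely falls below one-third. The 36 percent enrollment figure comes from above; the mobility discount below is a hypothetical assumption, used only for illustration:

```python
# The 36 percent share is from the text; the 15 percent mobility loss
# is a hypothetical assumption, not a reported statistic.
grades_4_8_share = 0.36          # share of students in value-added grades
assumed_mobility_loss = 0.15     # hypothetical share lost to inter-district moves

covered = grades_4_8_share * (1 - assumed_mobility_loss)
print(f"Estimated share with usable growth data: {covered:.0%}")  # ~31%
```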
The progress of roughly one-third of students in two subjects shouldn’t make up half of a district’s academic rating; counting it for 25 percent of the overall grade sounds about right. Down the road, however, a fair argument could be made for weighting growth more heavily as value-added measurement expands to new grades and subjects.
After the transition to the Common Core academic standards and tests in 2014, Ohio should be able to calculate value-added data for high school students. And as the collection of education data continues to improve, we ought to be able to calculate gains for even the most highly mobile of students.
Ohio could also consider other ways to use growth to inform ratings. Florida (which has growth data through tenth grade), for example, weights growth as half of a school’s rating, but not in the simple fashion Ohio educators are suggesting. One-quarter of a Sunshine State school’s rating is based on overall student progress, and one-quarter is based on the progress made by the bottom 25 percent of students – meaning even the highest-performing schools can’t afford to ignore their lowest performers. This approach makes limited progress data more meaningful.
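Here is a minimal sketch of that Florida-style split, assuming each student record carries a prior score and a gain on a common 0–100 scale (the data shapes and function name are illustrative assumptions, not Florida’s actual formula):

```python
# Sketch of the Florida-style split: half of the growth weight comes from
# all students' gains, half from the gains of the bottom quartile (ranked
# by prior score). Scales and data shapes are illustrative assumptions.

def florida_style_growth_component(students):
    """students: list of (prior_score, gain) tuples, gains on a 0-100 scale.
    Returns growth's contribution to the overall rating (max 50 points)."""
    ranked = sorted(students, key=lambda s: s[0])   # lowest prior scores first
    bottom = ranked[: max(1, len(ranked) // 4)]     # bottom 25 percent
    overall_gain = sum(g for _, g in students) / len(students)
    bottom_gain = sum(g for _, g in bottom) / len(bottom)
    return 0.25 * overall_gain + 0.25 * bottom_gain  # two quarter-weights

# Two schools with identical average gains diverge when one lets its
# lowest performers stagnate.
even = [(30, 50), (50, 50), (70, 50), (90, 50)]
skewed = [(30, 10), (50, 40), (70, 70), (90, 80)]
print(florida_style_growth_component(even))    # 25.0
print(florida_style_growth_component(skewed))  # 15.0
```

Because the bottom quartile counts twice – once in the overall average and once on its own – a school can’t raise this component without actually moving its lowest performers.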
Student academic progress is important, and Ohio has been a leader in calculating and reporting progress data. But our growth measure, as it looks today, isn’t of the scope and scale needed to account for half of a district or school rating.