Last month, the Ohio Department of Education (ODE) released the value-added achievement test data for the state's public schools. This data, from the 2006-07 school year, shows student academic growth (in math and reading, grades four through eight) over time. ODE rates buildings and districts in three categories: green (exceeds state-level growth expectations), yellow (meets state-level growth expectations), and red (does not meet state-level growth expectations). Beginning this August, with two full years of data available, the value-added measure will appear on state report cards for schools and districts, and this growth metric will affect a school's overall academic rating. Schools or districts with two consecutive years of growth that exceeds state expectations will move up a rating, while schools with three consecutive years failing to meet growth expectations will move down a rating.
In the meantime, there are at least three takeaways from the first round of data.
First, value-added data reaffirms that the ability to attain academic gains is independent of poverty or minority status. While these factors may be challenges to educating students, they are by no means excuses for failure. For example, "rural poor" districts are outpacing other types of districts in overall value-added gains, and charter schools outpace their traditional counterparts, especially in adolescent literacy. (Charter advocates are making hay of these inaugural results, as seen here.)
Second, this data raises questions about how to define "success" in a school. Some schools with high levels of overall achievement, many of them in wealthy districts, are failing to show adequate growth over time, while some lower-achieving schools are showing gobs of growth (kudos to the Columbus Public Schools, for example). Schools showing no achievement and no growth over time ought to be exposed, and schools showing high value-added gains should be studied for promising practices worth replicating elsewhere. All teachers, students, parents, and taxpayers would benefit from knowing what is happening in their schools.
Finally, it is fair to ask--as a few State Board of Education members did at this month's meeting--whether the state's achievement tests are too easy or too difficult in certain grades or subjects, and whether such discrepancies might lead to poor value-added results. For example, fully 50 percent of districts, and 30 percent of charter schools, are rated "red" in fifth-grade math. At first blush this could be cause for alarm, but an October report by Fordham argued that Ohio's math tests peak in difficulty at grade five and are unevenly calibrated across the grades. Even as schools and teachers are encouraged to reevaluate their instructional strategies in light of value-added data, state policymakers ought to take a fresh look at test cut scores and the calibration of the tests across grades.
Ohio is to be commended for its leadership in moving toward a richer view of student achievement, and it is right to go slowly here and make improvements along the way. Much will be learned in the coming years from this data that will reshape our thinking about student performance, school performance, and even the impact of poverty on student achievement. This is all for the good.