"Value-added" measures of academic progress are undeniably important, and we've cheered their addition to school assessment and accountability systems. At the school level, knowledge about how much progress a child is making can help teachers and school leaders make smart decisions about instruction. From an accountability perspective, value-added goes beyond pass rates and shows policymakers and community members whether a school is making a higher than expected amount of academic progress each year (or, conversely, whether it isn't moving students forward much academically from year to year).
This is all good, to be sure, but what if the value-added system is flawed?
Cleveland State University's Douglas Clay explores this question in a guest editorial in this week's Ohio Education Gadfly and presents his two major concerns with Ohio's system:
- The tests aren't properly aligned, meaning that, for example, the fifth-grade math test is harder for the average fifth grader than the sixth-grade math test is for the average sixth grader. Thus, value-added results in those grades and subjects may not accurately reflect how much students actually learned. (Our 2007 report The Proficiency Illusion examined this misalignment in most state math and reading tests and supports Clay's assertion.)
- The state uses the baseline year's test results to gauge current progress, leading to a Lake Wobegon effect wherein 98 percent of students can attain "above expected" progress in one year (or, just the opposite, 80 percent of students can progress "below average" annually). Clay proposes adopting a simple z-score analysis based on the current year's test results, which would yield a normal (bell curve) distribution of results; a rough sketch of the calculation follows this list.
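For readers curious what Clay's proposal amounts to in practice, here is a minimal sketch of a z-score calculation in Python. The scores below are hypothetical, and the editorial itself doesn't specify any implementation; this is just the standard standardization formula applied to current-year results.

```python
# Minimal sketch of the z-score analysis Clay proposes: standardize each
# student's current-year score against the current-year mean and standard
# deviation. The scores here are invented for illustration only.

from statistics import mean, stdev

# Hypothetical current-year scale scores for one grade and subject.
current_year_scores = [612, 587, 640, 598, 625, 570, 633, 605]

avg = mean(current_year_scores)
sd = stdev(current_year_scores)

# z = (score - mean) / standard deviation
z_scores = [(score - avg) / sd for score in current_year_scores]

for score, z in zip(current_year_scores, z_scores):
    label = "above average" if z > 0 else "below average"
    print(f"score {score}: z = {z:+.2f} ({label})")
```

Because z-scores are centered on the current year's own mean, roughly half of students will always land above it and half below, which is precisely what rules out the Lake Wobegon scenario in which nearly everyone appears "above expected."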
Another concern, which Clay doesn't address in this particular editorial, is the NCLB requirement that schools cannot administer "out of grade" tests to students. While tests are designed with "stretch" to measure the growth of both low- and high-achieving students, they still have a floor and a ceiling. A student who reads several years above grade level won't likely demonstrate a year (let alone more) of growth on tests that are based on her grade and far below her ability level. Likewise, a child who is several years behind his same-age peers may be progressing rapidly, but this progress may not show up on a test aimed far above his ability range.
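To make the ceiling problem concrete, here is a tiny hypothetical illustration (the scores and the ceiling value are invented for the example; no real test scale is implied):

```python
# Sketch of a test ceiling masking real growth (hypothetical numbers).
# Suppose a grade-level test cannot report scale scores above 700.

CEILING = 700  # hypothetical maximum reportable score

def measured(true_score):
    """The score the test actually reports, truncated at the ceiling."""
    return min(true_score, CEILING)

# A high achiever grows from 720 to 760 in true ability, but the test
# reports 700 both years, so measured growth is zero.
last_year, this_year = 720, 760
growth = measured(this_year) - measured(last_year)
print(f"true growth: {this_year - last_year}, measured growth: {growth}")
```

The same truncation happens in reverse at the floor for students working far below grade level.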
For all these reasons, Clay is worried about how administrators, teachers, and policymakers use this flawed data:
The real danger is that these results are used for high stakes accountability decisions. Schools with falling achievement scores are spared consequences because their value-added data shows them making strong yearly gains. Conversely, schools with low gains on their value-added data can be penalized because this data show them making minimal yearly gains even though they may have strong overall achievement scores. Certainly some of the results are accurate, but there are surely some "false positives" and "false negatives" in the school classifications. These false positives and negatives could lead decision makers to erroneous conclusions about school performance.
Ohio's State Board of Education has long pushed for changes to how value-added is calculated and utilized here, to little avail. Let's hope they keep pushing. Meanwhile, some of these problems, like the requirement to give grade-level tests, could be addressed for all states through the reauthorization of ESEA.