Over a year ago, when Secretary Spellings invited all states to apply for a new pilot program to use growth models in their accountability systems, she included several requirements, one of which was "A growth model proposal must... ensure that all students are proficient by 2014." This week's Education Week commentary on growth models spells out some of the repercussions of that fateful requirement. In it, Michael Weiss clarifies the difference between status models, value-added models, and projection models (the latter used by most states participating in the pilot).
I'll pause now for the vocabulary portion of our lesson...
Status model: holds that schools must bring, say, a low-performing 3rd grader up to proficiency by the end of the year for the school to receive credit for her performance, regardless of initial achievement (i.e., the NCLB model).

Projection model: holds that schools receive credit if learning gains are large enough that a student appears to be on track to become proficient by, say, 6th grade, regardless of initial achievement.
Value-added model: measures schools' relative effectiveness by accounting for students' initial achievement levels using multiple years of test score data.
All three of these models are problematic. Folks don't like the status model because it doesn't take initial achievement into consideration. Essentially the same problem exists for projection models, with the added challenge of having to estimate growth. On the latter point, Weiss cites the Florida example. That state assumed a linear trend for student growth (meaning students continue gaining at the same rate), when in fact students' development was curvilinear (meaning students made significantly smaller learning gains as they progressed through the grades, which is not unusual). Consequently, we're told that Florida's projection model identifies many students as on track to become proficient who will actually not make it. Finally, there are problems with value-added models as well, like not adequately addressing missing data, among myriad other problems (see here and here).
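To see why the linear assumption matters, here's a minimal numeric sketch. All numbers (the cut score, the student's score, the gain, the decay rate) are hypothetical and made up for illustration; this is not Florida's actual data or projection formula, just the general shape of the problem Weiss describes.

```python
# Hypothetical illustration: a 3rd grader gains 30 scale-score points
# this year. A linear projection assumes that same gain repeats each
# year; a curvilinear (decelerating) pattern shrinks the gain each year.

PROFICIENT_CUT = 300     # assumed proficiency cut score (hypothetical)
score_grade3 = 220       # assumed current score (hypothetical)
first_gain = 30          # observed gain this year (hypothetical)
years_to_grade6 = 3      # projecting from grade 3 to grade 6

def linear_projection(score, gain, years):
    """Assume the same gain repeats every year (Florida's assumption)."""
    return score + gain * years

def decelerating_projection(score, gain, years, decay=0.5):
    """Assume each year's gain is a fraction of the previous year's."""
    for _ in range(years):
        gain *= decay
        score += gain
    return score

projected = linear_projection(score_grade3, first_gain, years_to_grade6)
actual = decelerating_projection(score_grade3, first_gain, years_to_grade6)

print(projected)  # 310  -> flagged "on track" (>= 300)
print(actual)     # 246.25 -> actually falls well short
```

The same student is counted as on track under the linear model but misses proficiency once gains shrink with each grade, which is exactly the mismatch Weiss attributes to Florida's projections.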
Weiss explains that value-added models "are not allowed under the growth-model pilot program because they don't adhere to the core principle of NCLB--to bring all students up to proficiency." (Clearly this 2014 deadline is problematic for a number of reasons that scads of people have pointed out, so I won't go there.) I was under the impression that this was the flexibility granted to the pilot states, but no, it's flexibility with a big, fat string attached. Apparently I'm not the only one to have made that assumption--we're told most folks equate value-added models with growth in accountability systems. Not the case with NCLB.
The author hypothesizes that the reason we haven't seen big differences between the status models used in most states and the projection models used in the pilot states is that both operate under the same fixed-proficiency-target notion.
To be sure, we've all been bombarded with news about the magic and allure of growth models. Countless conferences have been convened on the topic. Yet, we're still on a steep learning curve when it comes to understanding and using them wisely and appropriately. Weiss succinctly describes the tension among the models this way:
The dilemma over which measure of school performance to use highlights an inherent tension when designing an accountability system for schools, one between the desire to compare their relative effectiveness (value-added models) while simultaneously holding them accountable for bringing all students up to high achievement levels (status or projection models). Some people thought that the pilot program's projection models were a happy middle ground. Unfortunately, projection models don't address the essential tension between status and growth. They are just the same old status-model wine in a new bottle.
I appreciated how Weiss laid out the issue here. The assumptions behind the models and the methodological questions he raises are the right ones for us to be wrestling with. It boils down to the primary purpose of the model and how results will be used. He ends up saying value-added is the way to go, despite its flaws. I can't say at this point whether I agree with him or not. I need to go to a few more conferences on it first...