October must be the month for manifestos. Earlier this month there was the "how to fix our schools" manifesto garnering pledges from 16 major city superintendents (none of whom are in Ohio, unfortunately), and then the Economic Policy Institute's statement by "prominent education scholars" warning that reliance on test data to evaluate teachers is "misguided" and dangerous. The Ohio Education Association posted the EPI statement on its website, encouraging members to sign a statement "opposing this approach." (I find it curious that using student growth in teacher evaluations is lumped together as a single approach. There are countless ways a state, district, or school could use growth data to judge teacher effectiveness, say, by combining various measures, using rolling averages or not, and so on, but EPI does us the favor of not having to consider those options and simply labels them all bad.)
Evaluating teachers against student growth data is "bad" because:
Adopting an invalid teacher evaluation system and tying it to rewards and sanctions is likely to lead to inaccurate personnel decisions, while also demoralizing teachers. Such a flawed system could lead talented teachers to avoid high-needs students and schools, or to leave the profession entirely, and discourage potentially effective teachers from pursuing careers in education. Moreover, heavy reliance on basic math and reading scores to evaluate teachers will further narrow and over-simplify the curriculum to focus only on the subjects, topics, and formats that are tested. We believe that the evidence shows that educational outcomes will suffer if policymakers establish systems of teacher evaluation, tenure and pay which rely heavily on student test scores.
A reasonable response to this might come from Eric Hanushek's recent piece in the New York Daily News, in which he describes his support for New York City's release of value-added data to the public despite its flaws. (Check it out; it's much easier to quote experts like Hanushek, who sums it up brilliantly, than to try to articulate it on your own.)
A second response that comes to mind is from Elizabeth Shaw, executive director of the Louisiana Department of Education's human capital office, who spoke at last week's PIE Network conference. I'll try to paraphrase her here as best I can, because what she said in response to a question about whether it's appropriate to use value-added to evaluate teachers was very smart.
Shaw's response spun this issue in a new light for me:
Why are we complaining about using value-added to measure and evaluate (and possibly pay or dismiss) teachers as part of the conversation about overhauling teacher evaluations? It's fundamentally a question of your district's or state's accountability system, and that's a separate matter altogether.
In other words, if you don't like the accountability system and the data it relies on, then figure out why and address that, but don't use that as an excuse to prevent improvements to teacher evaluations. "We've already made serious decisions based on this 'flawed data.' We've closed schools based on this data. We've fired principals and held superintendents accountable based on this data. Children are held accountable according to this data. If the data is good enough to hold all those people and buildings accountable, then why can't it hold teachers accountable? If you want to rethink the accountability system and think the data isn't robustly capturing student performance, then fine. But if the data's good enough to hold everyone else in the system accountable, why not teachers?"
(That's a paraphrase.) But isn't this a really smart way to frame it?
- Jamie Davies O'Leary