Youngstown City Schools CEO Krish Mohip recently announced significant changes to how his district will evaluate its teachers.
Under Mohip’s new system, 50 percent of a teacher’s evaluation score will be based on classroom observations of instruction and 50 percent on student growth. So far, so good; on its face, this is identical to the state’s original evaluation framework. The difference between the two is how student growth is measured. Under the state’s system, value-added scores and vendor assessments are supplemented, when the subject and grade level taught require it, by locally determined measures. Mohip, on the other hand, plans to evaluate individual teachers on the entire district’s progress using only one student growth measure: shared attribution.
For those who are unfamiliar, shared attribution is the practice of attributing value-added scores, which are largely determined by state ELA and math tests in grades 4–8, to every teacher in a school or district, regardless of the subject or grade level a teacher teaches. For instance, since a sixth-grade social studies teacher has no state test that produces value-added results, her student growth score would be based on how her students performed on their reading and math tests. Under Mohip’s plan, each teacher would be held accountable for the entire district’s progress, meaning that the aforementioned sixth-grade social studies teacher wouldn’t just be evaluated on her own students’ ELA and math scores but on the value-added scores of every student in the district.
This is nuts.
In theory, I suppose, shared attribution could inspire a collective, “we’re all in this together” culture. Unfortunately, that’s unlikely to happen. Despite what the word “shared” implies, shared attribution doesn’t actually ensure that teachers share accountability; it just means that core teachers with value-added data are responsible for the evaluation scores of non-core teachers (like those in gym, art, and music) in addition to their own. Statewide, the share of teachers who can actually be evaluated on value-added measures is small: only 20 percent of Ohio teachers can be measured using state assessments in whole or in part.[1] If that percentage holds in Youngstown, the vast majority of its teachers will have no significant influence over half of their final evaluation rating.
Evaluating anyone, including teachers, on results they can’t directly influence is unfair. So why force it onto an entire district? Mohip asserts that shared attribution will help “motivate teachers to help students in areas in which they need the most support—even if that’s not their subject.” But the logic behind this statement is faulty at best and ignorant at worst. For starters, it elevates reading and math above all other subjects. No one would argue that learning to read and do math isn’t a vital part of a student’s education, but so is the content students learn in science, social studies, and other classes. Recent reports and studies from other states suggest that adopting a high-quality, content-rich curriculum may be the best step toward improving reading and math achievement, and arguably a far better one than expecting non-core teachers to teach content they aren’t experts in. Mohip would be better off focusing on what teachers are teaching and how they are teaching it rather than trying to force art and gym teachers to teach reading strategies.
But a narrowed curriculum isn’t the only negative consequence of this new evaluation system. Historically, teacher evaluations have aimed to accomplish two goals: holding teachers accountable for their performance and helping them improve through feedback. Mohip mentioned both when unveiling his new system, noting that he wants to provide teachers with more “feedback that’s going to be able to help [them] become a better teacher” and that the system is about “holding everyone accountable.”
But the new system accomplishes neither. District-wide shared attribution risks inflating or deflating teachers’ overall evaluation scores depending on whether the district as a whole does well on value-added measures. And half of every teacher’s score will be exactly the same, which could mask meaningful differences between teachers.
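A quick bit of hypothetical arithmetic shows why (the numbers here are purely illustrative, not Youngstown’s). Suppose Teacher A earns an observation score of 90 out of 100 and Teacher B earns a 60, while the district-wide value-added score assigned to everyone is 70. Under the 50/50 formula, Teacher A’s final rating is (0.5 × 90) + (0.5 × 70) = 80 and Teacher B’s is (0.5 × 60) + (0.5 × 70) = 65. A thirty-point gap in observed instructional quality shrinks to fifteen, and if the district’s value-added score swings up or down, both teachers’ ratings swing with it regardless of what either did in her own classroom.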
As for professional development: the feedback teachers receive will come from classroom observations, not from student growth measures. Unfortunately, other than a vague reference to creating the “professional development path every teacher needs,” there’s no clear indication of how the district plans to improve on the feedback teachers previously received under the state evaluation system. That’s particularly problematic considering that the state’s rubric and template for professional growth plans are woefully insufficient. The Educator Standards Board recently made some solid recommendations for improving the rubric, but those recommendations are virtually useless to Mohip and his staff, since they call for incorporating content-specific student growth measures into the rubric and getting rid of shared attribution altogether, the very measure on which his system depends.
Most folks would agree that Youngstown’s long history of poor academic performance is more than enough reason for Mohip to pursue some drastic policy changes. As CEO, he is certainly entitled by law to do so. But great power brings great responsibility, and Mohip’s chief responsibility is to improve the education provided to the thousands of students in his district. Plenty of research shows that quality teaching is critical to students’ achievement and later labor market outcomes, so Mohip is right to focus on teacher accountability and improvement. But he’s wrong to assume that shared attribution is the best way to accomplish these goals. Not only will it fail to differentiate teachers fairly and effectively, it will also narrow the district’s focus to reading and math at the expense of other subjects, a move that Youngstown’s students just can’t afford.
[1] The 20 percent comprises teachers whose scores are based entirely on value-added measures (6 percent) and teachers whose scores are based partially on value-added measures (14 percent).