The Senate and House finally reached a compromise over changes to Ohio’s teacher-evaluation system (OTES), which, in its first year of statewide implementation, has drawn criticism from school leaders over what they say is its administrative burden. Some have argued that, because of its classroom-observation mandates, principals may not have time to properly support any teacher, let alone those who struggle.
This journey began with a Senate bill passed back in December (Senate Bill 229) and continued with the House Education Committee proposing major changes, followed by weeks of debate on the competing versions. (A comparison of the two bills can be found here, and our analysis of the House bill is here.)
The compromise ended up in House Bill 362, which originally dealt with STEM-school matters. It now awaits Governor Kasich’s signature. Major changes include giving districts the option of reducing the weight given to teacher performance and to student growth from 50 percent each to 42.5 percent each; providing districts with several different ways to make up the remaining 15 percent, including (but not limited to) student surveys; and allowing districts flexibility in how frequently their top-rated teachers are observed.
Everyone loves a happy ending. But as a former teacher, I’m left with several lingering questions about this bill, and about OTES itself.
First, this has been the first year of OTES implementation for most Ohio districts, and end-of-year test results won’t even be published until later this summer. So why were legislators in such a rush to change a policy that hasn’t yet run its course? Until results are released from all districts, not just preliminary findings from pilot districts, no one knows what the distribution of teacher ratings looks like under the original scheme. If the vast majority of teachers receive the highest or lowest possible rating, lawmakers will need to alter the formula again. Unfortunately, the General Assembly didn’t wait to find out. Patience might have led to more suitable and durable changes.
Second, I have a hard time understanding why highly rated teachers should be evaluated less frequently than less-effective teachers. I had the opportunity to work alongside some phenomenal teachers in Memphis, Tennessee, and every single one of them would tell you that one of the most frustrating aspects of their job is a lack of consistent feedback on their performance. Teachers, all teachers, want to get better, and feedback from school leadership is essential. Teachers need content experts to help them think through curriculum planning and delivery, and they need colleagues who can spot questionable classroom practices that the instructor herself might not be aware of. So much emphasis is placed on the punitive aspect of teacher evaluations that it’s easy to ignore their potential to nudge teachers toward higher levels of performance, even the really good ones. While reduced observation seems to take the pressure off teachers (and principals), it actually limits teachers’ opportunities for professional growth.
Third, because OTES requires just two formal classroom observations annually, both announced beforehand, teacher evaluations are less reliable (and helpful) than they could be. How about those who aren’t at the top of their game? What about newbies? Tennessee has an observation component that treats experienced and inexperienced teachers differently. As a beginner, I was observed six times by multiple evaluators (the principal, assistant principal, content specialist, and instructional facilitator), who were all trained and certified by the Teacher Advancement Program. The variety of evaluators lowered the chances of bias and made it easier for my principal to juggle a large staff. My observations varied in length (anywhere from fifteen minutes to an entire lesson), were both announced and unannounced, and required post-conferences, during which I was able to sit down with my evaluator and discuss my ratings and my areas for growth.
If we really want evaluations to help teachers (especially new ones) grow, two observations will simply not do. Furthermore, OTES requires those observations to last only thirty minutes, which doesn’t always cover an entire lesson. How can evaluators rate the delivery of a lesson if they don’t even watch the whole thing? Finally, requiring only preannounced observations instead of a mix of announced and unannounced means that teachers can carefully plan and practice their lessons so they are near perfection when the evaluator walks in. But what about the rest of the curriculum and the school year? A solid observation looks at a teacher on an “average” day, not just when she prepares a “show.” Believe me, I put on plenty of shows when I was teaching. But the observations that I learned the most from were those I hadn’t known would occur.
If the Ohio General Assembly wanted OTES truly to help teachers improve, lawmakers should have waited until the final results came out, asked teachers and administrators from all districts what worked and what didn’t, and then made the proper changes.