In March, Ohio’s Educator Standards Board (ESB) released six recommendations for revising the Ohio Teacher Evaluation System. In a previous piece, I explained why its two most significant recommendations are a solid solution to a myriad of problems within the system. These suggestions were 1) to update the observational rubric in collaboration with a national expert and 2) to embed student growth measures into the revised rubric. In this piece, I’ll investigate the remaining proposals.
Of the four remaining recommendations, two are intertwined with the ESB’s call to embed student growth measures into a revised rubric. The first is to eliminate shared attribution, the flawed practice of evaluating non-core teachers based on test scores from subjects they don’t actually teach, such as reading and math. Policymakers should heed this recommendation and ditch shared attribution as soon as possible.
The other recommendation seeks to incorporate aspects of Ohio’s current alternative framework into the newly revised observational rubric. This includes student portfolios, student surveys, peer review, self-evaluation, and other district-determined measures. Several of these methods—like student surveys and peer observations—have research to support their use. The revised evaluation rubric should definitely include these options.
The final two ESB recommendations require more in-depth discussion. Let’s take a look.
Streamlining the observation process
Under the current system, the timeline for the classroom observation cycle doesn’t explicitly differentiate between first and second semester requirements. And although the current system does refer to different types of observations—formal observations and informal classroom walkthroughs—there isn’t a clear explanation of the differences between the two or the purposes for each. In addition, current law does not require pre- or post-conferences between the evaluator and the teacher. This is unfortunate because conferences—particularly those that take place after an observation—are the ideal opportunity for teachers to determine how to move forward with their development.
The ESB fixes these issues by offering a clear, chronological description of required observations and conferences and explaining the purpose of each. It calls for two formal classroom observations; periodic, informal “walkthrough” observations;[1] and a final conference to discuss teachers’ performance against their goals. Here’s a summary:
- Teachers develop their growth or improvement plans at the start of the year. Once plans are complete, an evaluator conducts an announced, formal, and comprehensive observation. This observation lasts for a minimum of thirty minutes and evaluates the teacher on all components of the teacher performance rubric. In the required post-observation conference, the teacher and evaluator decide on specific areas for growth to focus on for the rest of the year.
- After the first formal observation, the evaluator completes periodic walkthroughs that last thirty minutes or less. Unlike the vaguely defined walkthroughs within the current system, walkthroughs in the newly proposed system focus on assembling data related to the areas for growth identified during the first post-observation conference. Walkthroughs continue into the second semester.
- The second formal observation occurs during the second semester and, unlike the first formal observation, can be announced or unannounced depending on a district’s preference. Although it lasts for a minimum of thirty minutes and is the final formal observation of the year, it only focuses on the specific areas for growth that were identified at the start of the year.
- At the end of the year, evaluators and teachers meet for a summative conference in which they discuss all of the data and observations gathered during the cycle. During this conference, they identify a new area of focus for the following year.
This setup is a significant improvement on the current system. It creates a more holistic feedback system that connects each observation to the next and to an overarching improvement goal. In the current system, observations are isolated—they’re similar to the one-off professional development sessions that many teachers find unhelpful. By allowing teachers and evaluators to jointly identify areas for growth at the start of the year, and then focusing on those specific areas throughout the year during a variety of observations, the system becomes far more coherent and meaningful. The ESB proposal also requires post-observation conferences. This is another move in the right direction, since, as the Ohio Department of Education has said, “growth comes from the conversations about practice between observer and teacher.”
Exempting highly rated teachers
During the summer of 2014, a change in law allowed less frequent full evaluations for teachers who received the two highest ratings. This change only applied to teachers whose student academic growth measure, which is determined by student test scores, was average or higher during the most recent school year. Teachers with an “accomplished” rating (the highest rating) could be evaluated once every three years, and teachers with a “skilled” rating could be evaluated once every two years. In both cases, teachers are still required to receive at least one observation and conference each year.
In its recommendations, the ESB suggests that this practice be maintained with a few key exceptions. They are:
- Teachers rated “accomplished” or “skilled” would only be required to have a conference to “discuss goal progress” each year—there is no mention of an accompanying observation. The proposal does say that evaluators will “review evidence of the teacher’s professional growth,” but without a clear description of what that review entails—and whether it must include watching the teacher actually teach—it’s possible that some teachers could go years without a single observation of their practice.
- During the years between full evaluations, teachers rated “accomplished” are permitted to submit a “teacher-directed professional growth plan to an evaluator chosen by the accomplished teacher.” Translation: These teachers get to design their own growth plans and pick the evaluator who will hold them accountable for following said plan. That hardly seems like the most objective way to evaluate progress.
The rationale behind these recommendations—and behind the original law change—is understandable. Highly effective teachers should be trusted with more freedom and responsibility. And shrinking the number of full-cycle evaluations that principals must conduct each year does lessen administrative burden.
But the ESB recommendations are about transitioning from an ineffective evaluation system to an effective teacher feedback system. Revising the observation rubric, embedding student growth measures that are more closely aligned with classroom practice, creating a more holistic observation cycle—these are changes aimed at giving teachers more and better feedback. Exempting highly rated teachers from a system designed to give them the frequent and meaningful feedback they deserve doesn’t just contradict these other recommendations; it’s also unfair to the teachers who want every opportunity to get better. Sure, principals and other evaluators can still stop in and give feedback. But will they?
There’s also the not-so-small matter of how many teachers would be exempt. Back in 2013-14—the only year that Ohio’s evaluation system was actually used as intended—approximately 90 percent of Ohio teachers were rated either “accomplished” or “skilled.” If the ESB recommendations become law, and this percentage holds, that means that the overwhelming majority of Ohio’s teachers will have very limited interactions with the new system. And if that’s the case, what’s the point of having one at all?
***
Overall, the ESB’s recommendations are solid suggestions. Eliminating shared attribution and streamlining the observation cycle into a more holistic and coherent system are great ideas that could lead to a lot of professional growth for teachers. But there are devils in the details. Exempting highly rated teachers from the system jeopardizes their professional growth and robs them of what could be their only opportunity to receive meaningful, observation-based feedback each year. As the General Assembly deliberates how to revise the teacher evaluation system moving forward, legislators would be wise to follow all of the ESB’s recommendations except this one.
[1] Under the current system, walkthroughs may be unannounced and completed frequently. The ESB makes no mention of changing this policy.