After months of debate, state lawmakers continue to mull significant changes to Ohio’s school report card system. Two vastly different proposals to overhaul the report card framework have emerged: House Bill 200 and Senate Bill 145. We at Fordham, along with several other education groups, have thrown our support behind SB 145. That proposal makes responsible course corrections while maintaining a strong focus on all students’ academic outcomes and adopting an intuitive five-star rating system. At the same time, we have voiced serious concerns about the House legislation, which would bury critical achievement and college-and-career-readiness data and hide the ball from parents and the public through technocratic jargon.
In testimony before the Senate Education Committee last week, the education establishment tried to defend its support of House Bill 200 and to address some of the criticisms of that legislation. Speaking on behalf of various public school groups, Kevin Miller of the Buckeye Association of School Administrators took issue with Fordham’s characterization of HB 200’s rating system as “actively misleading.” He also sought to explain the exclusion of an overall, or “summative,” school rating in the House plan.
With the summer legislative recess approaching, there may not be another committee hearing on school report cards this spring in which to address these claims. As such, we offer this response to Miller’s arguments.
Why the HB 200 descriptive ratings are misleading
The House plan moves Ohio away from the widely understood but much-maligned (at least by education groups) A–F rating system to a descriptive labeling system. The six proposed ratings are as follows: significantly exceeds expectations, exceeds expectations, meets expectations, making substantial progress toward expectations, making moderate progress toward expectations, and in need of support.
In earlier Senate testimony, my colleague Chad Aldis noted that the “making moderate progress toward expectations” descriptor—equivalent to a D rating—would be “actively misleading.” He is absolutely right. To illustrate the problem more concretely, consider some hypothetical data for the performance index (PI)—a measure of pupil achievement—as well as the graduation rate. Under HB 200, schools could register worse results than the year prior yet receive a rating indicating that they are making “substantial” or “moderate” progress on that measure. Take the case of the (fictional) Washington Elementary, a school that receives a PI rating of “exceeds expectations” in 2021. Even if its PI score drops by ten points between 2021 and 2022, it could still receive a “making moderate progress” rating under HB 200. The same phenomenon would occur in the graduation-rate component, where a school could see its graduation rate decline yet receive a rating indicating progress over time.
In sum, a school could be regressing—with fewer students meeting state expectations—but the rating labels would tell the public that it’s improving. How is that not actively and knowingly misleading to Ohioans?
Table 1: Illustration of how poorly crafted labels can mislead the public
The administrator associations didn’t actually respond to the concerns about misleading labels or offer other ideas that would better communicate performance. Rather, they pointed to Massachusetts’s report card, which uses this terminology. Indeed, the Bay State does use language similar to the House plan, including “moderate progress toward targets.” But there’s an important difference. As shown below, Massachusetts’s report card ranks schools from 1 to 99 using an “accountability percentile” that shows where a school stands in statewide performance. Moreover, when a school is low performing, there is a clear notice to the public that the school is near the bottom statewide. Neither of these features appears in the House plan.
Source: Massachusetts Department of Elementary and Secondary Education.
It’s always possible that the administrator associations missed this part of the Massachusetts report card. But one suspects that they cherry-picked the elements of the report card they liked and omitted those they didn’t (ranking schools has been routinely decried by the establishment). Whatever the case may be, the bottom line is that it’s dishonest to suggest that the House plan simply mirrors the Massachusetts report card.
Why overall ratings matter
Another source of disagreement between the House and Senate bills is the overall rating, which summarizes performance across the various dimensions of the report card. The House scraps this rating, while the Senate continues Ohio’s longstanding policy—and Massachusetts’s too!—by maintaining an overall mark. In his testimony for the administrator groups, Miller argued that the absence of an overall rating would “dilute the significance of the component ratings,” which he deemed “more important” than the overall rating. He is of course right that the component ratings are critical pieces of the report card. They help users—particularly school officials, who are apt to understand each measure—take stock of the strengths and weaknesses of individual schools.
That said, we must recognize that most Ohio parents and citizens don’t work in education. This audience—arguably the report card’s primary audience—deserves a user-friendly summary that doesn’t require mastering the intricacies of each component. In surveys commissioned by Ohio Excels, about two in three Ohio parents favor a summary rating. That’s no surprise considering the widespread use of overall marks in other contexts to give users a straightforward, bottom-line summary. In schools, students receive cumulative grade point averages that capture their performance across various subjects. In business, companies’ creditworthiness is summarized through bond ratings. Hospitals and preschools receive star ratings, much like those proposed in SB 145 for Ohio schools and used in other states’ report cards. Again, these star-rating systems combine various indicators so that the public gets a general sense of the quality of services. While the component ratings are important, the general public likely sees far more value in an overall mark.
* * *
The House and Senate report card bills would put Ohio on very different paths when it comes to school accountability. The House version of report card “reform” is far more accommodating to the public school system, as it softens measures and uses euphemisms that paper over low performance—and even actively mislead the public. This type of system isn’t fair to property taxpayers, who deserve an honest evaluation of their local schools when they vote on school levies. Worse, it’s unfair to Ohio parents who are simply searching for a great school that works for their kids.
The Senate legislation is truer to the core principles of school accountability. Its approach offers an evenhanded assessment of school performance that is communicated to the public through a transparent rating system. With any luck, the General Assembly will discount the faltering arguments of the school establishment and instead look toward a solution that puts students, parents, and citizens first.