Author’s correction and update: The original version of this post incorrectly stated that Columbus had increased the minimum test score needed to be classified as “on track” in third-grade reading and to not be placed on a reading improvement and monitoring plan (RIMP). In fact, according to information presented to the school board on Oct. 15, the district has increased only the “cut score” for RIMP eligibility but not the threshold for being identified as “on track.” This means the district is not engaging in the most egregious form of state report card manipulation, as the piece originally suggested, but several other Ohio districts do appear to have done so.
EDITOR’S NOTE: The Thomas B. Fordham Institute occasionally publishes guest commentaries on its blogs. The views expressed by guest authors do not necessarily reflect those of Fordham.
As I often like to remind my Ohio State University students, the road to hell is paved with good intentions. One illustration of this aphorism is how well-meaning efforts to increase accountability pressures on public schools in hopes of improving learning can backfire by perversely incentivizing school administrators to take actions that only end up hurting students. Although we’d hope that school leaders would respond to greater accountability by taking steps to improve educational quality—as they should be doing all along—it seems that local officials often react by finding ways to game the system instead.
Ohio’s current school report cards—which assign letter grades to each of the state’s public and charter schools and districts—are vulnerable to precisely this kind of gaming, as evidenced by recent efforts by Columbus leaders to raise the district’s report card grade without actually improving student performance.
At issue is a section of the report card titled “Improving At-Risk K-3 Readers,” which grew out of a 2012 law requiring all Ohio third graders to demonstrate proficiency in reading before advancing to fourth grade. The intentions behind this law were sincere: there is clear evidence that mastering reading by the end of third grade is a strong predictor of future academic success. This report card section is designed to hold schools and districts accountable for ensuring that all students stay on track to become proficient readers by the time they take their third-grade state exams.
Specifically, districts are required to administer reading assessments starting in kindergarten to identify students who are not on track to reach proficiency and to put in place reading improvement and monitoring plans (RIMPs) designed to get these students back on track. The “Improving At-Risk K-3 Readers” component of the state report card essentially awards points for each initially at-risk student whom schools move back on course to achieve proficiency.
Since state testing does not begin until third grade, school districts have considerable discretion in how they implement these requirements. Fortunately, there are some guardrails built in by the state. For example, school districts may use only the diagnostic assessments that have been approved by the Ohio Department of Education, and for each available assessment, the state also sets minimum scores that districts must follow in flagging “off-track” students. In addition, districts and schools are penalized for each student who fails to achieve proficiency on the third-grade reading test but who was not previously placed on a RIMP.
While these safeguards prevent districts from setting the bar too low (which would help them avoid losing points for at-risk students who remain off track in earlier grades) and incentivize identifying students likely to struggle on the third-grade reading assessment, Columbus has figured out that it can boost its score by setting the bar far too high. Under a directive issued by the district’s new superintendent, third graders will be placed on RIMPs even if they are reading at a sixth-grade level at the beginning of the year! Indeed, under the district’s new minimum reading “cut score,” third graders scoring below the 93rd percentile nationally would be placed on such improvement and monitoring plans.
How does setting the bar so ridiculously high help inflate the district’s grade? Putting nearly every student on a RIMP at the beginning of the year eliminates the penalty for students who fail to attain proficiency on the state exam but were never placed on a reading plan, a deduction that cost the district more than 400 points in last year’s report card calculation.
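To see how the arithmetic works, here is a stylized sketch in Python. The point values and student counts are hypothetical, and the state’s actual formula has additional components; the sketch only illustrates how placing every student on a plan zeroes out the deduction described above.

```python
# Stylized model of the "Improving At-Risk K-3 Readers" component.
# All point values and counts below are hypothetical.

def component_score(moved_back_on_track, failed_without_rimp,
                    credit_per_student=1, penalty_per_student=1):
    """Credit for each at-risk student moved back on track, minus a
    deduction for each student who fails the third-grade test without
    having been placed on a RIMP."""
    return (credit_per_student * moved_back_on_track
            - penalty_per_student * failed_without_rimp)

# Status quo: some students who fail the state exam were never placed on a
# RIMP, so the district takes a deduction for each of them.
status_quo = component_score(moved_back_on_track=1200, failed_without_rimp=400)

# Blanket RIMPs: every third grader is on a plan, so no one can fail
# "without a RIMP" and the penalty term disappears.
blanket_rimps = component_score(moved_back_on_track=1200, failed_without_rimp=0)

print(status_quo, blanket_rimps)  # 800 1200: same reading outcomes, higher score
```

In this toy model, reading outcomes are identical in both scenarios; only the paperwork changes, and the score goes up.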
When confronted by teachers angry that they must now write RIMPs for many more students, district officials denied trying to game the system and claimed they were merely “raising the bar” and trying to help all students improve their reading, even high-performing students. However, the district’s other actions (or lack thereof) suggest that this explanation should be viewed with skepticism. The superintendent increased only the district’s third-grade reading requirements, leaving the minimum diagnostic “cut scores” that identify students in need of reading intervention woefully low for kindergarten, first, and second grade. At the most recent school board meeting, one board member asked (at 3:24:00 of the video) what to tell community members who accuse the district of making the change solely to improve the district’s report card. “A concise answer for me,” responded the district’s chief accountability officer, “honestly, would be, ‘Let’s give the students the interventions they need to succeed and achieve.’”
Unfortunately, what looks like an effort to “juke the stats” is likely to come at the expense of student learning. Third-grade teachers estimate that writing each student’s RIMP takes between thirty and forty-five minutes, not counting the time needed to administer the ongoing assessments also required for each student placed on a reading plan. Requiring teachers to complete this paperwork for their highest-achieving students, who are already reading well above grade level, will divert time and resources that could otherwise be spent helping the district’s lowest-performing third graders, who are most at risk of being retained.
The policy change has also created mass confusion among both teachers and parents, many of whom assume that every student placed on a RIMP has also been identified as “off track.” (Both the coverage in the Columbus Dispatch and the original version of this blog post repeated this mistake, likely further exacerbating the confusion.)
The attention the district’s decision attracted has also revealed an important weakness in the calculation used to assign grades under this section of the report card. Had the district also raised the cut score for identifying “off track” third graders to the same high level (as early reports indicated), it would have earned even more points on the state report card. Labeling more high-achieving, already-proficient third graders as “off track” at the beginning of the year would have let the district earn credit for moving the same students back “on track” when they passed the state’s reading exam. The result would have been a better report card grade without any actual improvement in student reading.
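A second sketch, again with hypothetical counts, shows why raising the off-track cut score as well would have been even more lucrative under the scoring logic above: every already-proficient student labeled off track in the fall becomes a student “moved back on track” when the spring test results arrive.

```python
# Extending the stylized model with hypothetical counts: if the "off track"
# cut score is also raised, already-proficient students are flagged in the
# fall and then counted as "moved back on track" when they pass in the spring.

def on_track_credit(off_track_students_who_pass, credit_per_student=1):
    # Credit accrues for each student labeled off track who later tests proficient.
    return credit_per_student * off_track_students_who_pass

# Honest cut score: only genuinely struggling readers are flagged.
honest = on_track_credit(off_track_students_who_pass=1200)

# Inflated cut score: 2,500 students who were proficient all along are
# flagged anyway, then "recovered" when they pass the state test.
inflated = on_track_credit(off_track_students_who_pass=1200 + 2500)

print(honest, inflated)  # 1200 3700: higher score, no change in actual reading
```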
Columbus did not take the additional step of increasing its third-grade on-track cut score, but it appears that a handful of other districts may have. An examination of last year’s state data shows that at least three other districts saw a big jump in the share of students identified as off track between second and third grade. At the same time, these districts also brought larger fractions of third graders back on track than they had for the same cohort in second grade. This pattern—more kids classified as behind in the fall, but also more brought back on track during the year—strongly suggests that these districts changed their cut scores between second and third grade and set a higher bar in third grade than was necessary to pass the state test later in the year.
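Though the underlying district-level figures are not reproduced here, the screen described above amounts to a simple check, sketched below with hypothetical numbers, illustrative field names, and an arbitrary 15-percentage-point threshold.

```python
# Rough version of the screen described above, applied to one hypothetical
# district record; the field names and the threshold are illustrative.

def looks_like_cut_score_jump(district, jump_threshold=0.15):
    """Flag a district where the share of the same cohort identified as off
    track jumps sharply from second to third grade while the share brought
    back on track also rises."""
    off_track_jump = district["off_track_share_gr3"] - district["off_track_share_gr2"]
    recovery_rise = district["back_on_track_share_gr3"] - district["back_on_track_share_gr2"]
    return off_track_jump > jump_threshold and recovery_rise > 0

example = {
    "off_track_share_gr2": 0.30, "off_track_share_gr3": 0.55,
    "back_on_track_share_gr2": 0.40, "back_on_track_share_gr3": 0.60,
}
print(looks_like_cut_score_jump(example))  # True: consistent with a raised third-grade bar
```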
In an ideal world, the state accountability system would not incentivize such numbers games. And all school districts would have leaders who spend their time and ingenuity dreaming up ways to actually improve student learning, instead of coming up with new strategies for manipulating the district’s report card grades.
Vladimir Kogan is an associate professor at the Ohio State University Department of Political Science and (by courtesy) the John Glenn College of Public Affairs. The opinions and recommendations presented in this editorial are those of the author and do not necessarily represent policy positions or views of the John Glenn College of Public Affairs, the Department of Political Science, or the Ohio State University.