For its recent report analyzing the readiness of kindergarten students entering traditional district, charter, and magnet schools in seven Ohio urban districts (see here), Policy Matters Ohio deserves credit for selecting an important research question: are charter students fundamentally different from those attending district schools (more privileged, less “at risk,” more motivated, etc.)? The answer to this question has profound implications for the charter school debate, which continues to rage nationally as well as in Ohio.
Unfortunately, the kudos end here.
The report, “Ready to Learn: Ohio Assessment Shows Charters, Magnets Get a Head Start,” examines student scores on the Ohio Kindergarten Readiness Assessment-Literacy (KRA-L) to determine whether a child’s school readiness differs by school type. Because charter students in the study score higher on average on the KRA-L than their district peers, the author concludes that policymakers should rethink their reliance on charters as a solution to urban education problems: “if charters are getting better-prepared students and producing equal or lower achievement, then they should be scaled back, not expanded.”
This conclusion is flawed on several fronts.
First and perhaps most importantly, the data chosen (KRA-L scores) cannot tell us whether “charters are getting better-prepared students.” The Ohio Department of Education states that the KRA-L “is NOT a comprehensive measure of school readiness or of children’s potential for academic success” (caps and bold in the original text; see KRA-L policy paper here). Despite this glaring disclaimer from the ODE that the KRA-L is not intended as a measure of school readiness (the author even quotes it in the report), the assessment is used to support the main finding that charters get a “head start.”
Second, the author’s contention that charters “produce equal or lower achievement” is not based on a comprehensive look at the literature on charter performance. A new study by Caroline Hoxby (see here) has received an enormous amount of attention from charter advocates and opponents alike (see here, here, here, and here). The Hoxby study, which compared New York City students accepted by lottery into charters with district students who applied to the same schools but were not accepted, found that charter students scored six percentage points higher in math and five points higher in English than their peers. Perhaps most significant, Hoxby’s research was based on a randomized experiment, thus meeting the “gold standard” of research design and disproving the argument that charters steal the best students from district schools.
The Hoxby report is cited here not to offer unabashed support for charters, but to show that the literature review conducted for the Policy Matters report was incomplete at best, and intentionally biased at worst. One could just as easily select research such as Hoxby’s to argue that the charter sector in Ohio must be expanded.
There are additional weaknesses to the Policy Matters report. These include:
- The report neglects to mention a critical limit on its sample: not all charter schools in Ohio serve kindergarten students. In fact, of the 186 urban charters in the seven cities sampled, 64 (more than one third) don’t serve kindergartners. For charters that enroll students beginning in later grades, we have no data on the readiness of their entering students, or on whether those students are fundamentally more motivated or “better” than students in traditional district schools. Despite lacking entrance data for more than one third of the charter schools in the cities studied, the author still proclaims that “charter schools are not educating the poorest and most at-risk students.”
- In addition to using KRA-L data inappropriately, the author fails to point out that early childhood testing data in general are notoriously unreliable. The ODE warns educators that an assessment such as the KRA-L, which is delivered in 10 to 15 minutes and consists of 25 questions, should not be used for “high-stakes” decisions. To mitigate reliability problems, assessments sometimes group scores into “bands,” or ranges, that account for measurement error. The KRA-L does just that, categorizing student scores into three bands: Band 1 (scores 0-13) includes children needing “intense instruction”; Band 2 (scores 14-23) includes children needing “targeted instruction”; and Band 3 (scores 24-29) designates children needing “enriched instruction.” The average score for children in district schools is 16.89 and in charters 18.27. Both averages fall solidly into Band 2, meaning that, according to the KRA-L, children in Ohio’s urban district and charter schools alike would need targeted instruction. In fact, in none of the cities do children in urban charters fall into a different band than district students. The data are presented with a cautionary reminder not to use them for “high-stakes” decisions and are laid out in score bands to reflect measurement error, both of which the author ignores.
- The author states that “instruction offered by a school does not affect children’s scores” because the KRA-L is administered at the beginning of the school year. However, the ODE only requires that the test be administered by October 1 of each school year (see here). One could argue that even several weeks of excellent instruction in a charter school classroom could affect students’ scores. Further, charter schools are not bound by district calendars and may start the school year several weeks earlier than their home districts.
Overall, the report’s suggestion that “charter schools are not educating the state’s poorest and most at-risk children,” along with its call to “scale back charters,” is vague and yields no useful policy recommendations. What would “scaling back” charters look like? Caps on the creation of new charters? Reductions in funding for existing ones? Heightened accountability for charter sponsors? And would these measures be applied equally to cities like Dayton, where the charter sector regularly outperforms the local school district?
Ohio’s students, especially the neediest among them, deserve high-quality schooling, regardless of whether it is delivered by a district or a charter public school. Fordham’s analysis of the 2008-09 Ohio local report card data (see here) shows that although proficiency rates for students in urban charter and district schools alike are still inexcusably low, students in Dayton and Cleveland are better served by charters than by district schools. And these urban charter students and their families, contrary to what Policy Matters suggests, are demographically similar to their district peers: they are disproportionately low-income, and they are tired of being trapped in failing district schools.
A biased call from a union-backed organization to scale back charters is unsurprising in Ohio’s hostile climate, and this report adds nothing fruitful to the conversation about how to improve accountability and quality in public education. Further, and surely by design, it threatens to deepen the widespread misunderstanding of charter schools.