In Enrollment and Achievement in Ohio’s Virtual Charter Schools, the Fordham Institute takes a robust, nuanced look at Ohio’s substantial number of e-schools (more commonly known as full-time virtual charter schools). The report paints a troubling picture of these schools’ performance and offers valuable policy recommendations for driving K–12 online learning toward a better future. But readers should note that it also suffers from four significant limitations that are shared by most other studies of virtual charters. These limitations are not the analysts’ fault; they are intrinsic to the data currently available for measuring the outcomes of e-school students.
First, the study does not control for course-taking patterns. It usefully reports that students in Ohio’s full-time virtual schools are “more likely to enroll in basic and remedial math courses” than students in brick-and-mortar schools. It would seem plausible—though the authors don’t mention it—that this might have an adverse impact on pupil achievement as gauged by state tests. Determining why so many more students take lower-level math courses in the virtual environment is an important next step for building on this research.
Second, the state tests given in and before 2013—those whose results were used in the Fordham report—exhibited a serious deficiency: they tested only what students should know in a given grade. Computer-adaptive tests that assess above- and below-grade-level material more accurately depict what a student does know and can do, but they aren’t used enough (nor do state reporting systems often capture the granular detail they can provide). To grasp the difference, imagine a pupil who enters the fifth grade but has mastered only what the state says a second grader should know in math. His school can choose to personalize his curriculum by focusing primarily on third-grade math competencies, but doing so means that he will likely fail to attain proficiency on the end-of-year fifth-grade assessment.
Alternatively, a school could decide to simply aim for the test by focusing primarily on the fifth-grade knowledge and skills that a student will need to pass it. This could rob him of a solid foundation in mathematics and pose problems in the future, even if it boosts his odds of passing the fifth-grade test.
Online schools are ideally equipped to personalize learning according to each student’s distinct needs. If an eleven-year-old needs to spend time mastering second-grade math, she can. This is much more difficult in traditional brick-and-mortar schools with whole-class teaching models. But if Ohio’s virtual schools are indeed personalizing students’ learning, the grade-level tests used to measure their growth won’t capture it accurately. It’s time for a more robust system of assessments that measures individual student growth—whether at, above, or below grade level.
Third, the report does not investigate whether (as seems likely) certain e-schools perform better than others. Ohio Connections Academy, for example, roughly meets or exceeds Ohio’s average statewide performance on 80 percent of the state assessments. (Its big hole is in math, which has caused the school to redo most of its curriculum for that subject.) On these blunt assessments, Ohio Connections Academy outperforms some of the largest e-schools in Ohio (by a significant margin in some cases).
This suggests two recommendations to add to those proffered by Fordham. First, authorizers of e-schools must be steadfast in shutting down the poorly performing ones, just as authorizers of brick-and-mortar charter schools must be vigilant in maintaining their quality. Second, the state department of education should study what practices are enabling providers like Ohio Connections Academy to achieve superior results—and offer its findings as guidance to school operators and authorizers.
Fourth, although controlling for the demographic and prior-achievement variables measured at the state level does give us a window into schools’ average quality, it doesn’t tell us why a student enrolled in a full-time virtual school in the first place. It consequently does not allow researchers, including the authors of the Fordham report, to compare truly like situations and results. For example, a significant percentage of students enroll in e-schools because of a medical issue or bullying, and may thus be using their schools to escape an unworkable situation—with little regard for what that decision means for the future. In such cases, academic considerations are secondary at best. Understanding this dynamic is critical to making valid comparisons and defining student success.
***
Despite these limitations, the Fordham report does several things well. It controls for demographics and prior achievement (including whether a student repeated a grade) more thoroughly than any other study of virtual schools to date. By looking only at students enrolled in e-schools for two consecutive testing periods, it potentially undercuts one popular counterargument from those at the full-time virtual schools: that students enrolled for a short time may not do well, whereas those enrolled for multiple years perform much better. Its policy recommendations are solid and recognize some of the nuances in the full-time virtual school landscape. Instead of calling for a ban on such schooling—which would rob myriad students of education options that have been critical to their lives and their academic success—it presents constructive ideas for improving e-schools, and online learning more generally.
As report author June Ahn says, “Though the age of online learning has dawned…there is much room for improvement as far as online schooling goes.” That would seem to apply not only to the full-time virtual schools themselves, but to policy makers, school authorizers, researchers, and psychometricians as well.
Michael B. Horn is a co-founder of the Clayton Christensen Institute.