There has been much debate in education policy circles of late about whether it’s appropriate for states and charter authorizers to base school-accountability actions solely upon the performance ratings derived from states’ ESSA accountability frameworks.
We—and our organization—strongly favor the framework-based approach. States should take care, however, to ensure that their frameworks accurately measure the performance of all types of schools. Research recently conducted at Pearson indicates that accountability framework gauges can be inaccurate when applied to schools with high levels of student mobility.
How come? It’s due to the well-documented “school-switching” phenomenon. When students move to new schools, they often experience a one- to three-year dip in academic achievement simply because of the school change.
That’s why studies of school choice programs tend to show short-term performance declines, then positive effects starting around the third year. It takes a while to overcome the negative academic impact that is often caused by switching to a different school. Data from the Connections Academy virtual schools supported by Pearson confirm this effect: Student performance on state assessments improves with each successive year after the student enrolls.
It’s not hard to see why the school-switching effect distorts accountability framework measures for schools with high pupil mobility. If a large proportion of a school’s students are new, and can therefore be expected to show little or even negative academic growth in a given year simply because they’re in their first year at that school, the school’s proficiency rates will be distorted, and its measures of academic growth will be skewed even more.
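To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The growth values are invented purely for illustration; the 55 percent first-year share mirrors the Connections Academy figure cited in the next paragraph:

```python
# Hypothetical numbers only: how a large share of first-year students can
# pull down a school-wide growth average even when returning students thrive.

returning_share, returning_growth = 0.45, 0.50     # growth in test-score SDs
first_year_share, first_year_growth = 0.55, -0.10  # the "switching" dip

school_average = (returning_share * returning_growth
                  + first_year_share * first_year_growth)

print(f"Returning students' growth: {returning_growth:+.2f}")
print(f"School-wide average growth: {school_average:+.2f}")
# Returning students look strong (+0.50), but the blended school-wide
# figure (+0.17) looks weak. The composition changed, not the school.
```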
That’s almost certainly part of the reason that virtual schools receive low ratings on many state frameworks. In any given year in the Connections Academy network of virtual schools, for example, 55 percent of students are in their first year of enrollment. State gauges show mobility at traditional schools to be less than half this level.
The most oft-cited research on virtual school performance is the 2015 CREDO study, which compared the academic growth of students in virtual schools to that of their “matched twins” in traditional brick-and-mortar settings. If the Connections Academy data are representative, more than half the virtual school students in the CREDO sample were likely first-year pupils. It is hardly surprising that CREDO’s results showed far lower growth for the virtual students compared to their “matched twins.”
CREDO acknowledged that its study did not include mobility as a matching criterion because the relevant data were not available. (Their matched twins did have similar levels of prior mobility, which is not the same thing.)
Why do virtual schools have such high student mobility? As the Pearson research cited above shows, such schools are often used by parents and students to address specific short-term problems, such as medical challenges and bullying, that can only be met with a home-based, flexible learning environment. Add to this the fact that, as long as the virtual sector is growing, it will continue to have a higher percentage of new students.
Since pretty much all of the states’ new ESSA accountability frameworks include measures of academic growth, every school with high student mobility is likely to receive an artificially low rating due to the school-switching effect. This will not be limited to virtual schools. Large urban districts often have high pupil mobility, too, so the school-switching effect will depress their ESSA framework ratings as well.
As part of Pearson’s long-term commitment to understanding the efficacy of its products and services, we analyzed the performance of Connections Academy schools compared to traditional schools, and we used mobility as one of the matching criteria. We found that, once mobility is factored in appropriately, Connections Academy students perform the same as brick-and-mortar students.
The research is available on Pearson’s Efficacy website. Yes, our organization has an obvious interest in virtual schools, but this study’s conclusions were peer-reviewed by SRI International, and the validity of the data was verified by PricewaterhouseCoopers. We are happy to provide the technical notes to anyone interested.
The implication is not that framework-based accountability is invalid or that framework ratings should be dismissed. Rather, it is that accountability systems must take student mobility into account if they are to measure school performance and effectiveness accurately. There are a variety of ways to do this.
For example, frameworks should report proficiency and growth for all students, but also break those metrics out separately for students in their second year and for students in their third year and beyond. This would help isolate the school’s performance from the effects of mobility, which it cannot control.
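As a rough illustration, here is a minimal Python sketch of that disaggregation. The record layout and values are hypothetical; real state data systems would look different:

```python
# Disaggregate proficiency and growth by years enrolled.
# Student records here are invented for illustration.
from statistics import mean

students = [
    # (years_enrolled, proficient, growth)
    (1, False, -0.10), (1, True, 0.05), (2, True, 0.40),
    (2, False, 0.30), (3, True, 0.55), (4, True, 0.60),
]

def bucket(years):
    if years == 1:
        return "first year"
    return "second year" if years == 2 else "third year and beyond"

groups = {}
for years, proficient, growth in students:
    groups.setdefault(bucket(years), []).append((proficient, growth))

for label, rows in groups.items():
    prof_rate = mean(1 if p else 0 for p, _ in rows)
    avg_growth = mean(g for _, g in rows)
    print(f"{label}: proficiency {prof_rate:.0%}, growth {avg_growth:+.2f}")
```

Reporting the buckets side by side lets a reviewer see at a glance whether weak school-wide numbers are driven by first-year students.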
Frameworks should also look at high school graduation rates differently. At the very least, they should adopt the Fordham proposal of allocating students to school cohorts based on the percentage of their high school years spent in a given school.
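A minimal sketch of how that allocation might be computed follows. The enrollment records, the four-year window, and the exact weighting are illustrative assumptions, not the Fordham proposal’s precise mechanics:

```python
# Fractional cohort allocation: each school is credited with a student in
# proportion to the share of the (assumed) four-year window spent there.

# Each record: (student, school, years_at_school_out_of_4, graduated)
records = [
    ("A", "Virtual HS", 1, True),  ("A", "District HS", 3, True),
    ("B", "Virtual HS", 4, False),
    ("C", "Virtual HS", 2, True),  ("C", "District HS", 2, True),
]

weighted_grads, weighted_cohort = {}, {}
for _, school, years, graduated in records:
    weight = years / 4  # share of the four-year window spent at this school
    weighted_cohort[school] = weighted_cohort.get(school, 0) + weight
    weighted_grads[school] = (weighted_grads.get(school, 0)
                              + weight * graduated)

for school in weighted_cohort:
    rate = weighted_grads[school] / weighted_cohort[school]
    print(f"{school}: weighted graduation rate {rate:.0%}")
```

Under this weighting, a school that enrolled a student for one year out of four bears one quarter of the accountability for that student’s outcome, rather than all of it or none of it.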
The framework score should be thought of as analogous to an X-ray. If it shows a potential problem, there should be further diagnostics—the equivalent of a CT scan to see if the X-ray might be giving a false signal due to student mobility.
If further analysis shows that the school does indeed have a high level of student mobility, then it is time for the MRI: Other data should be analyzed to confirm or dismiss what the framework showed. What are the growth and proficiency rates of second- and third-year students? What is the annual rate of credit accumulation for high school students? These and other data should be analyzed to answer the question: How do the students perform during the time they are actually enrolled in the school?
If these analyses still show that the school is in distress, then accountability actions would be warranted. But if they reveal that low scores are in significant part artifacts of high mobility, then that needs to be considered by regulators. The framework should be the starting point, not the final word. Framework measures should not be used as automatic triggers for accountability actions.
This new research provides solid evidence of the need to factor student mobility into accountability systems. This can be done through careful construction of the data that go into the framework, and through additional analysis after the raw framework score has been determined.
We hope that this analysis advances the understanding of student mobility and its effect on measures of student and school performance. We also hope that it illustrates why raw framework scores should not be the sole basis for school accountability actions.
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.