As Ohio lawmakers return to Columbus, a debate is brewing about how to measure the effectiveness of e-schools. At issue is the fact that a large fraction of their students are mobile; our 2012 student mobility report, for example, found that less than half of online students stay for more than a couple of years. Some e-schools assert that it’s unfair to hold them accountable for raising the achievement of children who spend such a brief time under their supervision.
Are they right? How should we think about accountability for e-schools, or other schools with a highly mobile population? (Our mobility study revealed that urban schools also experience high rates of mobility.) Should state policymakers make accommodations for schools with a more transient student body? Or should they stand firm on accountability, regardless of the challenges of serving a mobile population?
To be sure, these are tough issues, but policymakers can look to a few guiding principles.
First, all kids count. Every student deserves an excellent education, regardless of whether she’s brand-new to a school or has been enrolled for several years. Think of it this way: when a fourth-grade student moves from one school to another, shouldn’t the new school immediately bear the responsibility of educating the child? After all, a student is only a fourth grader once.
Second, transient students, particularly those who have experienced multiple transfers, should be regarded as an “at-risk” group. As mentioned above, they’re more likely to have lower achievement than their more stable peers and are more likely to be economically disadvantaged or from minority groups. A non-trivial number of them may have been homeless or in foster care. Accountability policies should ensure that the outcomes of at-risk children, including serially mobile pupils, are reported, not overlooked.
Third, accountability measures should be crafted with student outcomes in mind, not necessarily tailored to fit the interests of particular schools. Of course, accountability must be fair to schools, controlling as best it can for socio-demographic contexts—precisely what value-added methods are designed to do. But at the end of the day, the purpose of accountability isn’t to make schools look good or bad; rather, it’s to ensure that students are making the academic progress needed to succeed in college and career.
Given these principles, let’s look at two accountability policies currently in place that relate to student mobility.
Mid-Year Transfer Students
Under present policy, to count on school report cards, students must complete a “full academic year” (FAY) at a particular school. The Ohio Department of Education’s Where Kids Count workbook defines the term in the following way:
The definition of a “Full Academic Year” is: The student is continuously enrolled in the building or district from October count week through May 10th for grades 3-8 standard assessments or March 19th for all other grades and tests in the current year.
This isn’t an unreasonable policy; when a student moves halfway through the year, it’s not clear to which school the test score should be attributed. But the FAY policy does result in the exclusion of mid-year transfer students from individual school report cards (it also removes students withdrawn due to truancy). Whether you agree or disagree with the policy (my own view is that the state should consider ways to weight mid-year transfers’ test scores), the overarching point is that the state has already made an accommodation for individual schools that receive mid-year transfers by excluding those students’ test scores.
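For readers who want to see how this screen operates, here is a minimal sketch in Python. It is not the Ohio Department of Education’s actual logic: the October count-week start date, the 2013-14 calendar year, and the function and parameter names are assumptions made purely for illustration, and the sketch treats a student’s enrollment as one continuous spell between the two dates.

```python
from datetime import date

# Illustrative sketch only, not ODE's actual implementation. Dates follow the
# definition quoted above; the October count-week start date and the 2013-14
# calendar year are assumptions for illustration.

OCTOBER_COUNT_WEEK_START = date(2013, 10, 7)  # assumed first day of count week
CUTOFF_GRADES_3_8 = date(2014, 5, 10)         # "May 10th" for grades 3-8 standard assessments
CUTOFF_OTHER = date(2014, 3, 19)              # "March 19th" for all other grades and tests


def met_full_academic_year(enrolled_from: date, enrolled_through: date, grade: int) -> bool:
    """Return True if a continuously enrolled student meets the FAY screen."""
    cutoff = CUTOFF_GRADES_3_8 if 3 <= grade <= 8 else CUTOFF_OTHER
    return enrolled_from <= OCTOBER_COUNT_WEEK_START and enrolled_through >= cutoff


# A fourth grader who transfers in after winter break fails the screen, so her
# score is excluded from the receiving school's report card.
print(met_full_academic_year(date(2014, 1, 6), date(2014, 6, 1), grade=4))  # False
print(met_full_academic_year(date(2013, 9, 3), date(2014, 6, 1), grade=4))  # True
```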
First-Year Students
In 2014, lawmakers created an alternative value-added (VA) measure that applies only to high-mobility schools [ORC 3302.03(B)(1)(h)].[1] The measure includes only students who have been tested in a particular school for the two most recent years, thus effectively excluding a school’s first-year pupils from the VA calculation. For example, students who transfer over the summer or those making a structural transfer (e.g., from elementary to middle school) are excluded from the growth computations.
While the alternative measure is not used to determine sanctions, such as automatic closure, it is reported on report cards as an official state accountability measure. By contrast, the conventional VA measure, which generally applies to all elementary and middle schools and is used to determine consequences, does include first-year students who meet the FAY definition described above.
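To make the contrast concrete, here is a minimal sketch in Python of the two inclusion rules as described above. The record fields and function names are hypothetical; this is not how the state actually computes either measure.

```python
# Illustrative sketch only; the record fields are hypothetical, not the state's
# actual data model. It contrasts the two inclusion rules described above.


def counts_toward_conventional_va(student: dict) -> bool:
    # Conventional VA: any student who met the full-academic-year screen,
    # including a first-year student who enrolled over the summer.
    return student["met_fay"]


def counts_toward_alternative_va(student: dict) -> bool:
    # Alternative VA (high-mobility schools): only students tested in this
    # school in both of the two most recent years, so first-year pupils,
    # summer transfers, and structural transfers drop out.
    return student["years_tested_in_this_school"] >= 2


newcomer = {"met_fay": True, "years_tested_in_this_school": 1}
print(counts_toward_conventional_va(newcomer))  # True
print(counts_toward_alternative_va(newcomer))   # False
```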
In its first year of use, the alternative measure generated mixed results. Some high-mobility schools performed better on this measure than on the conventional VA measure, while others did worse. For instance, 27 schools experienced a bump of two or more letter grades, but 20 schools dropped two or more letter grades. More than 40 percent of schools received an identical rating under both VA measures. On balance, it doesn’t appear that high-mobility schools dramatically benefit when their first-year students are excluded.
Comparison of conventional versus alternative value-added ratings, high-mobility schools in Ohio, 2013-14
[[{"fid":"114872","view_mode":"default","fields":{"format":"default"},"type":"media","attributes":{"style":"width: 441px; height: 150px;","class":"media-element file-default"},"link_text":null}]]
Source: Ohio Department of Education, School Report Cards.
Note: In the 2013-14 school year, 2,573 schools received a conventional VA rating.
But regardless of the results, this accountability measure has fatal flaws. First, by removing first-year pupils, it contradicts the principle that schools are accountable for the learning of all of their students. Why shouldn’t a school be held accountable for a year’s worth of growth for a first-year student it educates for a full academic year?
Second, the measure clashes with the notion that schools are accountable for the progress of at-risk children. It does the exact opposite: it focuses on a school’s advantaged students (stable students) to the exclusion of the disadvantaged group (mobile students). Think of it this way: what if the state created an accountability measure that included only a school’s affluent students while excluding low-income pupils? Imagine the outrage! In fact, a stronger accountability metric would be a value-added subgroup measure that tracks the gains of serially mobile students and holds schools accountable for them.
As we’ve seen in recent years, excluding students from accountability measures is fraught with peril, and for good reason: in the end, kids count, not the interests of any particular educational institution. It goes without saying that high-mobility urban and online schools face unique challenges (mobility, especially the mid-year variety, can cause disruption), but the weightier matter is ensuring that every child receives an excellent education. Policies that exclude mobile students don’t meet that standard. As Auditor Dave Yost has said, “Kids count every day, all year long.”
[1] The law defines a “high mobility” school as one with a mobility rate greater than 25 percent. In more technical terms, if the percentage of students enrolled in a school for less than a full academic year exceeds 25 percent, the school qualifies as “high mobility.”