House Bill 420: Opting out of accountability
Ohio lawmakers recently proposed a bill (HB 420) that would remove students who opt out of standardized tests from the calculation of certain school and district accountability measures. Representative Kristina Roegner (R-Hudson), who introduced the bill, declared that “if [a student is] not going to take the test, in no way should the school be penalized for it.” Students who fail to take state exams (for any reason, not just opting out) count against two of the ten school report card measures: the performance index score and the K–3 literacy measure. Non-participating students receive zeroes, which pull down the overall score on those components.
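To see the arithmetic behind that drag, consider a toy performance-index calculation. The weights below are illustrative stand-ins (Ohio’s official weights are set in state rules), but the mechanics are the same: every non-participant contributes a zero to the average, exactly as if she had received the lowest possible score.

```python
# Toy performance-index calculation. Weights are illustrative, not
# Ohio's official values; non-participants are weighted at zero.
WEIGHTS = {
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
    "untested": 0.0,  # opt-outs and other non-participants
}

def performance_index(counts):
    """Weighted points per student, scaled to a 120-point maximum."""
    students = sum(counts.values())
    points = sum(WEIGHTS[level] * n for level, n in counts.items())
    return 100 * points / students

full_participation = {"advanced": 20, "accelerated": 20, "proficient": 40,
                      "basic": 10, "limited": 10, "untested": 0}
ten_opt_outs = dict(full_participation, proficient=30, untested=10)

print(performance_index(full_participation))  # 95.0
print(performance_index(ten_opt_outs))        # 85.0 -- ten zeroes cost 10 points
```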
On first reading, Roegner’s sentiments seem obvious: Why should schools be held responsible for students who decline even to sit for the exams? Is it the job of schools to convince students (or their parents, the more likely objectors) to show up on exam day? While compulsory schooling laws do require students to attend school, there is nothing especially enforceable about exam day in particular. Ohio does not prohibit opting out. Nor does it explicitly allow it, as some states do (e.g., Pennsylvania allows a religious objection to testing; Utah and California allow it for any reason).
But this bill is short-sighted. By giving schools a free pass for non-participants, HB 420 would open the door for gaming the accountability system in ways that could hurt at-risk students the most. Senate Education Committee Chair Peggy Lehner and my colleague Chad Aldis have already identified this as the bill’s most glaring problem. If non-participants simply don’t count, schools (or teachers) could easily suggest to low-performing students that they take the day off. If and when that happens, the effect of HB 420 would be to let schools off the hook for educating their toughest students. Vital information about true performance would be concealed from parents and voters—and, indeed, from teachers as well. As my colleague Robert Pondiscio has argued, “Those most likely to be negatively affected by the opt-out impulse are low-income children of color, for whom testing has been a catalyst for attention and mostly positive change.”
If this were not reason enough for legislators to shelve the bill, they should consider another likely reality: HB 420 will make Ohio’s opt-out problem even worse (and therefore make school report card data less reliable). For every poorly performing student counseled out and scrubbed from her school’s report card, there’s at least one more student—likely a higher-performing one—whose parents question the value of testing. This is undoubtedly one of the reasons that the Ohio Department of Education (ODE) has encouraged schools to caution parents regarding the consequences of opting out, including its adverse impact on school and teacher ratings. This is a real motivator for conscientious, community-minded parents—those who may not see the value of testing for their own child but want to avoid lowering the reputation of their child’s school or teacher.
By removing the consequences of opting out, HB 420 would rob ODE’s participation lever of its oomph. The likely result would be more parents keeping their children home on testing day. An increase in the number of opt-outs could eventually cause Ohio to be slapped with a federal “corrective action plan.” The recently passed Every Student Succeeds Act (ESSA), which takes effect in the 2017–18 school year, is designed to return education decisions to the states. But it still requires that 95 percent of students (and student subgroups) participate in state assessments. This isn’t a hypothetical situation, either: Thirteen states are already being compelled by the U.S. Department of Education to develop corrective action plans for unsatisfactory test participation rates. (For the 2015–16 and 2016–17 school years, NCLB rules are still in place, which means that Ohio could actually lose Title I money for falling below 95 percent participation.) Consider, then, that just when Ohio should be charting its own course to improve student achievement, more opt-outs could place the state right back under federal scrutiny.
While we won’t know Ohio’s 2014–15 participation rates until next month, it’s clear that the Department of Education is taking opt-outs seriously, telling states to “ensure that all students will participate in statewide assessments” (emphasis added). Example actions suggested to states include lowering a district’s or school’s rating, counting non-participants as non-proficient, and even withholding state aid from districts with low participation rates. From the sound of it, the department would not consider the wholesale removal of penalties for non-participation as a satisfactory way to address opt-outs.
In sum, any measure that deters students from taking the state assessment is destined to create more grief than solace for Ohio. It will mean turning a blind eye to districts that push aside poor and low-achieving students—essentially sanctioning what Columbus City Schools did a few years ago when they scrubbed away student test scores. It will yield an accountability system that is less accurate and robust. And it will give an unintended boost to the opt-out movement in Ohio, potentially thrusting the Buckeye State into a federal corrective action plan at a time when the state should be going its own way.
Unsafe harbor: How to ensure Ohio’s testing transitions don’t sink the state’s voucher students
When Governor Kasich signed the state budget last June, myriad education changes became law. One of the most talked-about was the extension of a policy known as “safe harbor.” This was instituted to protect students, teachers, and schools from sanctions brought about by the state accountability system during Ohio’s transition to a new and more rigorous state assessment (its third in three years). The provisions are relatively simple: Test scores from 2014–15, 2015–16, and 2016–17 cannot be used in student promotion or course credit decisions, nor can they be used for teacher evaluations or employment decisions. Schools aren’t assigned an overall grade during the safe harbor, and report cards can’t be considered when determining “sanctions or penalties” for schools.
One of the accountability measures impacted by safe harbor is the EdChoice Scholarship program. EdChoice, Ohio’s largest voucher program, affords students otherwise stuck in the state’s lowest-performing schools the opportunity to attend private schools at public expense.[1] Safe harbor, however, mandates that schools on the EdChoice eligibility list as of 2014–15 remain on the list (even if they improve) and schools not on the list stay off (even if their performance declines). We immediately criticized this change, noting how frozen school eligibility wrongfully denies students the educational choices they deserve. We urged policymakers to clarify eligibility to ensure that students attending Ohio’s lowest-performing schools retained the opportunity to attend different ones.
Last month, the Ohio Department of Education (ODE) released updated guidance on safe harbor. Buried within it was an attempt to clarify eligibility—but not in a good way: “Safe harbor means that no new public school buildings will be included on the program’s eligibility list until the 2019–2020 school year. However, public school buildings could be removed from the eligibility list as indicated in ORC 3310:03(B)(1). Results from the tests given in school years 2014–15, 2015–16, and 2016–17 will not be used to identify new public schools for eligibility.”
Translation? While no school can be added to the voucher eligibility list on the basis of state report card ratings—no matter how dismal its performance—schools whose performance improves can apparently be removed from the list. The mechanics of how this will be decided are murky, but the fact that ODE says it can happen is a problem; the original eligibility freeze had its flaws, but this new guideline is both bad for students and bad public policy. Safe harbor is premised on the legitimate theory that schools and educators need time to adapt to new state tests. But when implemented ODE’s way, it will block students stuck in a consistently low-performing school that wasn’t on the list in 2013–14 from obtaining a voucher until at least 2019–20. That’s six more years, nearly half of a student’s K–12 career. Yet this new guidance uses the same “unfamiliar” assessment results as evidence that schools have improved—and thus can be removed from the eligibility list.
The end result is that adults involved in Ohio’s worst schools are held harmless, but the kids in those schools are left in an environment that doesn’t help them learn. This really needs to be fixed ASAP. Here are three possible solutions that policy makers could pursue:
Go back to the original safe harbor agreement
Despite its warts, the original agreement regarding vouchers and safe harbor—that the EdChoice list be frozen—is better for kids than what’s in ODE’s new guidelines. It would still prevent additional students from accessing vouchers, unfortunately. But at least it would not end access for currently eligible students based upon test results that critics contend aren’t reliable enough to show that a school is getting worse—but are just dandy for showing that it’s somehow gotten better.
Remove voucher eligibility from safe harbor provisions
A better option would be to remove voucher eligibility altogether from safe harbor provisions. This would ensure that students stuck in failing schools don’t lose out on educational choices just because of state-level policy changes. Critics will argue that exempting voucher eligibility from safe harbor provisions would defeat the intent of safe harbor, which is to protect schools and districts from sanctions and penalties during a time of transition. The problem with that argument is that vouchers aren’t primarily a sanction on schools. The EdChoice Scholarship is an opportunity extended to students when the local school lets them down. Applying safe harbor to EdChoice puts the interest of schools and districts above the interest of students—which is completely upside-down and wrong-headed.
Change voucher eligibility altogether
In light of the complexity that has always attended Ohio’s voucher programs for students in failing schools, maybe it’s time to move toward changing EdChoice into an income-based eligibility program. All but two states with vouchers base their programs on family income or children’s special needs rather than the achievement of the local public school. Income-based vouchers help the students most in need of options—not only those who are stuck in weak schools, but also those struggling in any school that doesn’t meet their needs. Poor families can seldom pay private school tuition or move to different neighborhoods. By ending the “failing schools” approach to voucher eligibility and transitioning to an income-based version instead, Ohio could extend educational opportunity to thousands more students, empower low-income families to make the best decisions for their children, remove the overtly negative “failing schools” nomenclature from regular use, and create a program that’s simpler to manage and not subject to constant fluctuations in academic standards, state tests, and state accountability systems.
It sounds like a major shift, but it would require only minor changes. The state could maintain the EdChoice Scholarship cap at sixty thousand seats; there’s plenty of capacity. For those fearful of a mass exodus from public schools to private schools, history has already shown that scenario to be highly unlikely. The existing income-based EdChoice program is already statewide in three grades and growing every year, and it hasn’t resulted in a massive outflow of students from public schools. Most states already focus on income-based vouchers. Maybe it’s time for Ohio to do the same.
***
While there are no simple answers, doing nothing is a heinous option. ODE’s current interpretation of the safe harbor language vis-à-vis EdChoice isn’t good for kids, and it ill serves a worthy program that has changed a lot of lives. Whether policy makers opt to freeze eligibility completely, divorce it from safe harbor, or transition to income-based vouchers exclusively, they must choose to do something. The right of thousands of students to exercise a choice that affluent families take for granted depends on it.
[1] For a public school to be added to the EdChoice eligibility list, a school must have poor state assessment results in two of the past three years. EdChoice also permits low-income students in grades K–3 (and an additional grade level each year) to access a voucher regardless of their assigned school. For more details on eligibility, see section 3310.03 of the Ohio Revised Code.
Economic Gains for U.S. States from Educational Reform
It’s often argued that improving education will improve the nation’s economy. A new study from the National Bureau of Economic Research not only affirms this argument but also demonstrates just how big the economic effects of school improvement could be.
From the start, it’s clear that this paper differs from its predecessors. Previous studies examined human capital and its effect on states’ economic development by measuring school attainment (high school graduation). This one points out that attainment is an imperfect yardstick—it incorrectly assumes that increased levels of schooling automatically suggest increased levels of knowledge and skills. A better way to determine the relationship between education and economic value is to measure a different outcome: achievement. Since “no direct measures of cognitive skills for the labor force” exist, the authors craft their own. They start by constructing an average test score for each state using NAEP, then adjust the test scores for different types of migration (interstate and international among them) in order to offset the high mobility of the American population.
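The paper’s construction is considerably more elaborate, but the core move can be sketched in a few lines: standardize each state’s NAEP score, then average those scores over a state’s current workforce according to where each worker was schooled. Everything below is hypothetical data for illustration, not the authors’ actual inputs.

```python
# Hypothetical standardized NAEP scores, indexed by place of schooling.
# "INTL" is a stand-in for an adjusted score assigned to immigrants.
skill_by_origin = {"OH": 0.10, "MN": 0.45, "MS": -0.40, "INTL": -0.25}

# Made-up shares of Ohio's current workforce by where they were educated.
ohio_workforce = {"OH": 0.70, "MN": 0.05, "MS": 0.05, "INTL": 0.20}

def migration_adjusted_skill(origin_shares, scores):
    """Weight each origin's score by its share of the state's workforce,
    so movers carry the skills of the state that schooled them."""
    return sum(share * scores[o] for o, share in origin_shares.items())

print(round(migration_adjusted_skill(ohio_workforce, skill_by_origin), 4))
# 0.70*0.10 + 0.05*0.45 + 0.05*(-0.40) + 0.20*(-0.25) = 0.0225
```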
Hanushek, who has published multiple studies linking economic activity with enhanced educational output, offers several scenarios in his latest report. If every state improved to the level of Minnesota, the top-performing state of the past two decades, the U.S. economy would grow by $76 trillion by 2095 (the end of the projection period). Current low performers would see gains of more than seven times their current GDP. Needless to say, this level of growth is incredibly ambitious—and perhaps not as feasible as other possibilities.
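Figures in the tens of trillions sound fantastical until you remember that growth compounds over an eighty-year projection window. The sketch below is a back-of-the-envelope illustration with made-up parameters (baseline GDP, growth rates, discount rate), not the paper’s actual model, which phases gains in as better-educated cohorts enter the workforce; it simply shows how a small permanent bump in annual growth snowballs into a present value of this order of magnitude.

```python
# Back-of-the-envelope: present value of GDP gains from a permanently
# higher growth rate. All parameters are illustrative assumptions.
def discounted_gain(base_gdp, growth, boost, years, discount=0.03):
    """Sum the discounted annual gaps between boosted and baseline GDP paths."""
    gain, baseline, boosted = 0.0, base_gdp, base_gdp
    for t in range(1, years + 1):
        baseline *= 1 + growth
        boosted *= 1 + growth + boost
        gain += (boosted - baseline) / (1 + discount) ** t
    return gain

# ~$18T economy, 1.5% baseline growth, +0.3 points from reform, 80 years:
print(f"~${discounted_gain(18.0, 0.015, 0.003, 80):.0f} trillion")  # ~$88 trillion
```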
A second scenario calculates that bringing all of the lowest-performing students up to the basic level of achievement (as defined by NAEP) “would have a noticeable impact on the distribution of earnings, and ultimately income, in the U.S.” That impact would total $32 trillion in economic growth by 2095. The drawback of this scenario, though, is that it assumes no “spillovers in quality” (score improvements) for students already scoring at basic or above.
The study also examines the effect of states improving to match the best state in their respective region, the improvement of all states’ scores by one-quarter of a standard deviation, and the improvement of all states to the levels of neighboring Canada or high-achieving Finland. Overall, the projected economic benefits of educational improvement are stunning. The authors predict that these estimated gains would, on average, pay for all K–12 education in states and yield extra returns. That sounds like one more reason why improving student achievement is so important (as if we needed another one).
SOURCE: Eric A. Hanushek, Jens Ruhose, and Ludger Woessmann, “Economic Gains for U.S. States from Educational Reform,” National Bureau of Economic Research (December 2015).
Coming soon - Quality in Adversity: Lessons from Ohio's best charter schools
Fordham Ohio’s latest report will be released on Wednesday, January 27, and will detail the results of a survey of leaders of some of the state’s highest-performing charter schools.
What do those leaders think of Ohio’s overall support for charter schools, closing failing charters, and criticism of the sector? These questions and more will be answered in this important new report.
Quality in Adversity: Lessons from Ohio’s best charter schools will be available Wednesday, January 27, by clicking here.
Grading Ohio’s school rating system
Late in 2015, Congress passed a new federal education law—the Every Student Succeeds Act (ESSA)—which replaces the outdated No Child Left Behind Act of 2001 (NCLB). The new legislation turns over considerably greater authority to states, which will now have much more flexibility in the design and implementation of accountability systems. At last, good riddance to NCLB’s alphabet soup of policies like “adequate yearly progress” (AYP) and “highly qualified teachers” (HQT)—and yes, the absurd “100 percent proficient by 2014” mandate. Adios, too, to “waivers” that added new restrictions!
But now the question is whether states can do any better. As Ohio legislators contemplate a redesign of school accountability for the Buckeye State, it would first be useful to review our current system. This can help us better understand which elements should be kept and built upon, modified, or scrapped—and which areas warrant greater attention if policy makers are going to improve schools. Since Ohio has an A–F school rating system, it seems fitting to rate the present system’s various elements on an A–F scale. Some will disagree with my ratings—after all, report cards are something of an art—so send along your thoughts or post a comment.
NB: In this review, I primarily address school ratings (for more background on school report cards, see here or here). In a future piece, I’ll look at the issues around interventions for “low-performing” schools, another realm where ESSA gives states more policymaking discretion. (I also don’t review in detail issues of standardized testing, instead assuming a statewide exam that’s consistently administered and enjoys full participation. See here for my general views on testing; for my colleague’s wholly persuasive opinion on the “opt-out” situation, see here.)
A
A–F grading: Beginning in 2012–13, Ohio has reported school performance (and subcomponents thereof) using a reader-friendly A–F scale. With respect to public transparency around results, this policy has been a major step forward. (Ohio’s previous rating system used murky descriptors like “effective” or “continuous improvement.”) The Buckeye State should stay the course with A–F grading.
Student growth measures (a.k.a. “value added”): Another significant advance made by Ohio is the implementation of a value-added measure for gauging student growth. This measure reveals a school’s impact apart from pupil demographics and other non-school factors, which is especially important for schools that educate students arriving behind grade level. Wisely, the Buckeye State has made value added a key component of school report cards, and it is now used to guide state interventions (e.g., default charter closure and identifying public schools whose students become eligible for vouchers). The drawback is that sophisticated data analysis is required to produce these results, thus limiting transparency. As value added becomes a more prominent feature in school accountability, it’ll be critical that the public gain a greater understanding of what it means and how it’s calculated.
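Ohio’s published value-added results come from a proprietary model that is far more sophisticated than anything that fits in a few lines. Still, a crude sketch conveys the intuition: predict each student’s current score from prior achievement, then treat a school’s average “surprise” as its value added. The data and single-predictor regression below are purely illustrative, not the state’s methodology.

```python
import numpy as np

# Crude illustration of the value-added idea -- NOT Ohio's actual model.
rng = np.random.default_rng(0)
n = 300
school = rng.integers(0, 3, size=n)            # three hypothetical schools
prior = rng.normal(0.0, 1.0, size=n)           # prior-year scores
true_effect = np.array([0.25, 0.00, -0.25])    # unknown school impacts
current = 0.8 * prior + true_effect[school] + rng.normal(0.0, 0.5, size=n)

# Predict current scores from prior scores, then average each school's
# residual: growth beyond what prior achievement alone would predict.
slope, intercept = np.polyfit(prior, current, 1)
surprise = current - (slope * prior + intercept)
for s in range(3):
    print(f"school {s}: value added ~ {surprise[school == s].mean():+.2f}")
```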
B
Subgroup value added: In 2012–13, Ohio implemented value-added grades for three student subgroups: low achievers (lowest 20 percent statewide), students with disabilities, and gifted pupils. This represents a big improvement: The public can now gauge how schools are educating students who might be ignored when the focus is on average gains. I have two reasons for withholding an A grade here. First, the measure applies to only three subgroups, and many more could usefully be included, among them low-income students, English language learners (perhaps meeting ESSA’s new requirements for ELL accountability), and racial/ethnic subgroups. Second, the subgroup results aren’t used to determine consequences. Moving forward, we might consider using subgroup value added for consequential accountability. For example, a school might be obligated to offer additional education options to its high-ability students if it fails on gifted value added for three consecutive years.
No overall rating: Schools haven’t seen an “overall” rating since 2011–12; instead, they’ve been graded on a handful of report card categories. The pause has actually worked out pretty well, as it has forced us to acknowledge that different indicators mean different things and cannot be easily lumped together. In its way, this has promoted more equal consideration of growth and absolute achievement. The tradeoff, of course, for not having a single aggregate is reduced transparency and public understanding. What to do? One option is to solidify the current policy of no overall grades (they are slated to return in 2017–18, when “safe harbor” ends). In my view, this is an appealing option if allowed under ESSA. (The law implies an overall rating but doesn’t explicitly mandate it.) However, if policy makers insist on overall ratings—or if they are required—the rating formula shouldn’t simply punish low-income schools for lagging achievement; it should also grant them substantial credit when their students make large gains. One possibility, raised by my colleague Mike Petrilli, is to use a sliding scale when determining the weights on achievement and growth for different types of schools.
C
Science and social studies: As my Fordham colleagues have repeatedly insisted, science and social studies are critical elements of good schooling with a balanced curriculum. The problem, however, is that accountability systems have been designed almost exclusively around math and English language arts, leading schools to focus narrowly on those subjects. One remedy might be to strike a better balance in the state’s accountability system across the four “core” content areas. If feasible, one potential avenue for improvement might be a value-added measure for science and social studies. (Ohio already reports separate value-added results for math and reading, though they are not graded components of the school rating system.) Another possibility is to assign school grades in math, reading, social studies, and science alike, signaling the equal importance of these subjects.
High school accountability: The present high school accountability system relies mainly on proficiency and graduation rates, ACT/SAT scores, and AP data. That’s fine as far as it goes—these are important metrics, especially the remediation-free results based on ACT or SAT scores. But let me poke two holes in these measures. For one thing, I agree with Fordham’s Robert Pondiscio, who recently dubbed graduation rates “phony” statistics. Getting students to the high school finish line, sometimes through questionable means like “credit recovery,” should no longer be an outsized focus when evaluating high school quality. Whether young people are “college- and career-ready” when they graduate from high school must take priority. Additionally, several of these measures are closely correlated with students’ socioeconomic characteristics. To provide a clearer view of school performance, policy makers must follow through and implement a value-added measure for the high school grades (this measure is expected in 2017–18, if not sooner).
Student achievement measures: A word must be said about Ohio’s longstanding achievement measures for schools, namely “performance index” and “indicators met.” They’ve got some merit as report card measures: We absolutely need evidence of how students in a school are performing in addition to evidence of the school’s impact, which is the purpose of value added. As a measure that gives schools more credit when students achieve at higher levels (akin to a weighted GPA), the performance index is the superior of the two achievement-based measures. (The “indicators met” rating is a pass/fail measure against a predetermined statewide proficiency rate.) In my view, state authorities should pick just one of the two measures for school rating purposes, preferably the performance index. Because the two are highly correlated, the dual ratings are usually a double whammy for lower-income schools; for high-income schools, they are an unearned bonus. While they’re at it, policy makers should make sure the raw proficiency data by school and subgroup remain available for public review and analysis, as they now are.
D
Annual measurable objectives (AMOs): Never heard of AMOs? Good, you’re probably better for it. Since 2012–13, AMOs have been used as a replacement for the AYP subgroup requirements under the ESEA waiver program. In short, it’s methodological gobbledygook that tries to gauge how well schools are educating student subgroups. It is also premised on proficiency and graduation rates—which, as has been stated about a zillion times, are contaminated by their correlation with socioeconomic characteristics. That’s doubly problematic because the AMO subgroups include racial minorities and low-income students. The measure also yields odd results: Some of Ohio’s very best high-poverty schools are dinged for failing AMOs—presumably guilty of widening the achievement gap—even as value-added measures reveal their students to be making big learning gains. Now that Ohio is freed from NCLB and former Secretary Duncan’s waivers, we must find a better way to gauge subgroup progress (see above, subgroup value added). Step one: Scrap AMOs.
K–3 measures: The Kasich administration and the legislature have rightly emphasized early childhood literacy. One piece of this initiative has been the implementation of the K–3 literacy measure—a step in the right direction. But it’s also a peculiar measure, accounting only for students who are “not on track” (as deemed by schools) on diagnostic tests in reading. Oddly enough, 157 of Ohio’s 609 districts didn’t receive a K–3 literacy rating in 2014–15. (Are we to infer that one-quarter of districts don’t have any struggling young readers?) To be sure, the K–3 literacy measure is in its infancy. But if Ohio is to get serious about early literacy for all students, accountability will need to be improved in the early elementary grades. Should the K–3 literacy measure be redesigned? Or could the state begin work on a measure that links kindergarten readiness with third-grade reading results?
F
Low proficiency standards: Unfortunately, I must end on a sour note regarding something utterly fundamental: Ohio’s woeful cut score (in more technical speak, its “performance standard”), set for the purposes of gauging student proficiency. In 2014–15, state authorities set an anemic standard—apparently one of the lowest in the nation—and one that doesn’t begin to align with Ohio’s NAEP results. From the looks of it, that standard will remain low as Ohio transitions to ODE/AIR-designed exams this coming spring. We’ve said it before, and we’ll say it again: Buckeye policy makers cannot continue to mislead parents and taxpayers about how many students are truly on track for success in college or career. Let’s raise the bar for student proficiency.
There you have it: the good, the bad, and the ugly of Ohio’s present school accountability system. With the new powers that the state possesses under ESSA, it’ll have more room to maneuver in the accountability space. We should use this exceptionally rare and long-overdue opportunity to make things better, certainly not to undercut robust and objective outcome measures. A strong accountability system is critical as Ohio seeks to improve its schools and lift achievement for all.
Public private districts and open enrollment
A few years ago, a couple of my Fordham colleagues coined the phrase “public private” schools to describe schools that educate virtually no low-income students. In that report, they advanced a simple notion: Though “public” in name, high-wealth schools are, in practice, pretty much equivalent to private ones. Families wanting to enroll their children in such schools effectively pay “tuition” through higher real-estate taxes and/or a fortune spent on housing. Low-income families are functionally excluded from sending their children to these schools.
But when an affluent district enacts an open enrollment policy, students outside its jurisdiction can attend. That suggests the district is acting more in keeping with its public nature than its private one. Since 1989, Ohio has permitted such inter-district open enrollment, and today most (though not all) districts participate. For the 2015–16 year, 81 percent of districts allowed some degree of open enrollment.[1]
So what about Ohio’s public private school districts? Do any of them open their doors for all comers? Or are they adhering more closely to their “private” identity by denying non-resident students the opportunity to enroll? Let’s take a look at the data.
When my colleagues examined public private schools in 2010, they identified them based on whether they enrolled less than 5 percent low-income students (i.e., those eligible for free and reduced-price lunch, or FRPL). Applying that threshold to Ohio districts yields just five with less than 5 percent FRPL. But let’s set the bar a little higher—at, say, 10 percent FRPL. This procedure yields twenty-five of Ohio’s 609 school districts (about 4 percent; the fraction of schools nationally identified as public private was 3 percent in the 2010 analysis).
Perhaps another way of identifying a public private district is to look at how it raises revenue. If a district generates a large fraction of its revenue locally, one could almost view its revenue structure as dependent on “private” contributions. For example, the benefits of a local school tax accrue more directly to the taxpayer, especially compared to a state tax, which funds a variety of public programs. Setting the threshold at 80 percent or more local revenue[2] yields thirteen public private school districts.
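Mechanically, both screens are simple filters over district-level data. Here’s a minimal sketch with invented district records (the real analysis uses ODE’s Cupp Report figures):

```python
# Hypothetical district records: (name, % FRPL-eligible, % local revenue).
districts = [
    ("Leafy Heights", 4.0, 85.0),
    ("Mill Valley", 9.5, 62.0),
    ("River City", 48.0, 38.0),
]

low_poverty = {name for name, frpl, _ in districts if frpl < 10.0}
locally_funded = {name for name, _, local in districts if local >= 80.0}

print(sorted(low_poverty | locally_funded))  # "public private" by either screen
print(sorted(low_poverty & locally_funded))  # qualifies under both screens
```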
Table 1: Public private school districts in Ohio, 2013–14
[[{"fid":"115448","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"height: 414px; width: 400px;","class":"media-element file-default"}}]]
Source: Ohio Department of Education, Cupp Report (FY14) and Open Enrollment Listing (2015–16)
As Table 1 shows, we identify a total of thirty-two public private districts in Ohio, of which six qualify under both thresholds. Do any of them allow inter-district open enrollment? The answer is yes, but not many. Kudos to the nine that do: Minster, New Bremen, Chagrin Falls, Fairport Harbor, Marion, Danbury, Miller City-New Cleveland, Russia, and Mason school districts. While it may be true that these districts don’t actually enroll many low-income students from outside their jurisdictions, at least they’re opening an opportunity for disadvantaged children in their area to attend. But shame on the public private districts that refuse to open their doors to students outside of their boundaries. They should change their open-enrollment policies, especially if there are available seats. Or maybe it’s time to call these districts what they are: Private.
[1] The 2010 Fordham report conducted its analysis at the school level; however, since open enrollment is a district-level policy, I focus on the district level.
[2] Local revenue includes local tax and non-tax revenue. One district is excluded: Marysville Exempted Village, which reportedly generates 100 percent of its revenue locally. That figure, however, almost certainly results from a data-reporting issue.