Why vouchers are struggling to boost student achievement
By David Griffith
Advocates of school choice breathed a sigh of relief last month when a pair of new studies showed that voucher programs in Indiana and Louisiana were performing better than prior research had suggested. But they shouldn’t get too comfortable.
The news that most students recovered the ground they lost when they first enrolled in these programs after three or four years is obviously welcome. But it is unlikely to satisfy critics, who will rightly note that students who returned to the public schools after a year or two lost significant ground, and that so far no statewide voucher program has shown significant benefits for the average participant. To the contrary, the last four voucher programs to be rigorously evaluated—including those in Ohio and Washington, D.C., as well as Indiana and Louisiana—have all shown negative or decidedly mixed effects.
Some informed observers have tried to explain vouchers’ struggles by appealing to the improved performance of public schools or the dilution of so-called “peer effects,” while others have pointed to methodological problems with the studies. Perhaps the disappointing performance of Ohio’s voucher program is due to the tests, which are high stakes for public schools but not for private schools. Perhaps the heavy-handed regulations associated with Louisiana’s program are discouraging high-performing schools from participating. Perhaps voucher performance in D.C., where the results reflect just one year of participation, will improve over time, as it has in Indiana and Ohio.
All of these explanations—or excuses, depending on your point of view—have some merit. But Occam’s Razor suggests that the most likely reason for vouchers’ mediocre performance is the simplest: Some of the private schools in these programs just aren’t very good, at least when it comes to boosting student achievement. And the free market alone is unlikely to address that challenge.
On this point, the charter school experience is instructive—or at least it ought to be. Yes, there are a few states where charters have achieved strong results despite a weak framework for intervening in low-performing schools, or a dearth of quality authorizers, or limited parental supports. There is an exception to every rule. But what about Nevada and Ohio, where lax oversight of charters has been a disaster for kids? Is there a state in the union with strong oversight, robust market supports, and a low-performing charter sector? If not, why does the burden of proof fall on those seeking to avoid another Nevada?
Overall, the charter experience has actually been a stunning validation of basic market theory, which is why it’s so frustrating to see that theory ignored when it comes to vouchers. To see what I mean, consider Table 1, wherein I’ve attempted to summarize how the market failures one would expect to find have manifested themselves in the education market (as well as some potential solutions).
Table 1. Failures of the education market
| Market Failure | Examples | Solutions |
| --- | --- | --- |
| Monopoly power | Low-performing rural charters | - Set a high bar for charter approval in rural areas - Focus new growth initiatives on major cities |
| Barriers to entry | Lack of facilities; inherent challenges of starting a school | - Better facilities funding - Limit the regulatory burden |
| Missing markets | | - Eliminate arbitrary enrollment caps - Incentivize coverage (e.g., via facilities access) |
| Incomplete markets | Underfunded charter schools and voucher programs | - Provide equitable funding |
| Lack of property rights | Parents don’t “own” their kids’ education dollars | |
| Factor immobility | Lack of choice-friendly transportation for kids | - Give older kids a student transit pass |
| Transaction costs | Applying to multiple schools is a hassle | - Establish a common application |
| Information failure | Parents lack information about school quality | - Provide parents with clear information by publicly rating schools - Close bad schools or cut off their supply of public dollars |
| Irrational consumers | Parents unable or unwilling to evaluate school quality | |
| Information asymmetry | | |
| Negative externalities | Potentially adverse sorting by race, class, etc. | |
| Inequality | Rich parents better equipped to navigate the system | |
| Government failure | Over- or under-regulation | - Altruistic policymakers - Informed citizenry |
Four things should be obvious from this table.
First, basic economic theory explains most of the problems facing the school choice movement, so no economically literate person should expect the education market to self-actualize or self-regulate.
Second, these problems have fairly straightforward solutions (with the possible exception of transportation), so there’s no reason to conclude that the market can’t work just because it doesn’t always.
Third, it’s not just about regulation. Many of the solutions—from a choice-friendly transportation system to a common application to effective consumer protection—are beyond the capacity of civil society to implement, so government has a role to play in ensuring the market functions properly.
Finally, we’re much closer to solving these problems for charters than we are for vouchers. For example, no citywide common application includes private schools, so it’s easy to see how parents might fail to maximize their child’s chances of enrolling in a good school. Similarly, most parents of voucher recipients still operate with extremely limited information, even in states where participating schools are graded like public schools, because these grades are almost impossible to find. And, of course, vouchers are even more underfunded than charters in most states, which necessarily limits their capacity.
Most importantly, many voucher programs—and all tax credit scholarship programs, which enroll even more students nationally—still lack the sort of accountability mechanisms that have led to sustained improvements in the charter sector over time. In particular, most programs still lack a mechanism for cutting off funding for consistently low-performing schools, and those that have such mechanisms, like Indiana and Louisiana, are only beginning to separate the wheat from the chaff.
If you think about it, the case for results-based accountability is actually even stronger for voucher programs than it is for charter schools. After all, at least in theory, charters must convince authorizers that they know what they’re doing before they can open, whereas there are almost no barriers to entry for existing private schools seeking public funds—especially for programs that lack testing requirements. Furthermore, the academic and financial costs of results-based accountability are much lower for vouchers than they are for charters because participating schools don’t have to close and their students don’t have to switch schools. These same factors may also make voucher accountability more politically feasible than charter accountability (as well as more practical in areas that are too rural for closure) and thus more effective.
Much of the resistance to stronger accountability for private school choice reflects an aversion to testing, which is perfectly understandable but doesn’t answer the question of what to do when a school isn’t serving kids well. I don’t have much to add to that debate, so I will simply observe that, in my view, research has demonstrated a link between test scores and long-term outcomes, and that it remains impossible to gain admission to most top-flight universities without excelling on some sort of test.
In the long run, there’s no reason that vouchers can’t achieve what charters have—especially in more urban areas. But as Keynes famously observed, “in the long run we’re all dead.” According to the multi-state research performed by CREDO, which is the closest thing we have to a comprehensive report card for the charter sector, it took twenty-one years for charters to match the performance of traditional public schools. If we don’t get serious about quality control, it could easily be another decade before statewide voucher programs manage to do the same.
Thanks to the most recent batch of voucher studies, some members of the school choice community are beginning to admit that we have a problem. But there is work to be done, and it won’t happen unless and until we can agree on what it entails. Too many people who’ve seen the evolution of the charter movement—and should therefore know better—are convinced that “letting a thousand flowers bloom” will work better for vouchers than it did for charters. But it won’t, because the same market dynamics are at work in both sectors.
To be clear, this is not a call to regulate on inputs. But it is a call for intellectual honesty and consistency from those who see choice primarily as a means to an end and have used the gains students are making at charter schools to justify their expansion. Nine times out of ten, a regulated market is preferable to both an unregulated market and a government monopoly, which is why it’s high time for the school choice movement to resolve its cognitive dissonance by placing “public” and “private” choice in the same intellectual framework.
Only the mechanism is different.
Once upon a time there was a new Secretary of Education who was charged with providing states flexibility to meet their education goals through the Every Student Succeeds Act. Upon review of the first plan, she declared, “These long-term goals are not ambitious enough,” so she read the next plan. “These goals may be too ambitious,” she stated about the second. Then she read the third plan. “These goals make sense,” she happily declared, and recommended approval without delay.
If only this story were merely a fairy tale. Unfortunately, our reality is almost as strange as fiction. As Michael Petrilli eloquently argued in his recent commentary, Secretary DeVos and her team stumbled out of the gate in their initial review of the Delaware, Nevada, and New Mexico state plans. Each state received feedback on its submitted ESSA plan identifying areas to improve—as is to be expected—but the responses to the states’ proposed long-term goals were anything but expected.
The Department suggested that Delaware’s goal to decrease by 50 percent the number of non-proficient students in each subgroup within twelve years did not meet the statute’s threshold for “ambitious.” Meanwhile, peer reviewers found Nevada’s goals to be “very ambitious,” as it sought to be the “fastest growing state” in the nation in student achievement. Among its goals: an annual 5 percent reduction in non-proficient students over six years, or roughly half the improvement Delaware proposed, in half the time. New Mexico also received praise for the ambition of its goals. The state plans to improve from 27.8 percent to 64.9 percent proficient over six years—or, stated differently, to reduce by half the number of non-proficient students in that timeline.
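To make the comparison concrete, here is a rough back-of-the-envelope calculation. The compounding assumption is mine (states might instead intend simple linear reductions), but it illustrates how the three targets line up:

```python
def total_reduction(annual_rate, years):
    """Cumulative share of non-proficient students eliminated,
    assuming the same percentage reduction compounds each year.
    (Simplifying assumption for illustration only.)"""
    return 1 - (1 - annual_rate) ** years

# Nevada: a 5 percent annual reduction sustained for six years
nevada = total_reduction(0.05, 6)          # ~26.5 percent overall

# Delaware: a 50 percent total reduction over twelve years,
# solved for the implied annual rate
delaware_annual = 1 - 0.5 ** (1 / 12)      # ~5.6 percent per year

# New Mexico: 27.8 -> 64.9 percent proficient over six years,
# expressed as the share of non-proficient students eliminated
new_mexico = (64.9 - 27.8) / (100 - 27.8)  # ~51.4 percent overall
```

On this arithmetic, Delaware’s implied annual pace is actually slightly faster than Nevada’s, which underscores how inconsistent the reviewers’ judgments were.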
Regardless of whether you agree or disagree with the reviewers’ evaluation, the problem is that there is no internal consistency about the standard states must meet. And that leaves states—both Round One states and those yet to submit—in the lurch.
The ESSA statute reads that states shall “establish ambitious State-designed long-term goals.” One may presume that to mean states have broad authority in that respect, but without regulations there is clearly space for interpretation. According to the Department’s FAQ, “In cases where the statute does not define a specific term, a State has significant discretion to determine how it will define that term. In accordance with the Secretary’s responsibility to review State plans, the Secretary is obligated to make a determination as to whether a State’s proposed definition, on its face, is reasonable.”
So, if there is no clear definition in the law and the Secretary will make decisions about the “reasonable” level of ambition on a case-by-case basis, then what are states to do? Here are four steps that states should take to meet the “ambition threshold,” based on goal setting research and the initial round of feedback.
Provide data on past performance. Having reviewed all of the Round One ESSA plans and participated in the Bellwether ESSA plan review process, I can attest to the difficulty of judging whether the proposed goals are ambitious. Too many states included long-term goals without any context for prior performance. In some states, that was because data from newly implemented assessments were not yet available, but that was not universal. The states with the clearest plans were those that provided a rationale—using data—to explain why their new expectations were ambitious. For instance, New Mexico used its recent rate of growth in graduation rates and English language proficiency to establish the state’s goal, and explain how it intends to increase the rate of improvement.
Benchmark goals to reality. Debate continues about the value of aspirational (or “utopian”) goals, but the fact of the matter is that the best goal research available instructs us to set goals that are attainable and realistic. Otherwise, everyone stops paying attention (insert standard NCLB reference here). This approach projects an ambitious long-term goal based on the reality of year-to-year school progress. The benchmarking can occur in two different ways. First, the state can review its prior-year data to identify the amount of improvement demonstrated annually by a school at a certain percentile of improvement (e.g., the school that was in the 60th percentile of improvement for the last three years demonstrated 2.5 percentage points of progress on the state’s assessment annually). It would be up to the state to determine what percentile of school is an appropriate benchmark. Second, the state could benchmark progress to a peer state. For example, a state might set its four-year graduation rate goal at a rate similar to the state with the greatest growth in graduation rates over the past three years.
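The first benchmarking approach can be sketched in a few lines. The annual-gain figures and the 60th-percentile choice below are hypothetical, for illustration only:

```python
def percentile(values, pct):
    """Linear-interpolation percentile (equivalent to NumPy's
    default 'linear' method), kept dependency-free."""
    s = sorted(values)
    rank = (len(s) - 1) * pct / 100
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (rank - lo)

def benchmark_goal(annual_gains, pct, baseline, years):
    """Project a long-term proficiency goal by applying the annual
    gain achieved by the school at a given percentile of improvement."""
    annual_benchmark = percentile(annual_gains, pct)
    return baseline + annual_benchmark * years

# Hypothetical annual proficiency gains (percentage points) posted
# by schools across the state over the last three years
gains = [0.5, 1.0, 1.8, 2.5, 3.2, 4.0]

# A state starting at 40 percent proficient, benchmarked to the
# school at the 60th percentile of improvement, over six years
goal = benchmark_goal(gains, 60, baseline=40.0, years=6)  # 55.0 percent
```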
Align K–12 and postsecondary education goals. As Education Strategy Group has written previously, focusing on high school graduation is not enough—states need to set their sights on helping all students seamlessly transition to postsecondary education and training. States should map backward from their postsecondary attainment goals to the number of students who need to enroll seamlessly in credit-bearing coursework, and set their K–12 goals accordingly. Nearly forty states have attainment goals, yet very few have vertically aligned their K–12 system goals with them. Four states in Round One (Illinois, New Mexico, Oregon, and Tennessee) took advantage of the ESSA opportunity to ground their ambitions in a broader frame for student success. Greater detail about the approach can be found in “Aligning K-12 and Higher Education Goals to Support Success for All Students.”
Keep timelines relatively short. To inspire action, goals need to be timely and achievable. A handful of states proposed goals thirteen years in the future, aligned to the length of time students traditionally spend in the K–12 system. While this rationale is reasonable, goal-setting research suggests that a timeline of this length will not inspire immediate action, and peer reviewers seem to agree. Given this, a timeline of six to ten years for the long-term goal appears to be an appropriate approach. Most states with higher education attainment goals have established 2025 as the end point, so it would be reasonable for states’ K–12 systems to set their long-term goals on the same timeline to bring about clear alignment.
Fortunately, the Department and states—equally—have time to address this issue. The Secretary and her team are under tremendous pressure to deliver feedback to states on a quick timeline, and some initial stumbles are to be expected as they navigate this new process. Based on the recently released second batch of feedback letters, it appears the Department has learned its lesson and is working to correct this issue. Gone are questions of ambition; in their place are requests for additional detail. This is the right strategy. The Department should defer to state goals, so long as a state can defend them using data.
And states have work to do as well. When resubmitting plans, or submitting new plans in Round Two, every state should provide more detail and data to explain its rationale for choosing ambitious goals. Not only will this help in the review process, but it will also provide valuable information to the state’s stakeholders.
Like Goldilocks, this story can end happily ever after.
Ryan Reyna is Senior Associate at Education Strategy Group, where he leads support for states on K-12 accountability. Previously, he served as Director of Accountability and Data Management at the Delaware Department of Education.
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.
On this week's podcast, special guest Chad Aldeman, a principal at Bellwether Education Partners, joins Mike Petrilli and Alyssa Schwenk to discuss his organization’s recent review of state ESSA plans. During the Research Minute, Amber Northern examines the progress of the high school sophomore class of 2002.
Xianglei Chen et al., “Early Millennials: The Sophomore Class of 2002 a Decade Later,” National Center for Education Statistics, Institute of Education Sciences (June 2017).
A new working paper presents findings from an evaluation of the Indiana voucher program—a hot topic given the Trump Administration’s embrace of private school choice. Mark Berends (University of Notre Dame) and Joe Waddington (University of Kentucky) examine the impacts of the voucher program (a.k.a. Indiana Choice Scholarship Program) on Hoosier State students in upper elementary and middle school (mostly grades 5–8) who used a voucher to transfer to a private school during the 2011–12 through 2014–15 years, which were the first four years of the program.
Indiana’s program is now open to both low- and middle-income families, with lower tuition amounts available to the latter group; the average scholarship amount is still pretty low, at about $4,700 in grades 1–8. All students in private schools enrolling voucher students must take the state test. Over 34,000 students received a voucher in 2016–17, and the analysis focuses on the roughly 4,000 lowest-income students (i.e., those receiving the full voucher) who moved from a public to private school for the first time. They are matched to similarly poor public school peers in the same grade, year, and school as the student who receives a voucher and attends a private school the following year. They also match the two groups on a host of other observable characteristics like prior test scores and demographics such that they have similar likelihoods of receiving a voucher.
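The matching logic can be illustrated with a minimal sketch. This is my simplification, not the authors’ actual estimator, which conditions on many more covariates; here each voucher student is simply paired with the public school peer in the same grade/year/school cell whose prior test score is closest:

```python
from collections import defaultdict

def match_within_cells(voucher, comparison):
    """Each record is (cell_key, prior_score, student_id), where
    cell_key encodes grade, year, and school. Returns a list of
    (voucher_id, matched_comparison_id) pairs."""
    pool = defaultdict(list)
    for cell, score, sid in comparison:
        pool[cell].append((score, sid))
    pairs = []
    for cell, score, sid in voucher:
        candidates = pool.get(cell)
        if not candidates:
            continue  # no comparison student in the same cell
        nearest = min(candidates, key=lambda c: abs(c[0] - score))
        pairs.append((sid, nearest[1]))
    return pairs

# Hypothetical records: one voucher student, three potential peers
voucher = [("grade5|2012|schoolA", 0.2, "v1")]
comparison = [
    ("grade5|2012|schoolA", 0.1, "c1"),   # same cell, closest score
    ("grade5|2012|schoolA", 0.5, "c2"),   # same cell, farther score
    ("grade6|2012|schoolA", 0.19, "c3"),  # different cell, ineligible
]
pairs = match_within_cells(voucher, comparison)
```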
Descriptive findings show that low-income voucher students enter private schools lagging substantially behind their new peers, by up to half a standard deviation (SD). But the headline is that, overall, students who receive a voucher experience an average annual loss in math of 0.10 SD after attending a private school, compared to matched public school students. The biggest losses in math occur during the first and second years; voucher students still lag behind in year three. But by year four, those who remain have regained what they lost, and statistically significant differences between voucher and public school students in math achievement disappear. That being said, the lowest-achieving students who receive a voucher and attend a private school tend to return to the public school, so these later estimates are a little noisier and may in part be measuring persistence.
As for English Language Arts (ELA), there are no significant differences overall (both groups perform similarly), though voucher students attending Catholic schools see small gains. Moreover, when looking at achievement over time, voucher students have slightly higher ELA achievement by year four, after recovering from losses in years one and two. What about those who return to the public school system? Students who receive a voucher and then go back to a public school in a later year score 0.24 SD lower in math and 0.13 SD lower in ELA. Results are similar in the program for both white and black students.
Analysts posit an explanation for the empirical rough ride: Maybe we’re seeing voucher students adjust to their new schools and their schools adjust to them? It seems likely to this reader that the voucher movement would experience its own set of growing pains, especially when stretched across the state, similar to the scale-up challenge we’ve seen in the charter movement. In the face of multiple, tepid voucher findings, though, it’s our own tolerance and understanding of such growing pains that is being tested.
SOURCE: R. Joseph Waddington and Mark Berends, “Impact of the Indiana Choice Scholarship Program: Achievement Effects for Students in Upper Elementary and Middle School,” Center for Research on Educational Opportunity, University of Notre Dame (ongoing).
The Center on Reinventing Public Education’s (CRPE) latest report asks whether public transportation can improve students’ access to Denver’s best schools of choice, and the answer appears to be “no.” Denver’s geography, diffuse population centers, and the distribution of quality school seats relative to poor students already complicate the school district’s own efforts to make quality seats accessible in terms of travel time from home to school. As an experiment, CRPE researchers combined district residency data with public transit routes via the Google Maps Directions Application Programming Interface (API) to calculate travel times between each student’s home address and the schools to which they could have applied—just as a parent researching such schools might do. The hope was to make quality schools more accessible to those who need them most, but the results are discouraging.
Based on their analysis, CRPE finds that just 55 percent of low-income students could attend a high-performing school within thirty minutes of their home on public transit; that percentage falls to 19 percent for schools within fifteen minutes’ travel time. In other words, most low-income students would continue to face long commutes to the city’s top schools when using public transportation, despite a choice-friendly atmosphere that includes a common application and lottery-matching system that strives to give all families their most-preferred option. We know that proximity often trumps quality considerations when parents are given a choice, and extant transportation options in Denver and other large cities do very little to change the balance.
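CRPE’s headline numbers reduce to a simple accessibility calculation, sketched here with invented travel times (the students and minutes below are hypothetical):

```python
def access_share(travel_times, threshold_minutes):
    """Share of students with at least one high-performing school
    reachable by transit within the threshold.

    travel_times maps each student to a list of transit travel
    times (in minutes) to the high-performing schools they could
    have applied to."""
    reachable = sum(
        1 for times in travel_times.values()
        if any(t <= threshold_minutes for t in times)
    )
    return reachable / len(travel_times)

# Invented transit times (minutes) for four students
times = {
    "s1": [12, 40],
    "s2": [28, 55],
    "s3": [33, 47],
    "s4": [61],
}
within_30 = access_share(times, 30)  # 0.5: two of four students
within_15 = access_share(times, 15)  # 0.25: one of four students
```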
In the end, CRPE’s researchers conclude that it would be easier to move good schools to the students (a huge leap based on these experimental findings) than to move students to good schools in a reasonable amount of time by using existing modes of transportation.
But hold up a sec. This experiment was conducted using existing data—including modes, routes, timing, and extant bell schedules. While that might make for a decent enough research design, it doesn’t take into account that public transit is generally geared to move workers to jobs—from inner ring neighborhoods to downtowns on a nine-to-five schedule—and thus is fundamentally maladapted for student use. Before leaping to the conclusion that “moving all the schools” is the easiest means of improving access, additional CRPE-style simulations with experimental parameters should be attempted. A recent Bellwether Education Partners report takes that tack with regard to district-based transportation, urging better use of data and technology as its least invasive recommendation, and the same could be done for public transportation using CRPE’s experimental model. A transit-friendly bell schedule is an obvious variable with which to experiment, as is the incorporation of nodal transportation: picking students up at their doors in smaller, more flexible vehicles—like Uber fleets or on-demand autonomous vehicles (for those adventurous early adopters out there)—and delivering them to a central transfer point. Van pools and neighborhood circulator routes could also be tried in spreadsheet form.
One final note: CRPE’s researchers make one real attempt to avoid their “move all the schools” conclusion by floating the idea of interdistrict open enrollment, an option particularly attractive in sprawling Denver, where the nearest good school for many underserved students might be in a neighboring district. Open enrollment is currently a small segment of school choice in the city, and it should be expanded as much as possible. But Fordham is here to tell you that the option hits the same transportation wall once it gets large enough, and families will be back to square one.
With district-based transportation hidebound and in disarray in cities large and small, and public transportation generally ill-adapted for student travel, it is perhaps no wonder that CRPE’s researchers leaped from their disappointing findings to a non-transportation-based recommendation. Transportation is a basic necessity for most school children, regardless of the type of school they attend. And it could be so much more even than it has been—a bulwark against absenteeism, a reading lab on wheels, or even a rolling study hall complete with Wi-Fi. But only if we give up the old ways of thinking and understand that transportation must be about what students need, and not what is easiest for school districts or even transportation planners.
SOURCE: Betheny Gross and Patrick Denice, “Can Public Transportation Improve Students’ Access to Denver’s Best Schools of Choice?” Center on Reinventing Public Education (June 2017).