The three miracles required for Donald Trump to become the patron saint of school choice
By Michael J. Petrilli
For months now, the buzz inside the beltway swamp has been that President Trump intends to propose a huge tax-credit scholarship program as part of his tax reform initiative. That expectation has led to lively debates, both on the page and on the stage, and the proposal was the focus of Fordham’s annual Wonkathon earlier this month.
As a supporter of vouchers for low-income children, I understand the appeal of such an initiative. An infusion of $20 billion a year, the eye-popping number Trump unveiled on the campaign trail, could help two to four million needy kids (at $5,000–$10,000 apiece) gain access to life-changing options. It would breathe new life into thousands of urban Catholic schools, institutions that have a proud legacy of serving poor and minority students well but that are now at risk of near-extinction. It could bring private school choice to major American cities in blue states that will almost surely never enact voucher programs on their own, including New York, Chicago, and Los Angeles.
So the prospect is compelling for school-choice enthusiasts. But so is the goal of Heaven on Earth. The question is how to get from here to there. Creating and sustaining a massive new federally-fueled voucher program will take more than a miracle. It will take three miracles.
Miracle number one is getting a federal tax credit enacted in the first place. This feels much less achievable after the health-care debacle in Congress last week. It was always going to be hard. We know from past Senate votes on private school choice that the numbers simply aren’t there. Virtually every Democrat is a guaranteed “no” (save, perhaps, for Cory Booker); too few Republicans are a sure “yes.” Rural-state Republicans simply don’t have the incentive to buck their education establishments to support a policy that will bring very little bacon back to their own communities.
The conventional wisdom was that the tax credit plan would be attached to a whopping multi-dimensional tax-reform bill, which voucher-squeamish Republicans would vote for because they wanted the other goodies included in the package. (Using the legislative process called reconciliation would make such a bill filibuster-proof, so no Democrats would be needed.)
After last week, however, Republicans of all stripes know that they can sink the President’s agenda by holding out for what they want. He is in a much more precarious political position than most members of Congress are. It will only take a handful of GOP Senators demanding the removal of the tax credit/voucher initiative from the tax bill for the Administration to cave. Though less likely, something of the sort could also happen again in the House.
If somehow Team Trump overcomes those seemingly insurmountable barriers, miracle number two will be finding the sweet spot between too much federal regulation and too little. There are massive risks on both sides of that equation.
The too-little-regulation problem is obvious: As we’ve learned from state tax-credit scholarship programs, financial shenanigans are to be expected unless careful precautions are taken to prevent them. Scholarship-granting organizations will need to be audited, and there will need to be rules to ensure that donors to scholarship programs don’t “double dip” by claiming credits on both their state and federal taxes.
And those are just the basics. A whole host of further—and truly perplexing—design questions will need to be answered by someone. Which students are eligible to receive scholarships? What about kids already in private schools? What must private schools do to qualify? May they use their standard admissions requirements, or must they accept all comers? And what about transparency and accountability requirements? Should voucher recipients take state tests? Should the results be made public? Should schools get kicked out of the program if they don’t show enough student growth?
One route is to let states decide these questions, which is what I’ve assumed would happen. But that has a major drawback: It implies that states would have to opt in to the program, and plenty of blue states will simply refuse. So if the goal is to create school choice opportunities from coast to coast, deferring to the states won’t work. That means making these calls at the federal level, which is precisely what many school-choice supporters fear: moving away from state-by-state decisions on these sensitive issues to a one-size-fits-all approach with Uncle Sam as regulator-in-chief.
Assume that, with a Republican Congress writing the legislative language and the Trump Treasury Department writing any necessary regulations, the feds somehow thread this needle effectively, and apply a light touch from Washington. Will that approach stick as years pass?
Only with the help of miracle number three: keeping Democrats from killing the program, or regulating it to death, when they inevitably return to power. There’s little doubt that the progressive base will be champing at the bit to get rid of “TrumpChoice.” Fear not, say some advocates, because by the time they regain the levers of power, the program will have created millions of new beneficiaries. Just as Republicans learned how hard it is to take ObamaCare away from people who have grown accustomed to health insurance they didn’t have before, so too would Democrats learn how hard it is to take private school scholarships away from families now enjoying them.
Perhaps. But surely a Warren or Sanders administration would get right to work changing the regulations governing the program. It might require participating private schools to accept all applicants, regardless of religion or sexual orientation, or whether they meet the school’s academic requirements; let students opt out of religious instruction; mandate that scholarship students take state assessments; and on and on. Many private schools would then face an agonizing choice: Give up a funding stream that they have come to depend on, or give in on basic questions of religious identity and school culture. This could be the end of American-style private education as we know it.
***
No one has ever called Donald Trump a saint. Still, he would need to be a master politician to achieve the first two miracles required to make his big school choice initiative a success, and downright providential to achieve the third. “A wing and a prayer” doesn’t begin to capture what’s needed.
I would love to be proven wrong. I pray that I am. I suspect I am not.
So far, watching state ESSA plans roll in has been a bit like rooting for the Washington Redskins (or, if you prefer, the Washington Football Team). Every fall starts with fresh hopes. Yet every spring fans are asking the same questions: What went wrong? Why can’t management learn from its mistakes? Why does it always have to be this way?
Meanwhile, Broncos fans have enjoyed John Elway, Peyton Manning, and the second most Super Bowl appearances in NFL history.
Obviously, building a high-performing education system is harder than building a winning football team. But in education policy, as in football (or any sport, really), it helps to focus on the fundamentals, because you won’t get far without them.
So with that in mind, here are four ways that Colorado’s plan for rating schools, like its annoyingly successful football team, gets the fundamentals right:
1. Colorado uses a mean scale score as its measure of achievement.
Instead of using proficiency rates to gauge achievement, Colorado will take an average of students’ test scores, which sounds simple (like blocking and tackling) because it is simple—assuming you do it.
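To see what’s at stake, consider a toy example, with hypothetical scale scores and a hypothetical cut score of 700 (not Colorado’s actual scale):

```python
# Two hypothetical schools with the same proficiency rate but different
# mean scale scores (the cut score of 700 is made up for illustration).

CUT_SCORE = 700

school_a = [698, 699, 701, 702]  # everyone clustered at the cut score
school_b = [650, 690, 740, 760]  # wide range, stronger overall

def proficiency_rate(scores):
    """Share of students at or above the cut score."""
    return sum(s >= CUT_SCORE for s in scores) / len(scores)

def mean_scale_score(scores):
    """Average of every student's score -- all growth counts."""
    return sum(scores) / len(scores)

for name, scores in [("A", school_a), ("B", school_b)]:
    print(f"School {name}: {proficiency_rate(scores):.0%} proficient, "
          f"mean score {mean_scale_score(scores):.1f}")
# School A: 50% proficient, mean score 700.0
# School B: 50% proficient, mean score 710.0
```

Both schools post identical proficiency rates; only the mean registers what’s happening to students far above or below the bar.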
As Morgan Polikoff and other accountability scholars have argued, “a narrow focus on proficiency rates incentivizes schools to focus on those students near the proficiency cut score, while an approach that takes into account all levels of performance incentivizes a focus on all students.” ESSA requires that states measure “proficiency,” which literally means “a high degree of competence or skill.” But it doesn’t say anything about proficiency rates, so there’s nothing to prevent states from adopting a broader measure.
Unfortunately, when it comes to this issue, Colorado is still in the minority because many states have yet to move beyond their NCLB-era obsession with proficiency rates. What does it say about education reformers that it’s taken us a decade to wrap our heads around the concept of an average?
2. Colorado uses a true growth model.
Instead of a “growth-to-proficiency” model or some other contrivance, Colorado uses a bona fide growth model to gauge the progress a school is making with students. Specifically, it uses a “student growth percentile” model, which compares the progress of each student at a school to the progress of similar students at other schools and then assigns the student a “percentile rank” between one and ninety-nine based on how his or her progress stacks up.
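As a rough illustration of the idea (Colorado’s actual model relies on quantile regression; this crude “similar prior score” version, with made-up data, only captures the intuition):

```python
# Simplified student growth percentile: rank each student's current score
# against peers with similar prior scores (here, within 10 points). This
# is an illustration only; real SGP models use quantile regression.

students = [
    # (id, prior_score, current_score) -- hypothetical
    ("s1", 600, 640), ("s2", 600, 610), ("s3", 605, 655),
    ("s4", 700, 720), ("s5", 700, 705), ("s6", 705, 760),
]

def growth_percentile(student, everyone):
    sid, prior, current = student
    # "Similar" peers: prior scores within 10 points of this student's.
    peers = [c for (pid, p, c) in everyone
             if pid != sid and abs(p - prior) <= 10]
    if not peers:
        return None
    # Percentile rank: share of similar peers this student outgrew.
    return round(100 * sum(c < current for c in peers) / len(peers))

for s in students:
    print(s[0], growth_percentile(s, students))  # s3 and s6 rank highest
```

A student who started at 600 is compared only to other students who started near 600, so the ranking reflects progress rather than starting point.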
The advantage of this approach is that it is grounded in reality rather than the fantasies of policymakers or reformers. Instead of trying to specify the amount of progress students should make based on some utopian ideal, it rewards or sanctions schools for making more (or less) progress than one might expect under the circumstances.
If you doubt the wisdom of this approach, just ask yourself this: Which is the better measure of a running back’s contribution—touchdowns or yards per carry? Would you cut Barry Sanders because he wasn’t scoring on every play? Or would you try to get the ball in his hands more often?
3. Colorado assigns more weight to growth than achievement.
According to its draft plan, Colorado hasn’t finalized its weighting system yet. But the draft does cite the state’s weights from 2016:
In 2016, for elementary and middle schools 40% of points came from Academic Achievement measures and 60% from Academic Growth measures, while for high school the weighting was 30% Academic Achievement, 40% Academic Growth, and 30% Postsecondary and Workforce Readiness. Once the Colorado State Board of Education decides on the relative weights between indicators, CDE will update the state plan with this information.
I’m making a bit of a leap here, but to me this passage suggests that Colorado will continue favoring growth over achievement (unlike most states). Hopefully that’s the case, because right now Colorado is one of the few states with a school rating system that isn’t the accountability equivalent of the old-school T formation. For some reason, even though we’ve known for decades that different schools face different challenges, only a few states have embraced this insight by creating systems that judge schools based on things they control. No wonder we haven’t seen as much progress as we’d like.
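If the 2016 weights carry over, the arithmetic of a school’s overall rating would look something like this sketch (the indicator scores are hypothetical, not real Colorado data):

```python
# Combine indicator scores into a single rating using the 2016 weights
# cited above. Indicator values are hypothetical points-earned shares.

ELEM_MIDDLE = {"achievement": 0.40, "growth": 0.60}
HIGH_SCHOOL = {"achievement": 0.30, "growth": 0.40, "readiness": 0.30}

def composite(scores, weights):
    """Weighted average of indicator scores, each on a 0-1 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

# A hypothetical middle school: middling achievement, strong growth.
print(composite({"achievement": 0.55, "growth": 0.80}, ELEM_MIDDLE))  # 0.7
```

Because growth carries the larger weight, a school serving a challenging population can still earn a strong rating by moving its students farther than expected.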
4. Colorado protects the football.
I was tempted to put this first. However, for the sake of D.C.’s long-suffering football fans, I decided to bury it below the fold.
Obviously, ESSA gives states an opportunity to experiment with new measures, which is great. But without a little practice, there are lots of ways for this to turn into a fumble. And the fact that so many states are still assigning the same weight to (dumb) proficiency rates and to the outputs of (smart) growth models doesn’t inspire much confidence in their ability to innovate.
From the text of Colorado’s plan, it’s clear that the state takes a hardheaded approach to ensuring the validity and reliability of new measures, such as chronic absenteeism, which is reassuring under the circumstances.
Pretty much everyone wants to get beyond test scores. But that’s easier said than done, so it’s best to take it slow and “protect the football.” Let’s start by fixing what’s obviously broken, and then move on to the hard stuff.
***
Unless you still believe in holding schools accountable for things they can’t control—and in those bold timelines politicians and bureaucrats are so fond of concocting—a school rating system like Colorado’s should suit you. After all, if you believe in top-down accountability, it will point you toward those schools that are truly failing their students. And if you believe in bottom-up accountability, it will point parents toward those schools where kids are making the most progress.
Either way, there’s nothing wrong with keeping things simple and focusing on the basics. Ask the pros: They’ll tell you that’s how most championships are won.
School funding policies continue to be a subject of intense debate across the nation. Places as diverse as Alabama, Connecticut, Illinois, Kansas, Maryland, and Washington are actively debating how best to pay for their public schools. According to the Education Commission of the States, school finance has been among the top education issues discussed in governors’ State of the State addresses this year.
States have vastly different budget conditions and a wide variety of policy priorities. No one-size-fits-all solution exists to settle all school funding debates. But there is a common idea that every state can follow: Implement a well-designed school funding formula, based on student needs and where they’re educated. Then stick to it.
A recent study commissioned by Fordham and conducted by Bellwether Education Partners looks under the hood of Ohio’s school funding formula. Our home state’s formula is designed to drive more aid to districts with greater needs, including those with less capacity to generate funds locally, increasing student enrollments, or more children with special needs. In large part, Ohio’s formula does a respectable job of allocating more state aid to the neediest districts. According to Bellwether’s analysis, the formula drives 9 percent more funding to high-poverty districts. This mirrors findings from the Education Trust, which also found that Ohio’s funding system allocates more taxpayer aid to higher-poverty districts.
Still, the Buckeye State has much room for improvement in its funding policies. And it’s worth highlighting three lessons from the study, as they illustrate challenges other states might face when designing a sound funding formula.
First, states should allow their formula to work, without special exceptions and carve-outs. Our study found that the majority of Ohio districts have their formula aid either capped or guaranteed, meaning that allotments are not ultimately determined by the formula. Instead, caps place an arbitrary ceiling on districts’ revenue growth, even if they are experiencing increasing student enrollment. Conversely, guarantees ensure that districts don’t receive less money than in a prior year; they “hold harmless” districts even if enrollment declines. While caps and guarantees may be necessary during a major policy shift, allowing them to exist in perpetuity, as Ohio does, undermines the state’s own formula. Ideally, all districts would receive state aid according to a well-designed formula; they shouldn’t receive more or fewer dollars through carve-outs such as funding caps and guarantees.
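To make these mechanics concrete, here is a stylized sketch, using a hypothetical 3 percent growth cap and made-up dollar figures, of how caps and guarantees override what a formula calculates:

```python
# Apply a growth cap and a hold-harmless guarantee on top of formula aid.
# The cap percentage and dollar amounts are hypothetical.

def funded_amount(formula_aid, prior_year_aid, cap_growth=0.03):
    capped = min(formula_aid, prior_year_aid * (1 + cap_growth))  # cap
    return max(capped, prior_year_aid)  # guarantee: never below last year

# Growing district: the formula says $12M, but the cap holds it to $10.3M.
print(funded_amount(12_000_000, 10_000_000))  # 10300000.0
# Shrinking district: the formula says $8M, but the guarantee pays $10M.
print(funded_amount(8_000_000, 10_000_000))   # 10000000
```

In both cases, the dollars a district actually receives diverge from what the formula says its students need.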
Second, policymakers in choice-rich states need to make clear that funds go to the school that educates a student, not necessarily to her district of residence. Ohio has a wide variety of choices, including more than 350 charter schools, several voucher programs, and an inter-district open enrollment option. Yet the state takes a circuitous approach to funding these options, creating unnecessary controversy and confusion. The state first counts choice students in their home districts’ formulas, and the funds then “pass through” to their schools of choice. This method creates the unfortunate perception that choice pupils are “taking” money from their home districts, when in fact the state is simply transferring funds to the school educating the child. (For more on Ohio’s convoluted method of funding schools of choice, check out our short video.) To improve the transparency of Ohio’s funding system, we recommend a shift to “direct funding,” under which the state would simply pay the school of choice without state dollars passing through districts.
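To see why the dollars are identical under either method while the optics differ, consider this simplified comparison (all figures hypothetical):

```python
# Pass-through vs. direct funding for 100 choice students, assuming a
# hypothetical $6,000 per-pupil amount.

PER_PUPIL = 6_000
resident_pupils = 1_000  # students living in the district
choice_pupils = 100      # residents attending charters or using vouchers

# Pass-through (Ohio's current method): count choice students in the home
# district's formula, then deduct and transfer their share.
district_gross = PER_PUPIL * resident_pupils   # $6.0M appears on the books
transfer_out = PER_PUPIL * choice_pupils       # $600K visibly "leaves"
district_net = district_gross - transfer_out

# Direct funding: the state simply pays each school for its own pupils.
district_direct = PER_PUPIL * (resident_pupils - choice_pupils)
choice_direct = PER_PUPIL * choice_pupils

assert district_net == district_direct  # same dollars, clearer bookkeeping
```

The district nets the same amount either way; only the pass-through version creates a line item that looks like money being “taken.”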
Third, states should ensure the parameters inside the formula are as accurate as possible. Ohio, for example, faces a problem when assessing the revenue-generating capacity of school districts. A longstanding state law generally prohibits districts from capturing additional revenue when property values rise due to inflation, unless voters approve a change in tax rates. But this “tax reduction factor” is not accounted for in the formula, leading to an underestimation of the state’s funding obligations. Solid gauges of property and income wealth, along with sound measures of enrollment and pupil characteristics, are essential ingredients to a well-designed formula.
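To illustrate the tax-reduction-factor problem, here is a stylized example with made-up figures:

```python
# Why ignoring the tax reduction factor overstates local capacity.
# All values are hypothetical.

assessed_value = 100_000_000
effective_rate = 0.02
actual_revenue = assessed_value * effective_rate  # $2.0M raised locally

# Inflation lifts values 10%, but the reduction factor rolls the effective
# rate back, so revenue stays flat absent a voter-approved increase.
inflated_value = assessed_value * 1.10

# A formula that ignores the rollback assumes capacity grew with values:
assumed_capacity = inflated_value * effective_rate   # $2.2M
overstated = assumed_capacity - actual_revenue       # $200K uncollectable
print(f"Local capacity overstated by ${overstated:,.0f}")
```

Because the state assumes the district can raise $2.2 million when it can actually raise only $2.0 million, the state’s own share of the formula comes up short.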
The realm of school finance is vast, encompassing a seemingly endless number of challenges. We don’t cover it all in this one report. But state policymakers would be wise to focus on the design and implementation of the school funding formula. It’s a key policy lever in efforts to create a fairer and more equitable funding arrangement for all students, regardless of their zip code or school of choice. Creating a solid formula—and ensuring its use—is hard work, but it might be our best bet for settling the debates over school funding.
On this week's podcast, special guest Eric Eagon, a senior director at the PIE Network, joins Mike Petrilli and Alyssa Schwenk to discuss why policymakers ought to pay more attention to teachers and administrators. During the Research Minute, Amber Northern examines the peer effects of computer-assisted learning.
Marcel Fafchamps and Di Mo, “Peer effects in computer assisted learning: Evidence from a randomized experiment,” National Bureau of Economic Research (February 2017).
This new study from CALDER examines the types of students that modern career academies attract and the causal impacts of participation on various outcomes. Recall that seminal, lottery-based research from MDRC on career academies in the 1990s found no effects on high school graduation or initial college outcomes, but did find that males had higher salaries over the long term and were more likely to “form and sustain families.” The current authors suggest that today’s career academies, reshaped in part by new interest in college and career readiness and by the recent economic recession, call for a new generation of experimental research on the latest career and technical education (CTE) models. And they aim to deliver.
Analysts examine career academies in North Carolina’s Wake County Public School System, which houses twenty academies (such as the Academy of Finance and the Academy of Sustainable Energy Engineering) inside fourteen high schools. For the descriptive part of the analysis, they examine all first-time ninth graders in 2014–15 and 2015–16. They find that academy enrollees are less likely to be minority, more likely to be male, and generally higher achieving than peers within the same schools who did not enroll in an academy.
For the causal analysis, they examine outcomes for one large academy, the Apex Academy of Information Technology, which has admitted students by lottery since 2009–10. It receives about twice as many applications as it has seats, so researchers are able to identify causal effects of participation on measures such as academic performance, high school graduation, and college-going for roughly 500 students. Unlike the regular high school its students would otherwise attend, Apex Academy offers technology-based paid internships for all juniors, a four-year sequence of IT courses, and soft-skills training, among other offerings.
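For readers unfamiliar with lottery-based designs, here is a minimal sketch of the winner-versus-loser comparison that makes such estimates causal (the data are simulated, and the study’s actual models are more elaborate):

```python
import random

# Simulate 500 applicants to an oversubscribed academy. Because lottery
# winners and losers are alike on average, any difference in outcomes can
# be attributed to winning a seat. Effect sizes here are made up.

random.seed(42)
applicants = []
for _ in range(500):
    won = random.random() < 0.5
    # Baseline 78% graduation chance, plus a hypothetical 8-point boost.
    graduated = random.random() < (0.86 if won else 0.78)
    applicants.append((won, graduated))

def grad_rate(group):
    return sum(g for _, g in group) / len(group)

winners = [a for a in applicants if a[0]]
losers = [a for a in applicants if not a[0]]
print(f"Effect of winning a seat: {grad_rate(winners) - grad_rate(losers):+.3f}")
```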
Analysts find that enrolling in Apex Academy increases the likelihood of graduating from high school by about 8 percentage points, an effect mostly driven by male students. The academy also increases the likelihood of college attendance within one year of graduation by about 8 percentage points, though it has no impact on the type of college enrolled. Again, the overall effect on college-going is driven by males: Estimates show that about 92 percent of male academy attendees will attend college, versus about 78 percent of their non-academy male counterparts. The academy also reduces the number of absences for a typical ninth grader by about 1.4 days.
On the other hand, there is little to no effect of Apex Academy enrollment on academic achievement, on average or for specific subgroups, as measured by the ACT, which is mandatory for all high school juniors in the Tar Heel State. Nor does Apex enrollment have any impact on AP course-taking or AP exam passage.
In a succinct summary of their research, the authors state, “Our evidence suggests that boys responded to the technology-rich, applied academic setting of Apex Academy of Information Technology while girls did not.” Obviously, the study focused on one career academy located in the technology-rich Research Triangle, so we can’t assume these findings apply elsewhere. Still, we’re starting to see in the school choice literature a pattern of particular impacts for particular subgroups (low-income students, females, males, students in urban schools, etc.), which hopefully sets the stage for the next chapter of empirical research: the conditions under which different choices prove most beneficial for which students.
SOURCE: Steven W. Hemelt et al., “Building better bridges to life after high school: Experimental evidence on contemporary career academies,” CALDER (January 2017).
A recent report from Education Northwest extends previous research by the same lead researcher, drilling down into the same dataset in order to fine-tune the original findings. That earlier study (June 2016) intended to test whether incoming University of Alaska freshmen were incorrectly placed in remedial courses when they were actually able to complete credit-bearing courses. It found that high school GPA was a stronger predictor of success in credit-bearing college courses in English language arts and math than college admissions test scores. The follow-up study deepens this examination by breaking down the results for students from urban versus rural high schools, and for students who delay entry into college.
In general, the latest study’s findings were the same. Except for the students who delayed college entry, GPA was generally found to be a better predictor of success in college coursework than were standardized test scores. It stands to reason that admissions test scores would better represent the current abilities of students who delayed entry into college (call it the final “summer slide” of one’s high school career), and indeed the previous study showed that students who delayed entry were several times more likely to be placed into developmental courses than were students who entered college directly after high school graduation. But does this mean that colleges err when they use such test scores to place incoming students? The Education Northwest researchers believe so, arguing that colleges should use high school GPAs in combination with test scores, with the former weighted more highly since GPAs can more effectively measure non-cognitive skills they deem more relevant to college success.
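As a sketch of the kind of placement model the researchers have in mind (simulated data and arbitrary coefficients, not their actual model), one might fit a simple logistic regression that leans more on GPA than on test scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated students: passing the first credit-bearing course depends more
# on high school GPA than on the admissions test score. Every number here
# is hypothetical.
rng = np.random.default_rng(0)
n = 1_000
gpa = rng.uniform(1.5, 4.0, n)
test = rng.normal(500, 100, n)
logit = -6.0 + 1.8 * gpa + 0.003 * test
passed = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a placement model on both predictors, then score a new applicant
# with a strong GPA but a middling test result.
model = LogisticRegression(max_iter=1000)
model.fit(np.column_stack([gpa, test]), passed)
prob = model.predict_proba([[3.5, 480]])[0, 1]
print(f"Predicted chance of passing: {prob:.2f}")
```

Placing students by predicted probability of passing, rather than by a test-score cutoff alone, is the spirit of the recommendation.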
But it is worth noting that both of their studies are limited by a few factors.

First, there are only about 128,000 K–12 students in all of Alaska, and its largest city, Anchorage, is about the same size as Cincinnati. A larger, more diverse sample (Baltimore, New York, Atlanta, or even Education Northwest’s hometown of Portland, Oregon) could yield different results.

Second, there is no indication that the University of Alaska students were admitted or placed solely on the basis of admissions test scores. Sure, those scores are important, but not every school puts Ivy League emphasis on them to weed out applicants.

Third, the “college success” measured here is only a student’s first credit-bearing class in ELA and math. That seems like a limited definition of success for many students; depending on one’s major, math 102 is harder than math 101.

Fourth, “success” in these studies merely means passing the class, not getting an A. If a student’s high school GPA of 2.5 was better at predicting his final grade in the college class (a D) than was his SAT score (in the 50th percentile), only Education Northwest’s statisticians should be happy about that. A more interesting and useful analysis would look at the difference in success rates between students with high versus low GPAs, students with high versus low test scores, or students who earned As versus Ds in the college courses.
Previous studies have shown a correlation between high GPAs and high ACT scores. There’s lots of talk that test scores are (but shouldn’t be) the most important factor in college admissions decisions, and the “who needs testing?” backlash at the K–12 level appears to have reached upward to colleges. This study is not the silver bullet that’s going to slay the admissions-testing beast, but it does suggest that more care must be taken at the college level to avoid incorrect and money-wasting developmental placements. One hopes that at least part of the answer is already in development at the high school level (high standards, quality curricula, well-aligned tests, remediation and mastery) and that colleges will be able to jump aboard and calibrate their admissions criteria to maximize performance, persistence, and ultimately degree attainment.
SOURCE: Michelle Hodara and Karyn Lewis, “How well does high school grade point average predict college performance by student urbanicity and timing of college entry?” Institute of Education Sciences, U.S. Department of Education (February 2017).