Welcome to the latest installment of the Regulation Wars, a long-running family quarrel that centers on the perceived tensions between two of the charter school movement’s founding principles: innovation and execution (or, if you prefer, autonomy and accountability).
Advocates of the former worry that a single-minded focus on the latter will stifle new and potentially fruitful ideas and practices. So, in an effort to shed additional light on the subject, a new study by Ian Kingsbury, Jay Greene, and Corey DeAngelis tries to quantify “innovation” and understand how it relates to policymakers’ efforts to ensure a morally acceptable level of execution—that is, to regulation, mostly in the name of accountability. More specifically, the study focuses on the relationship between states’ widely varying regulatory environments and the ostensibly innovative characteristics of the 1,438 charter schools that opened in the United States between 2015 and 2017.
To quantify regulation, the authors rely on the scores that the National Association of Charter School Authorizers (NACSA) gave to states between 2014 and 2016, which range from 0 to 33 (with a higher score signifying a tighter approach). To quantify innovation, they first examined individual schools’ websites and coded them to see how they varied along five dimensions: pedagogy, curriculum, populations targeted, setting, and themes. They then assigned each identified characteristic a point value “equivalent to the inverse of its [national] prevalence.” (In other words, they gave more points to schools that were more unusual.) And finally, they added up the points that individual charter schools were awarded for each characteristic/dimension.
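The inverse-prevalence scheme described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not the authors’ actual code; the characteristic labels and the function name are assumed.

```python
# Sketch of an inverse-prevalence "innovation" score (assumed details, not
# the study's actual code): each characteristic is worth the inverse of its
# national prevalence, and a school's score sums the points for its traits.
from collections import Counter

def innovation_scores(schools):
    """schools: list of sets of characteristic labels, one set per school."""
    n = len(schools)
    # National prevalence of each characteristic: share of schools having it.
    counts = Counter(trait for school in schools for trait in school)
    prevalence = {trait: count / n for trait, count in counts.items()}
    # A school's score: sum of 1/prevalence over its characteristics,
    # so rarer traits earn more points.
    return [sum(1 / prevalence[t] for t in school) for school in schools]

# Toy national sector of four schools (labels are illustrative).
schools = [
    {"montessori", "project-based"},
    {"project-based"},
    {"virtual"},            # a trait held by 1 of 4 schools is worth 4 points
    {"project-based"},      # a trait held by 3 of 4 is worth only 4/3 points
]
scores = innovation_scores(schools)
```

Under this scheme, the school with the unusual pairing of traits scores highest, which is exactly the “more points for more unusual” logic the authors describe.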
Based on the resulting totals, the authors estimate that a 1-point increase in a state’s NACSA score was associated with a 0.014 standard deviation decrease in the innovativeness of the average charter school that opened within its boundaries. Similarly, they find a negative relationship between a state’s NACSA score and the innovativeness of the average school’s “pedagogy,” “setting,” and “theme.” Yet, as they acknowledge, “the abstract nature of the total innovation score and components makes it difficult to interpret results.” So, to concretize things, they also consider the relationship between regulation and the twenty-four characteristics that are the basis for the five dimensions of “innovation.” Of these, six are significantly related to regulation. For example, states that take a tighter approach to regulation have fewer schools in “virtual” and “hybrid” settings, as well as fewer with “experiential” pedagogies.
The existence of something akin to these more granular relationships is fairly intuitive. After all, even most accountability hawks acknowledge that there are trade-offs. Insofar as authorizers prioritize test-based academic growth—which they should, given the strong relationship between short-term test score gains and positive long-term student outcomes—that necessarily rubs up against a laissez-faire, anything-goes embrace of “innovation.” Still, the mere likelihood of such a tradeoff does not imply that it is troublingly severe, that the authors have succeeded in quantifying it, or that sober-minded education policymakers should spend a lot of time worrying about it in an era of widespread learning loss and justified alarm about the trajectory of national achievement trends.
From a purely normative perspective, an obvious problem with the authors’ approach is that it is content neutral. So, for example, a school that was grounded in Satan Worship would count as highly innovative (provided it didn’t start a movement), as would one that imparted no knowledge whatsoever (as seems to be the case for many virtual schools). Moreover, if the concern is really with innovation, at least as traditionally understood, then timing is an issue, as is clear from the authors’ self-generated list of ostensibly innovative characteristics, which includes longstanding programs such as Core Knowledge (est. 1986), Waldorf (1919), and Montessori (1907), not to mention “single-sex” education (Harvard, circa 1636) and “project-based” learning (the Pleistocene).
As those examples suggest, what the authors’ measure actually captures is more akin to “programmatic diversity” than “innovation.” Except that even “diversity” is being generous, since what the measure really captures is how similar a particular state’s charter sector is to the national charter sector (rather than how many different types of schools a state’s charter sector includes). Which simply isn’t the same thing as diversity or innovation, no matter how much the authors may want it to be.
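A toy calculation makes the distinction concrete. The prevalence numbers below are invented for illustration; the point is structural: under an inverse-prevalence score, a state whose charters are all identical can outscore a state with a genuinely mixed sector, so long as its single model is rare nationally.

```python
# Toy illustration (assumed numbers, not from the study): the measure
# rewards dissimilarity from the national charter sector, not variety
# within a state's own sector.
national_prevalence = {"no-excuses": 0.50, "virtual": 0.02}

state_a = ["virtual"] * 10            # ten identical schools, zero variety
state_b = ["no-excuses", "virtual"]   # two genuinely different models

def mean_score(state):
    # Average inverse-prevalence score across the state's schools.
    return sum(1 / national_prevalence[t] for t in state) / len(state)

# State A's perfectly uniform sector outscores State B's mixed one,
# because its one model is nationally rare.
```

That is the sense in which the measure tracks how unlike the national sector a state’s charters are, rather than how many different kinds of schools the state actually offers.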
For obvious reasons, the dissimilarity between a state’s charter sector and the rest of the charter school movement could be related to the types of students it serves. Yet, as noted, a school’s “targeted population” is itself a dimension of innovation and/or diversity, meaning that states with disproportionate numbers of students who are “disabled” or “at risk of dropping out” are likely rewarded or penalized insofar as their charter sectors are representative.
Similarly, it seems obvious that rural states’ charter sectors are more likely to include virtual and hybrid schools than those of more urbanized states, and that more rural states (which tend to be redder) may be likelier to take a hands-off approach to regulation. So even if one grants that more virtual schools are a good thing (and, to be clear, one does not), it’s not obvious that tighter regulation is the cause of their relative scarcity in places like New Jersey. In other words, in addition to being fundamentally correlational, many of the relationships the study uncovers are plausibly explained by factors that states cannot control and that have nothing to do with regulation.
All of which makes it hard to swallow the authors’ claim in a recent National Review article that “we know heavy charter regulation has this negative effect on diversity and innovation in the charter sector because we actually measured it in our new peer-reviewed study.”
No, we don’t. No, they didn’t. And the mere fact that a study is “peer reviewed” doesn’t mean it should be taken seriously.
Yes, one of the hopes was that charters would be innovative, and some of them are. But with or without those burdensome regs, genuinely useful innovations seem to be a rarity in K–12 education, which is why the charter school movement has often disappointed on that score and the focus has understandably turned to execution, where there is far more rigorous evidence of charter schools’ continuing success.