Is a charter school likely to fail? Look at the application.
By Amber M. Northern, Ph.D. and Michael J. Petrilli
Some charter schools do far better than others at educating their students, a reality that has profound implications for charter-goers, and for the charter sector writ large. Painful experience also shows that rebooting or closing a low-performing school is a drawn-out and excruciating process that often backfires or simply doesn’t happen. So what if we could predict which schools are likely not to succeed—before they even open their doors? If authorizers had that capability, they could select stronger schools to launch, thereby protecting children and ultimately leading to a higher-performing charter sector overall.
A new Fordham study, Three Signs That a Proposed Charter School Is at Risk of Failing, employs an empirical approach to do just that. Authors Dr. Anna Nicotera and Dr. David Stuit, respectively senior associate and co-founder of Basis Policy Research, coded charter applications for easy-to-spot indicators and used them to predict the schools’ academic performance in their first years of operation.
Authorizers rejected 77 percent of the more than six hundred applications in the study’s four-state sample. They worked hard at screening those applications, seemingly homing in on a common set of indicators—“red flags,” if you will—whose presence or absence made rejection more likely.
Yet despite the vigorous screening process that authorizers used to determine which applicants to turn down and which to entrust with new schools, 30 percent of the approved applications in the study led to charter schools that performed poorly during their first years of operation. Given that research has shown that a school’s early-year performance almost always predicts its future performance, those weak schools are unlikely to improve.
The study found three risk factors present in the approved applications that also turned out to be significant predictors of school performance in the initial years:
The presence of these risk factors in charter applications significantly boosted the probability that the school would perform poorly during its first years of operation. When an application displayed two or more of these risk factors, the probability of low performance rose to 80 percent.
Moreover, the study also found indicators that made it more likely authorizers would reject the application entirely. Those included, among others:
Here’s what we make of those findings.
First, authorizers already have multiple elements in mind that they use to screen out applications. The factors named above that are already linked to rejection might well have predicted low performance, had the schools displaying them been allowed to open. But since those schools did not open, we have no way of knowing for sure. Still, the authorizers we studied—and their peers throughout the country—would probably be wise to continue to view these factors as possible signs of likely school failure and to act accordingly.
Second, we were somewhat surprised to see that an applicant’s intention to use a child-centered, inquiry-based instructional model (such as Montessori, Waldorf, or Paideia) made it less likely that the school would succeed academically in its first years. It’s hard to tell what’s going on here. Some of these pedagogies, expertly implemented, can surely work well for many children. But they are not intended to prepare students to shine on the kinds of assessments that are typically used by states and authorizers to judge school performance—in other words, the same tests that our research team used to judge quality for purposes of this analysis.
We do not mean to discourage innovation and experimentation with curriculum and pedagogy in the charter realm going forward. That sector’s mission includes providing families with access to education programs that might suit their children and that might not otherwise be available to them. Fordham is a charter authorizer itself (in our home state of Ohio) and we’re keenly aware of the need to balance the risk that a new school may struggle academically against a charter’s right to autonomy and innovation. Well-executed versions of inquiry-based education surely have their place in chartering. But the present study finds that they boost the probability of low performance as conventionally measured.
Third, let’s acknowledge that quality is in the eye of the beholder. Many of these child-centered schools aren’t “failing” in the eyes of their customers. The parents who choose them may not care if they have low “value added” on test scores. But authorizers must balance parental satisfaction with the public’s right to assure that students learn. Schools exist not only to benefit their immediate clients but also to contribute to the public good: a well-educated society.
Yes, it’s a tricky balance, especially in places where dismally performing district schools have been the only option for many youngsters. The best we can say to authorizers is to exercise your authority wisely. Consider the quality of existing options, plus a prospective charter school’s ability to enhance those options—not only academically, but in other ways fundamental to parents and the public. Pluralism is an important value for the charter sector, and is worth taking some risk to achieve.
Fourth, these findings aren’t a license for lazy authorizing. The trio of significant indicators that we found helps to identify applications that have a high probability of yielding struggling charter schools. But these aren’t causal relationships. Nor do they obviate an authorizer’s responsibility to carefully evaluate every element of a charter application. If our results are used to automatically reject or fast-track an application, they have been misused. Yet they ought, at minimum, to lead to considerably deeper inquiry, heightened due diligence, and perhaps a requirement for additional information.
Deciding whether to give the green light to a new charter school is a weighty decision. This report gives authorizers, operators, and advocates one more tool in their toolkit.
Tenure arrived in K–12 education as a trickle-down from higher ed. Will the demise of tenure follow a similar sequence? Let us earnestly pray for it—for tenure’s negatives today outweigh its positives—but let us not count on it.
Almost every time I’ve had an off-the-record conversation in recent years with a university provost, they’ve confided that their institutions are phasing tenure out. Sometimes it’s dramatic, especially when prompted by lawmakers, such as the changes underway at the University of Wisconsin in the aftermath of Governor Scott Walker’s 2015 legislative success, and the bills pending in Missouri and Iowa.
Often, though, the impulse to contain tenure on campus arises within the institution’s own leadership. It takes the form of hiring far fewer tenured or tenure-track faculty and filling vacancies with what the American Association of University Professors terms “contingent” faculty, i.e., non-tenured instructors, clinical professors, adjunct professors, and part-timers. Or—especially in medical schools—it means severing tenure from pay, such that professors may nominally win tenure but that status carries no right to a salary unless they raise the money themselves from grants, patients, etc.
This is happening across much of U.S. postsecondary education, and the data show it. Whereas in the mid-1970s tenured and tenure-track faculty comprised 56 percent of the instructional staff in American higher ed (excluding graduate students who teach undergrads), by 2011 that figure had shrunk to 29 percent. In other words, seven out of ten college instructors were “contingent” employees—and almost three quarters of those were part-timers.
The data since 2011 are scanty, but I’ve heard enough anecdotes to be pretty sure these trend lines have continued and perhaps steepened, if for no other reason than the constant pressures to hold down costs, on the one hand, and to innovate, on the other. Tuitions are out of control, state subsidies are withering, and although college presidents and provosts may have to contend with unions, as well as with tradition and current contracts, they still have a fair amount of flexibility in whom and how they hire, promote, and retain (or not).
In the K–12 world, however, tenure remains the norm for public school teachers in the district sector, vouchsafed in most places by state law and big-time politics, as well as local contracts, even in so-called “right to work” states. It may be achieved after as few as three years of classroom experience and be based on nothing more than “satisfactory” evaluations from a novice teacher’s supervisor during that period. Unfortunately, we have ample evidence that such evaluations are nearly always at least “satisfactory,” if not “outstanding.” Although many states and districts made worthy changes to their evaluation practices in response to long-ago-spent Race to the Top dollars, the pushback against those changes has been intense, the methodology usually had flaws (especially when linking student learning to teacher performance), and lots of places have been backing down. One consequence is that it’s still virtually impossible to fire bad tenured teachers.
It’s useful to distinguish two issues. One is the complicated and politically fraught business of evaluating school teachers, both new and veteran: how to do it, who should do it, what should it be based on, what its consequences should be, what politics surround it, etc.
A semi-separate issue is the question of tenure itself: should teachers, often by the age of twenty-five, obtain guaranteed lifetime employment in a school system on the basis of a few years of satisfactory evaluations? For that matter, should anyone get lifetime tenure in any job? This question may legitimately be asked of civil servants, policemen, soldiers, and judges, as well as school and university faculty.
Personally—and speaking as a onetime tenured university professor who cheerfully gave it up for more rewarding pursuits—I’ve come to believe that only appellate judges deserve tenure (and it wouldn’t be hard to talk me into limiting that to the Supreme Court).
Whereas tenure for university professors had some early antecedents, and job protections for school teachers crept into Massachusetts as early as 1886, K–12 tenure as we know it today is mostly a mid-twentieth-century phenomenon. It rests on three pillars: simple job security and longevity, really a form of guaranteed continuing employment (often viewed as a fringe benefit that substitutes for higher pay); protection against diverse forms of discrimination, favoritism and capriciousness on the part of employers; and academic freedom, meaning in essence that instructors can almost never be fired on account of what they say or write.
The first of those is really just a matter of “terms of employment.” How valuable is job security to the employee—and to the employer? Would you rather earn $50,000 a year in a job that you know will continue indefinitely and does not depend on performance, or $75,000 in a job that is assured only for a several-year term and where renewal of the position hinges on your performance in it? As we read that school teachers in major cities are finding it hard to afford housing on their current salaries, one suspects that many, given the option, might leap for higher pay.
The second rationale for tenure—a safe harbor from discrimination and favoritism—deserves to be taken seriously, but the codification in constitutions and statutes of innumerable due process and anti-discrimination protections radically shrinks the rationale for inserting additional tenure provisions into employment contracts for this reason.
“Academic freedom” is a serious matter, too, and obviously was even more serious in the McCarthy era, but it’s significantly protected by the First Amendment and—for me at least—carries less oomph at a time when too many professors and teachers (at least in the humanities and social sciences) are tending toward indoctrination rather than teaching their pupils to weigh evidence, consider multiple interpretations, and seek truth and beauty as best they can. It’s also important to note that sundry court decisions have limited the extent of “free speech” in school classrooms, and to ask just how much difference “academic freedom” makes to a fourth-grade teacher.
It’s no secret that the HR practices of private and charter schools—neither of which typically practices tenure—work far better than those of district schools from the standpoint of both school leaders and their students. That’s because the leadership team can generally employ (and deploy) the instructors they deem best suited to their pupils and they’re not obligated to retain any who don’t do a satisfactory job. They can be nimble in regrouping, restaffing, and redirecting their schools—and everyone who works there knows that’s how it goes. Nobody has a right to continued employment untethered to their own performance and the school’s needs. The employer has the right to change the shape, nature, and size of the organization, to redeploy human resources, to substitute capital for labor, to replace elbow grease and sitzfleisch with technology, and to hire and fire according to shifting pupil needs and organizational priorities.
That’s the direction higher education is gradually heading, mostly because tenure is too rigid and costly from the institutional perspective. At the K–12 level, private and charter schools are already there. A few states have taken steps to make tenure harder to obtain or even to move toward its demise. Lawsuits are underway in several more places, and I have no doubt that kindred efforts will continue, via both the voting booth and the courtroom.
Though often framed in terms of “making it easier to fire bad teachers,” that’s not the main point of such reforms, nor should it be. The main point is to make it possible to run the kinds of schools that kids deserve to attend at a cost the taxpayer can afford to pay—and to bring the profession of school-teaching into the twenty-first century.
Back in November, I praised the Obama Administration’s Every Student Succeeds Act accountability regulations for permitting states to use performance indices in lieu of simple, problematic proficiency rates. Such applause is, of course, water under the bridge after congressional Republicans and President Trump repealed those rules and, instead of replacing them, will rely on promises, “Dear Colleague” letters, and other means that fall short of formal regulation.
Yet new praise is in order for Secretary DeVos et al.’s recently released “State Plan Peer Review Criteria,” which explains the process through which state ESSA plans will gain approval or rejection. It, like the regulations that came and went before it, expressly permits accountability systems that measure student achievement at multiple levels—not just “proficient”—using a performance index.
This is an important—even essential—innovation. Despite its good intentions, No Child Left Behind, which ESSA replaced a year ago, erred by encouraging states to focus almost exclusively on helping low-performing students achieve proficiency and graduate from high school. Consequently, many schools ignored pupils who would easily pass state reading and math tests and earn diplomas regardless of what happened in the classroom—a particularly pernicious problem for high-achieving poor and minority children, whose schools generally serve many struggling students. This may be why the United States has seen significant achievement growth and improved graduation rates for its lowest performers over the last twenty years but lesser gains for its middling and top students.
The Every Student Succeeds Act requires the use of an academic achievement indicator that “measures proficiency on the statewide assessments in reading/language arts and mathematics.” There are, however, multiple ways to interpret this. And earlier versions of Department of Education regulations, under President Obama and Secretary King, seemed to expect states to use proficiency rates alone to fulfill this requirement and gauge school performance. Such a mistake would have merely extended NCLB’s aforementioned flaw.
That is why, in myriad reports and letters, we at Fordham and our colleagues in other organizations urged the previous administration to rethink that provision and allow states to rate achievement using a performance index. And it did: the administration ultimately permitted states to track the percentage of students who attain proficiency, while also giving schools additional credit for getting students to an “advanced” level of performance—such as level four on Smarter Balanced or level five on PARCC—a smart policy for encouraging sustained achievement for high-flying youngsters.
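The arithmetic difference between the two approaches is easy to see in a toy calculation. The sketch below is purely illustrative: the achievement-level labels, student shares, and credit weights are invented for this example, not drawn from any actual state plan, Smarter Balanced, or PARCC scoring rules.

```python
# Hypothetical comparison of a simple proficiency rate with a
# performance index. Levels, shares, and weights are invented
# for illustration; real state systems define their own.

# Share of a school's students scoring at each achievement level.
school = {"below_basic": 0.20, "basic": 0.30, "proficient": 0.35, "advanced": 0.15}

def proficiency_rate(shares):
    """NCLB-style metric: only 'proficient or above' counts, and a student
    at the advanced level earns no more credit than one barely proficient."""
    return shares["proficient"] + shares["advanced"]

# Hypothetical weights: partial credit for approaching proficiency,
# extra credit for reaching the advanced level.
WEIGHTS = {"below_basic": 0.0, "basic": 0.5, "proficient": 1.0, "advanced": 1.25}

def performance_index(shares, weights=WEIGHTS):
    """Weighted average across all levels, so moving any student up a
    level raises the score -- including proficient students who reach
    the advanced level."""
    return sum(share * weights[level] for level, share in shares.items())

print(proficiency_rate(school))   # 0.5
print(performance_index(school))  # 0.20*0 + 0.30*0.5 + 0.35*1.0 + 0.15*1.25 = 0.6875
```

Note the incentive difference: if this hypothetical school moved ten percent of its students from proficient to advanced, its proficiency rate would not budge, but its performance index would rise. That is the mechanism by which an index rewards attention to high achievers.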
And, fortunately, the new peer review criteria allow this, too. For the academic achievement indicator, the criteria ask, inter alia, “Does the description include how the SEA calculates the indicator, including...if the State uses one, a description of the performance index.”
By gauging the performance of students at three or more achievement levels instead of just one, such models also better inform educators, administrators, policymakers, and parents so they can make sounder choices. Parents of a high achiever can know whether their child’s school is doing well by similarly able students. Teachers and principals can track the effectiveness of their curricula and pedagogical techniques. And state education officials get more complete and nuanced pictures of their schools that better aid their decisions, rules, and regulations.
One of ESSA’s best features is the autonomy it gives back to states, and, much more than the previous administration, the new Education Department is acting in a way that honors the spirit of the law. Yet that also means the burden now truly rests with each and every state to create an accountability system that goes beyond proficiency when measuring pupil and school achievement.
Nine states and the District of Columbia have submitted plans to the U.S. Department of Education to meet their obligations under the Every Student Succeeds Act. By May 3, seven more will likely do the same. Yet in this first batch of seventeen, only six plan to rate schools’ achievement in a way that goes beyond simple proficiency. That’s not enough. If we see a similar pattern with the thirty-four states that submit plans in September, any child already achieving above the proficient line—especially those growing up in poverty—will continue to be an afterthought, a fate no child should suffer. Fortunately, there’s still time in all states, even those that have already submitted plans, to correct their course and ensure that all their students receive the education they deserve.
On this week's podcast, special guest Lindsey Burke, a director at the Heritage Foundation, joins Mike Petrilli and Alyssa Schwenk to discuss Arizona’s tax-scholarship program. During the Research Minute, Amber Northern examines how riding a school bus affects student absenteeism.
SOURCE: Michael Gottfried, “Linking Getting to School With Going to School,” Educational Evaluation and Policy Analysis (April 2017).
Prior studies have shown that English-language learners (ELLs) score lower on standardized tests in part because of their challenges in developing background knowledge and English vocabulary. A new experimental study in the Journal of Educational Psychology examines whether an intervention designed to enhance knowledge acquisition and reading comprehension for middle school ELLs actually does those two things.
The twenty-week intervention is called PACT (Promoting Adolescents’ Comprehension of Text). It is a set of instructional practices that have been modified to include more focus on content, academic vocabulary, and peer dialogue.
The study was implemented in 2013–14 in three school districts in both the southwest and southeast of the U.S. across seven middle schools with moderate to high concentrations of ELLs. Roughly 1,600 eighth-grade students participated in ninety-four U.S. History class sections taught by eighteen teachers.
Class sections were randomly assigned to forty-nine treatment and forty-five comparison classes, such that teachers could be teaching both treatment and comparison classes. Both groups taught the same content in three units—Colonial America, the Road to the Revolution, and the American Revolution—over the same amount of time (three times a week for fifteen to forty-five minutes, depending on the week). The comparison group was business as usual, but the treatment students discussed and watched videos that provided background information relative to each unit, learned new words connected with the content, engaged in critical reading of informational text about the unit, and participated in an activity where they applied their knowledge (e.g., “What might have happened to prevent the Revolutionary War?”). Study staff provided professional development and ongoing support to teachers, observed classes, and listened to audiotapes to measure implementation fidelity, as well as any spillover effects in comparison classrooms (data showed high fidelity and limited spillover). And students were administered three assessments pre- and post-intervention: a general reading comprehension test, a content knowledge test, and a reading comprehension test in the content area.
There were two key findings.
First, treatment students, both ELLs and non-ELLs, outperformed comparison students on measures of content knowledge acquisition and content reading comprehension, but not on general reading comprehension. It’s hard to know why they didn’t outperform in the lattermost area, especially since experts make a compelling case that having background knowledge makes for overall better readers. Then again, it’s also the type of reading test that good teachers loathe (read a short passage, then answer three to six multiple-choice questions about it), so maybe it’s not a bad thing that the test failed to pick up significant differences. My colleague Robert Pondiscio, an expert in literacy, tells me that it all makes sense to him since “reading comprehension is not a transferable skill; it’s heavily dependent upon domain knowledge such that there’s no reason to think that reading about U.S. history will boost comprehension of a passage about baseball.” (Did I mention how nice it is to work with smart people?)
Second, the proportion of ELLs in classes moderated the outcomes for content knowledge acquisition—meaning that the difference between ELLs and non-ELLs in classes widened as the class gained more ELLs, particularly above 12 percent. If ELLs comprised between 0 and 12 percent of the class, ELLs and non-ELLs responded comparably to the intervention. The authors say that additional supports may be needed as the proportion of ELLs increases, and they advise schools to try to keep ELLs under the 12 percent threshold so that all students can benefit from the intervention.
The bottom line is that a focus on enhancing content knowledge is advantageous for ELLs and other students, but only when measured by content-based tests. That’s good news for those of us who believe in the power of a content-rich curriculum.
SOURCE: Sharon Vaughn et al., "Improving content knowledge and comprehension for English language learners: Findings from a randomized control trial," Journal of Educational Psychology (January 2017).
Can a student be so anxious that she can “psych herself out” when it comes to test performance? Can the perceived stakes be so high that no amount of test preparation could overcome the fear of failure? The interplay of the various components comprising these emotional patterns is the subject of a longitudinal study of college students undertaken by German researchers and published last month in the International Journal of Educational Research. Perhaps it’s not a one-to-one comparison, but given widespread concerns about test anxiety in the U.S. K-12 arena, perhaps this study offers some insight.
Researchers administered surveys to 92 students enrolled in a psychology course at the same university in Germany. These surveys were administered leading up to and after taking a required oral examination considered to be “one of the highest social evaluation stressors.” Surveys were administered again after students received their final grade for a total of three surveys. Their purpose was to gauge students’ academic self-efficacy (students’ beliefs regarding their ability to deal with high demands related to academic performance), their expected grade before the test, the relevance of success (how important it was to them to do well/pass), their received grade after the test, and their self-declared level of test anxiety (a “combination of worry- and emotionality-related adjectives”) at all three points. The various emotional and perceptual factors being studied here were correlated in a complex sequence. For example, researchers looked to see whether fear of failure was greater before taking the exam (when the student had some level of perceived control over the outcome) or after (when there was nothing to do but worry about the outcome), and correlated both with reported self-efficacy.
The findings, while limited by several factors, were interesting. Certain obvious correlations (students who viewed success on the test as more important were more likely to be anxious) were found to hold true, while certain surprises manifested as well. For instance, test anxiety levels were negatively correlated with the final grade expected (anxiety affecting goals), although they were positively correlated with the final grade received up to a point. Students who expressed more confidence in their ability to pass the test were more likely to pass than those with less confidence, but fear of failure was also positively correlated with test success, until the level of fear reached a “toxic” tipping point that began to inhibit success.
And therein lie some of the caveats. The fact that the students were all adults and likely had many high-stress test experiences under their belts likely influenced existing self-efficacy levels at least, if not other factors being measured. The small sample size, the dangers of self-reported data, and the focus on a single oral examination were also identified by the researchers as weaknesses in their study, which could be addressed in replication efforts.
So what might all this mean for those of us interested in American K-12 education? Boosting a child’s self-efficacy, helping her understand the true stakes of any given test, and minimizing her test anxiety are all the job of adults—teachers, counselors, principals, parents. Research such as this adds to the toolkits of adults tasked with the important job of supporting a student to her highest and best performance. Demonstration of ability—frequently under pressure of time or consequences—is unavoidable in adult life, and the classroom is the perfect laboratory for children to learn to thrive in that world. Research such as this, replicated in an American K-12 setting, could provide invaluable tools in this important work.
SOURCE: Julia Roick and Tobias Ringeisen, “Self-efficacy, test anxiety, and academic success: A longitudinal validation,” International Journal of Educational Research (March 2017).