Draft ESSA regulations: A mixed bag for educational excellence
By Jonathan Plucker, Ph.D. and Brandon Wright
The Every Student Succeeds Act (ESSA) requires a “negotiated rulemaking” process whenever the Department of Education issues regulations under parts of the law pertaining to assessments, academic standards, and several other topics. This process requires a panel of experts, which the agency assembled in March. Their work thus far (they’ve met twice) has revealed major problems on the regulatory front concerning gifted and high-achieving students. These issues need immediate attention, including close scrutiny by the lawmakers who crafted ESSA.
As Education Week explains the process, panel members “essentially get together in a room and try to hammer out an agreement with the department. If the process fails, which it often does, the feds go back to the drawing board and negotiate through the regular process, which involves releasing a draft rule, getting comments on it, and then putting out a final rule.” The Department of Education assists this process with issues papers (which provide background), discussion questions, and draft regulatory language that the panel can edit based on its discussions.
Last week, the group tackled assessments, an area of ESSA that directly affects gifted and high-achieving students. Unfortunately, in the twenty-plus pages of draft regulations and seven issue papers that accompanied those discussions, the team never addresses gifted education (aside from one mention of “above-grade-level” testing). That’s a huge shame considering that many of the regs will affect high-achievers for years to come.
Of the seven issues the panel dealt with, three are of particular interest to advanced students: computer-adaptive testing; advanced math assessments for eighth graders; and locally selected high school assessments. Here we explain what each could mean for high-achieving students, as well as the problems we see ahead as states seek to promote educational excellence and shrink achievement gaps under these regulations.
Computer-adaptive testing
Before ESSA, a No Child Left Behind provision required all students to take the same tests. As it was interpreted by both the Bush and Obama administrations, the provision also barred material from those tests that was significantly above or below grade level. As a consequence, most current assessments do a lousy job of measuring academic growth by pupils who are well above grade level. They just don’t contain enough “hard” questions to allow reliable measurement of achievement growth at the high end. In other words, the ceiling on those tests is so low that most advanced students can pass them even before the school year starts.
Thankfully, ESSA allows computer-adaptive tests (CATs)—such as those developed by the Smarter Balanced consortium—to be structured and administered in ways that measure growth at every level, without overburdening any student with a ridiculously long paper-and-pencil test. And if combined with a real academic growth accountability model—one that holds schools to account for ensuring that all their students make progress over the course of the school year—this can finally create incentives for schools to attend to the further learning of their high-achievers. Making sure that every state allows for above-grade-level testing is critically important as we implement ESSA.
Unfortunately, the CAT issue paper and draft regulations prepared by the Department of Education do little to encourage any of this. In fact, they make it harder than ESSA’s drafters intended by mandating that any test selected by a state “must measure a student’s academic proficiency based on the challenging State academic standards for the grade in which the student is enrolled” (emphasis added). Moreover, when the draft regs list all nine of the subgroups for which scores must be disaggregated, students “at each level of achievement” are problematically omitted—even though they’re among the subgroups ESSA requires states to include on their annual school report cards. In other words, the department has not moved away from its single-minded fixation on grade-level testing.
Yet it makes no sense for a ten-year-old kid who is already reading or doing math at the seventh-grade level to take a test with questions designed for fourth graders. We can understand the logic of making below-grade-level students take on-grade level questions, so as not to lower expectations for them. But why do the same for above-grade-level students? This can only give their schools further reason to continue neglecting them.
And put yourself in the shoes of a teacher with a fourth grader who is advanced in math. You know before she even takes the test that she’s above grade level, so data confirming that is a waste of both your and the student’s time. (Hello, opt-out parents!) But knowing that this student is learning at the sixth-grade level according to your state standards would provide you with a good foundation for helping that student continue to grow in math.
The Education Department’s obvious indifference to these issues is truly worrying—a significant shortcoming in the regulatory process that the panel and the department both ought to remedy before this process goes further.
Advanced math assessments for eighth graders
Under ESSA, states must administer the same math and ELA assessments to all students in grades 3–8. But the law makes an exception for eighth-grade students taking advanced math coursework. According to the new draft regulations interpreting that provision of ESSA, states could measure these kids’ academic achievement using the end-of-year math assessment the state gives its high school students.
At first glance, this flexibility looks like good news for high-achievers—especially in states that don’t assess above-grade-level content using computer-adaptive tests. But last week, the panel added a further provision to the draft regs that could make it very difficult for states to take advantage of these exemptions. The panel would require any such state to demonstrate that it “offers all students in the State the opportunity to be prepared for and to take advanced mathematics coursework in middle school.”
The provision sounds both onerous and mysterious. Advanced coursework is, by definition, meant for students achieving at a higher level than most of their peers. How, then, could a state demonstrate that it offers all students the opportunity to be prepared for such coursework? And would a state, under this provision, have to offer advanced math to all students, regardless of their preparation? It’s the type of pie-in-the-sky regulation that might rip a hole in an otherwise beneficial policy. And every advocate and policy maker who cares even a little about these kids ought to shout from the mountaintops about how crazy this is and get it changed before the regulations are final.
Locally selected high school assessments
This provision of ESSA allows districts to use “nationally recognized” tests for the high school assessments that the law mandates. The regs define a “Nationally recognized high school academic assessment” as “an assessment of student knowledge and skills of high school students that is administered in multiple States and used by institutions of higher education in those States for the purposes of entrance into post-secondary education or training programs.” That’s how policy people say, “Your district can use the SAT or ACT as a high school test, even if your state uses a different assessment, under certain conditions.”
Unlike the two provisions discussed above, this one looks like a good development for advanced students. These tests have relatively high ceilings, which will provide evidence about the performance of our best high school students (and, more importantly, allow for those data to be used in state accountability systems). And most advanced students will take these tests anyway, so a district pursuing this option would presumably lessen their testing burden.
A mixed bag
The authors of ESSA intended for gifted kids to be better served by American schools than they have been to this point, at least insofar as federal policy bears on their education. The shift toward decentralized decisions will allow states to boost their own policy guidelines if they choose—as we hope they will—to address excellence gaps.
But the way the department and its hand-picked negotiators are handling these issues in the draft regulations and issue papers strongly suggests that gifted education—and, more broadly, educational excellence—remains almost completely off their radar. Of the three ESSA provisions that matter most for high-achievers, the regulators are doing OK with locally selected high school assessments. But they’re really messing up computer-adaptive tests and advanced eighth-grade math assessments—to the extent that they stand to actively harm high-achievers. Fortunately, this is just one step in the regulatory process, so we don’t have to wave the white flag yet. Advocates and policy makers still have time to get the word out and change these provisions before they’re finalized. That is exactly what we hope to do.
Jonathan Plucker is the inaugural Julian C. Stanley Professor of Talent Development at Johns Hopkins University and a member of the National Association for Gifted Children board of directors. Brandon Wright is the editorial director of the Thomas B. Fordham Institute.
Editor's note: This is part of a series of blog posts that is collaboratively published every week by the National Association for Gifted Children and the Thomas B. Fordham Institute. Each post in the series exists both here on Flypaper and on the NAGC Blog.
Ask any group of high school teachers, and they will report that the most frequently heard question in their classrooms is, “When are we ever gonna use this?” In a traditional college prep program, the honest answer is usually, “Maybe when you get to the university.” But in the real world? Depending on the class, maybe not at all.
However, in high-quality Career and Technical Education (CTE) programs, that question is moot. Students learn skills that will help them prepare for stable careers and success in a modern, global, and competitive economy. A student who wants a future in architecture doesn’t question his first drafting course in high school. One interested in aerospace sees value in her introduction to engineering design class. An aspiring medical professional is enthusiastic, not indifferent, about high school anatomy.
Unfortunately for millions of American students, CTE is not a meaningful part of their high school experience. Instead, they are shuffled through large, bureaucratized schools that do not adequately prepare them for anything, be it college, career, or both.
In large part, this is because CTE has been chronically neglected by American education leaders and policymakers. Many CTE advocates suspect that it’s because of the damaged “brand” of vocational education. And it’s damaged for a reason, as there was a time when the “vo-tech” track was a pathway to nowhere. “Tracking,” as practiced in the twentieth century, was pernicious. It sent a lot of kids—especially low-income and minority students—into low-paying, menial jobs, or worse.
Yet America is an anomaly. In most industrialized countries—nearly all of which outperform us on measures of academic achievement, such as PISA and TIMSS—students begin preparing for a career while still in high school. Around the world, CTE is not a track away from a successful adulthood, but rather a path towards it.
American students face a double-whammy: Not only do they lack access to high-quality secondary CTE, but then they are subject to a “bachelor’s degree or bust” mentality. And many do bust, dropping out of college with no degree, no work skills, no work experience, and a fair amount of debt. That’s a terrible way to begin adult life. We owe it to America’s students to prepare them for whatever comes after high school, not just academic programs at four-year universities.
Despite its checkered past, modern CTE—often called “new vocationalism”—is a far cry from vo-tech. No longer isolated “shop” classes for students showing little future promise, CTE coursework is now strategic and sequenced. It entails skill building for careers in fields like information technology, health sciences, and advanced manufacturing. Secondary CTE is meant to be a coherent pathway, started in high school, into authentic technical education options, and credentials, at the postsecondary level.
Why don’t we see more communities embracing high-quality CTE? Why are students nationwide taking fewer CTE courses today instead of more? Would it help if policymakers, educators, parents, and kids could see that CTE today isn’t a dead-end track?
That’s where Fordham’s new study Career and Technical Education in High School: Does It Improve Student Outcomes? comes in. We wanted to know whether the students who participated in CTE—and especially those “concentrating” by taking a sequence of three or more courses aligned to a career in a specific industry—were achieving better outcomes than their peers. Were they more likely to graduate from high school? Enroll in postsecondary education? And, perhaps most importantly, be employed and earn higher wages?
To find out, we enlisted Shaun M. Dougherty, assistant professor of educational policy and leadership at the University of Connecticut’s Neag School of Education, who has previously studied high school CTE in Massachusetts and New York City. For this study, he coordinated with the Arkansas Research Center to access and analyze their truly remarkable database, which combines secondary, postsecondary, and labor market information. He designed and executed a rigorous analytic strategy that uses three different statistical approaches, giving us great confidence in his findings.
And what are they?
Arkansas students with greater exposure to CTE are more likely to graduate, enroll in a two-year college, be employed, and have higher wages. Furthermore, those students are just as likely to pursue a four-year degree as their peers. In addition, students who “concentrate” their CTE coursework are 21 percentage points more likely to graduate from high school than otherwise similar students—a truly staggering number. Concentration has positive links with the other outcomes as well. Moreover, the results of this study suggest that CTE provides the greatest boost to the kids who may need it most—boys and students from low-income families.
And the good news is that CTE does not have to be super expensive and highly exclusive to have positive effects. The form of CTE we studied in Arkansas is CTE at its most egalitarian and scalable: most students took courses at their comprehensive high school, and some did so at regional technical centers. And it worked.
Overall, this study adds to the growing body of evidence on the impact of high school CTE. Policymakers in other states should heed Arkansas’s example by increasing their investment in secondary CTE that is aligned to the demands of the local labor market. It’s also high time to reauthorize the Perkins Act and increase federal investment in this area. The scars of the recession have faded, but they haven’t disappeared. Connecting more young people with available opportunities by giving them the skills employers are seeking should be a national priority.
Last week marked the beginning of the annual New York State English and math tests for grades 3–8. While Catholic schools (and their teachers’ unions) have largely stayed out of the political fray when it comes to standards and testing, we at the Partnership Schools—a network of six urban Catholic schools in Harlem and the South Bronx—voluntarily participate in the New York Common Core assessments.
Catholic schools have long been unapologetic supporters of high standards for all children, and we at the Partnership use results from the New York tests both to ensure that we are keeping expectations high for our students and to benchmark our students’ academic growth.
In an age when some people are opting out, we are opting in.
Of course, we’re aware of the pushback against standards and tests, particularly in our home state of New York. But we believe that pushback is misguided and that the opt-out movement is misleading parents. In particular, it is using tests as a scapegoat for implementation decisions that are mostly within the power of educators and education leaders to change.
As choice schools, we’re fortunate. Our parents—many of whom come from the nation’s poorest congressional district—opt into our schools. And they make sacrifices to do so, paying on average about $250 per month in tuition. Our school leaders and teachers—who are pillars of their communities and who are deeply committed to the mission of Catholic education—have deep respect for the families and communities we serve, and they have built tremendous trust with parents and students.
That’s why we take seriously any parent’s decision to opt out. As Catholic educators, we understand that parents are a child’s first teachers. We are partners with parents in their children’s education, and we take that partnership very seriously. We also understand that when a parent comes to us asking to opt out of a test, it’s because they believe that decision serves their child’s best interests.
Unfortunately, when New York State’s public school teachers’ unions have spent the past two years actively supporting a misinformation campaign about testing, even Catholic school parents are at risk of being misled. And the irony is not lost on those of us who work in Empire State Catholic schools that those special interests fighting any effort to expand school choice are only comfortable with “parent choice” if those choices serve their interests.
While very few of our parents opted out of the test last week, we did receive at least one form letter similar to those that have been crafted and distributed by union-backed organizations that are hostile to standardized testing. The letter included the kind of heated and hyperbolic rhetoric that is now common in education debates. It explained that standardized testing “is consuming a child’s academic year” and that it “forces [teachers] to ‘teach to test’ and takes the joy out of learning.”
From our perspective, even one student opting out is too many. So it’s important to set the record straight: In New York State, the English and math tests take up less than 1 percent of the total time a student spends in school. That’s hardly excessive.
And to be clear, tests don’t “force” anything. They measure. Moves to scrap core content instruction in favor of test preparation are leadership decisions, not policy decisions. Worse still, exchanging core content instruction for test preparation isn’t very effective (in English particularly). While test preparation may give a modest short-term boost in scores, it does very little to improve student reading comprehension over the long term.
That’s why in our schools, we say—and believe—that the best “test prep” we can offer our students is knowledge-rich instruction in the core content areas. And that’s why choosing the right curriculum, ensuring that our teachers have the resources and support they need—and giving them the flexibility to innovate when they have to meet their students’ needs—is the foundation of our school model.
But most important, independently developed standardized tests are essential to the broader education system. Recently, a Johns Hopkins University study found that “when evaluating a black student, white teachers expect significantly less academic success than black teachers,” and that “this is especially true for black boys.”
Moreover, “for black students, particularly black boys, having a non-black teacher in a tenth-grade subject made them much less likely to pursue that subject by enrolling in similar classes. This suggests biased expectations by teachers have long-term effects on student outcomes.”
This isn’t the first study to demonstrate that teachers often have different expectations for students of color (see here for another), and together, this research suggests the very real need for independent measures to ensure that all students are being held to the same bar regardless of race or socioeconomic status.
Of course, the biases revealed by these studies are often unconscious and undoubtedly unintentional; but that doesn’t make them any less real. And if we ignore them and eliminate standardized benchmarks of student learning, or if we rely on a system informed only by teacher-created tests and teacher-conferred grades, we could be intentionally systematizing the kind of unconscious bias that holds our most vulnerable children back.
The special interests behind New York’s well-funded opt-out campaign are determined to ignore these inconvenient facts. And in doing so, they help perpetuate an unequal education system and condemn the best tool we have to expose those inequalities.
It’s time to opt out of the overblown accusations and get back to the work of educating our kids.
Editor’s note: This article was originally published in a slightly different form at the Seventy Four.
It’s not really a surprise that the progress of school choice at the state level is so often tethered to the fortunes of Republican lawmakers. A number of Democratic interest groups (teachers’ unions chief among them, though they’re certainly not alone) have traditionally lined up against charter schools and voucher initiatives, and down-ballot officeholders have been slow to follow the lead of national figures (and charter fans) like Bill Clinton and Barack Obama. That’s why it’s so striking to observe this month’s developments in Maryland, where an overwhelmingly Democratic state legislature has teamed up with the GOP to carve out new funding for private school scholarships aimed at low-income students. It’s important to keep a sense of proportion; the initiative accounts for just $5 million out of a $42 billion state budget, and legislators rejected a far more ambitious proposal for private school tax credits. Still, the move is a major step forward for private schools of choice in the Old Line State.
With New York City authorities already facing serious questions about student safety, the country’s biggest school district must now address a class action lawsuit filed on behalf of students who have been the victims of bullying. The case, propelled forward by charter-friendly advocacy group Families for Excellent Schools, alleges that the New York City Department of Education violated state law by failing to address a wave of violence perpetrated both by school bullies and abusive teachers. It’s a tricky time for the issue to surface, since the district is still roiled by a spate of scary episodes involving firearms smuggled into school facilities. In response to media coverage of the incidents, Mayor Bill de Blasio has entreated the public to “put a lot of trust in” the NYPD, since “they’ve continually driven down crime in our schools.” This will be a critical case to watch, since it tracks the continued pivot of local choice organizations to the safety and discipline front; Families for Excellent Schools is essentially a pure product of Eva Moskowitz, who has taken harsh swipes at what she perceives as the district’s too-lax disciplinary procedures. The policies at Moskowitz’s Success Academy, meanwhile, have been criticized as draconian. It may be up to the courts to decide which vision will prevail.
Even though Denver’s contemptible football team supplanted a far more worthy AFC rival and triumphed in the most unwatchable Super Bowl ever played, its civic stock has actually risen over the past few years—in no small part due to Fordham’s commentary. We trumpeted the city’s virtues on school choice repeatedly in 2015, first in our report on district-charter relations and then in our authoritative ranking of “choice friendliness” in major American school districts. In the former, we happily proclaimed Denver to be “the country’s most comprehensive attempt to improve school quality by engaging charters as equal partners through a portfolio approach.” David Osborne has written a terrific new article for Education Next showcasing the fruits of that effort. Looking back on the dark days of 2005, when then-Superintendent (and present-day U.S. Senator) Michael Bennet was charged with repopulating half-empty district classrooms, the story chronicles the increasing cooperation between the district and charter sectors—and the phenomenal academic gains that have gone with it.
And while we’re focusing on cheery news, the good people of Nevada are breathing a sigh of relief this week over the uneventful start to their standardized testing season. The state’s computer-based Smarter Balanced assessments have proceeded smoothly thus far, with around 7 percent of eligible students in Clark County (the state’s largest, and home to three-quarters of its residents) completing exams over the past month. Though that figure stops somewhat short of mind-blowing, it’s a huge step forward from the farce that took place last year, when server problems prevented thousands of kids from logging in for the assessments and the entire testing schedule had to be revised for many schools. We’ll take progress any way we can get it.
On this week’s podcast, Mike Petrilli and Alyssa Schwenk refute the idea that CTE is at odds with college, critique draft ESSA regulations’ neglect of high-achievers, and discuss a New York City lawsuit alleging the city’s schools are unsafe. During the Research Minute, Amber Northern explains charter high schools’ effects on long-term attainment and earnings.
Tim R. Sass, Ron W. Zimmer, Brian P. Gill, and T. Kevin Booker, “Charter High Schools’ Effects on Long-Term Attainment and Earnings,” Journal of Policy Analysis and Management (April 2016).
A new publication by Tim Sass and colleagues examines the effect of charter high schools on long-term attainment and earnings. The study builds on others by the same authors, as well as a working paper of the study released over two years ago.
The authors focus on charter high schools in Florida, where they can access a wealth of data from the state department of education’s longitudinal database. That information includes various demographic and achievement data for K–12 students, as well as data on students enrolled in community colleges and four-year universities inside and outside of Florida. (The latter info was gleaned from the National Student Clearinghouse and other sources, and employment outcomes and earnings are merged from another state database.)
The sample includes four cohorts of eighth-grade students; the first cohort enrolled in 1997–98, the last in 2000–01. The authors are able to observe labor outcomes for students up to twelve years removed from their eighth-grade year.
Before we get to the results, let’s address the biggest analytic hurdle to be overcome: selection bias—meaning that charter school students, by the very act of choosing an educational alternative, may be different in unobservable ways from those who attend traditional public schools (TPS). Indeed, the vast majority of the paper discusses not the findings but the various attempts to address this inherent issue in virtually all choice impact studies. Absent randomized lottery data, the authors limit their sample to students who were enrolled in charter schools in eighth grade, positing that they all possess similar unobserved characteristics. They then divide them into two groups: treatment students who enrolled in charter schools again in ninth grade and control students who switched to a TPS. The analysts also match students on observable baseline characteristics such as family income and eighth-grade test scores.
This is a very reasonable approach. But it is not without its flaws, as the analysts readily admit—especially since “back-end selection bias could occur through the comparison students’ choice to exit the charter sector after eighth grade.” By targeting a “typical transition year,” however, they reason that this bias is less significant than initial selection into the charter sector because nearly all are enrolling in a new school regardless of whether they are changing sectors.
And now for those results.
First, charter high school enrollment is positively linked to educational attainment. Specifically, there is a six-percentage-point increase in the likelihood of earning a high school diploma within five years and a nine-percentage-point increase in the likelihood of attending college. There is also a positive relationship between charter high school attendance and college persistence (defined as attending college at least one semester in consecutive years), with roughly a twelve-percentage-point boost for charter high school students. The latter also see the equivalent of a 12 percent increase in maximum earnings from age twenty-three to twenty-five (again, compared to students who attended charter middle school but transferred to a traditional high school).
The paper notes (as have others) that these results are particularly intriguing because prior studies have shown that charter schools in Florida have not impacted student test scores much. So, the logic goes, perhaps we shouldn’t pay much heed to test scores. Yet the literature is not clear on that point. (Even the study cited in the report as showing poor charter results finds that by year five, charters are on par with TPS in math and produce better reading achievement. Plus, that study does not target charter high schools, much less these charter high schools.)
Besides that, we can’t directly apply findings from one state to different charter schools in different states with different kids and different policies. (And quasi-experimental studies, especially, can’t remove every shred of bias due to unmeasured characteristics between treatment and control groups.)
Also remember that these are findings based on averages; yet kids attend specific charter schools. We simply don’t know if there are actual high schools that produce lackluster test scores but get impressive graduation, college completion, and earnings bumps.
So what’s the takeaway? First, there’s a growing body of evidence indicating that charter schools, along with Catholic schools, produce very strong real-world outcomes. Second, despite what others might imply, we should continue to judge high schools in part by their test scores—and shouldn’t shy away from shutting down the ones that, year after year, post dismal results for entire groups of kids.
SOURCE: Tim R. Sass, Ron W. Zimmer, Brian P. Gill, and T. Kevin Booker, “Charter High Schools’ Effects on Long-Term Attainment and Earnings,” Journal of Policy Analysis and Management (April 2016).
Credit recovery is education’s Faustian pact. We remain not very good at raising most students to respectable standards. But neither can we refuse to graduate boxcar numbers of kids who don’t measure up. Enter credit recovery, an opaque, impressionistic, and deeply unsatisfying method of declaring at-risk kids “back on track” for graduation.
This pair of studies from the American Institutes for Research and the University of Chicago Consortium on School Research looks at more than 1,200 ninth graders in seventeen Chicago public schools who were enrolled in a credit recovery course the summer after failing algebra I a few years ago. Half took the class online, half in face-to-face classes. Providing credit recovery is now one of the most common purposes of online courses; but “evidence of the efficacy of online credit recovery is lacking,” the authors note with considerable understatement.
The first report analyzes the role of in-class mentors in online classrooms, examining whether students benefited from their additional instructional support. They did—kind of. The authors suggest that “instructionally supportive mentors” (those with subject matter expertise, not just a warm body providing “support”) lead students to navigate the course with greater depth and less breadth. They seem not to have considered that out-of-subject mentors may have simply helped in the only way they knew how—by encouraging students to push through more of the material. Regardless, whether students took the online class or the face-to-face version, “multiple measures of algebra learning were low for students in both the face-to-face and online classes, suggesting little evidence of content recovery in the context of their credit recovery courses.” Right.
The second brief takes a longer view, looking not only at how the students who failed algebra I performed in online versus face-to-face credit recovery classes, but also at the much more critical question of where these students stood a year later. A strong majority of both groups successfully “recovered” their lost credit (66 percent of online students versus 76 percent who took face-to-face courses), but students who took the online course generally found it more difficult. They were also more likely than students who took the live class to have a negative attitude about math (perhaps because the course was more difficult?). Online students also posted significantly lower grades. To earn credit, students needed a D or better; more than half (53 percent) of students in the face-to-face Chicago classes earned an A, B, or C, compared to only 31 percent of the online students, and a whopping 36 percent of online learners—the biggest chunk—squeaked by with a D. That isn’t an entirely black mark on the online course and its teachers, who may be more clear-eyed and less emotionally swayed by students who are not physically sitting in front of them. Critically—and this is the good news—the authors found no significant differences between online and face-to-face students in how they performed in subsequent math classes or in their likelihood of being on track for graduation a year later. It’s also the bad news.
For at-risk, low-achieving students, the pair of studies provides important cautions about online credit recovery—and the practice generally. The authors suggest that online credit recovery course models offering more instructional support and opportunities for remediation of earlier content would benefit at least some students who fail regular classes. But as the authors correctly observe, their work “raises questions” about the balance between remediation and rigor in credit recovery. “Instruction that matches students’ skill levels may not correspond well with the expectations for the content of the courses required for high school graduation or college readiness, especially within the constricted timeframe of a summer course.”
Our deal with the devil remains in full effect.
SOURCE: Jessica Heppen et al., “Getting Back on Track: Comparing the Effects of Online and Face-to-Face Credit Recovery in Algebra I,” American Institutes for Research (April 2016); and Suzanne Taylor et al., “Getting Back on Track: The Role of In-Person Instructional Support for Students Taking Online Credit Recovery,” American Institutes for Research (April 2016).
A new working paper from CALDER analyzes whether federally funded school turnarounds in North Carolina have affected student outcomes.
The study uses achievement, demographic, and descriptive data on teachers and principals for K–8 schools in the 2010–2014 school years, as well as teacher survey data from North Carolina’s biennial Teacher Working Conditions Survey. The data set includes eighty-five elementary and middle schools that were subject to the state’s school turnaround program, which was supported by federal Race to the Top funds. Most schools used a “transformation model” of turnaround that required replacing the principal, along with other instructional interventions like increased learning time (but no teacher terminations).
The analysis uses a regression discontinuity design, wherein assignment to the treatment and control groups is based on whether a school falls just above or just below the cut point for placement into the turnaround program. The idea is that landing just above or just below the cut is essentially random.
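For readers unfamiliar with the method, the logic of a regression discontinuity estimate can be sketched with a small simulation. This is purely illustrative and is not the authors’ code: the running variable, sample size, bandwidth, and (negative) effect size below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated school performance composite, centered at the cutoff (0).
# Schools below the cutoff enter the turnaround program (treatment).
n = 2000
score = rng.uniform(-1, 1, n)          # running variable
treated = (score < 0).astype(float)    # assignment determined at the cutoff
true_effect = -0.30                    # hypothetical program effect
outcome = 0.5 * score + true_effect * treated + rng.normal(0, 0.2, n)

# Local linear RD: within a bandwidth around the cutoff, fit separate
# slopes on each side; the difference in intercepts at the cutoff is
# the estimated treatment effect.
bw = 0.5
mask = np.abs(score) < bw
X = np.column_stack([
    np.ones(mask.sum()),
    treated[mask],
    score[mask],
    score[mask] * treated[mask],
])
beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
effect = beta[1]  # estimated jump in the outcome at the cutoff
print(round(effect, 2))
```

Because schools just on either side of the cut are assumed to be comparable, the jump in outcomes at the cutoff can be read as the causal effect of the program; in this toy run the estimate recovers something close to the simulated -0.30.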
The key findings: The program had a mostly negative effect on test scores in math and reading—especially so in math. It decreased average attendance by between 0.4 and 1.2 percentage points in 2012 (the first full year after the program was implemented); resulted in a higher rate of suspensions in 2012; and led to a drop in school-wide passing rates in math and reading—more so for particular subgroups in each subject. There were also consistently large decreases (0.36–0.64 standard deviations) in reading scores for high-achievers. The program had no effect on teachers’ perceptions of the quality of their schools’ leadership (recall most had new leaders); nor did it do much to change their perceptions of the quality of professional development, which teachers got more of (they also had to complete more required paperwork). The program also led to an exodus of high-achieving students—which contributed to the decrease in scores, but was not the sole reason.
Pretty dismal stuff.
It is virtually impossible to identify why all of these negative outcomes occurred. But for starters: Was the new principal any better than the last? Perhaps turning over most of the staff would be useful. Perhaps it takes longer than three years to see impacts. Perhaps disadvantaged kids need wraparound services too. Perhaps a district-wide approach is better than a school approach. Perhaps closure is better still. Like most interventions in education, there is no one formula for success. And the black box is pretty dark inside.
SOURCE: Jennifer A. Heissel and Helen Ladd, "School Turnaround in North Carolina: A Regression Discontinuity Analysis," CALDER (March 2016).