Education reformers in the United States have stumbled when it comes to high schools, and the achievement evidence shows it. National Assessment results in grade twelve have been flat for a very long time. ACT and SAT scores are flat. U.S. results on PISA and TIMSS are essentially flat. College remediation rates—and dropout rates—remain high. Advanced Placement (AP) participation is up, but success on AP exams is not—and for minority students it’s down. And while high school graduation rates are up—and it’s indisputably a good thing for young people to earn that credential—it’s not so good when there’s reason to believe the diploma does not signify levels of learning that augur success in post-high-school pursuits.
We at the Fordham Institute have a longstanding interest in strengthening student achievement and school performance, and it’s no secret that we’re accountability hawks: We believe strongly that results—and growth in results—are what matter in education, and we’ve been concerned for some time about ways in which the appearance or assertion of improvement may conceal something far more disappointing. In that connection, previous Fordham studies have unmasked what we termed the “proficiency illusion,” the “accountability illusion,” the rise of often-questionable “credit recovery,” and the discrepancy between teacher-conferred grades and student performance on statewide assessments.
On the upside, we’ve also documented respectable—and authentic—achievement gains in the early grades, particularly among disadvantaged and low-achieving youngsters and children of color. But high schools, as we’ve noted on multiple occasions, remain a huge challenge.
Nor have federal efforts to strengthen academic performance via school accountability ever gotten much traction at the high school level, where—under No Child Left Behind and now the Every Student Succeeds Act—there’s been more emphasis on graduation rates than on student achievement. To their credit, most states, at one point or another, have supplemented those efforts by instituting their own exam-based requirements for students before awarding diplomas. These have taken the form of multisubject graduation tests—the best known probably being the Massachusetts MCAS exam—as well as subject-specific end-of-course exams (EOCs).
Both were extensively used until just a few years ago. At their high-water mark, graduation tests were required by thirty states and EOCs were employed by thirty jurisdictions (there’s double counting there, as the two types of tests overlap somewhat). Both, however, are now in decline. For the class of 2020, students in just twelve states will have taken a graduation test, and in twenty-six states, students will have taken one or more EOCs.
Three factors seem to have driven that decline: the overriding push for higher graduation rates, which militates against anything that might get in the way; the nationwide backlash against testing in general; and a handful of studies indicating that requiring students to pass a graduation test may discourage them and push more of them to drop out, which is bad for those students and depresses the graduation rate, all without much evidence of a positive impact on achievement.
Yet very little prior research has looked at EOCs in particular. Our new report, End-of-Course Exams and Student Outcomes, helps remedy that. We wondered: How, exactly, do states employ EOCs? And what difference, if any, do they make for student achievement and graduation rates? If they cause more harm than good, states may be right to downplay or discard them. If, on the other hand—and unlike graduation exams—they do good things for kids or schools, it’s possible that states, in turning away from EOCs, are throwing a healthy baby out with the testing bathwater.
We entrusted this inquiry to Fordham’s own Adam Tyner and Lafayette College economist Matthew Larsen, and they’ve done a first-rate job, the more so considering how challenging it is to corral EOCs separately from other forms of testing, how tricky it is to determine exactly what a test is being “used for,” and how many different tests and states are involved over how long a span of years. It’s also a big problem that the nation lacks a reliable gauge of state-by-state achievement at the twelfth-grade level—a challenge that the National Assessment Governing Board recently promised to address, but not until 2027!
Tyner and Larsen learned much that’s worth knowing and sharing, because the implications for state (and district and school) policy and practice are potentially quite valuable. Probably most important: EOCs, properly deployed, yield positive (albeit modest) academic benefits, and they do so without causing kids to drop out or graduation rates to falter. “In other words,” write the authors, “the key argument against exit exams—that they depress graduation rates—does not hold for EOCs.” Instead, these exams “are generally positively correlated with high school graduation rates.” Better still, “The more EOCs a state administers, the better is student performance on college-entrance exams, suggesting that the positive effects of EOCs may be cumulative.”
Nor are those the only potential benefits associated with strategic deployment of EOCs. External exams are a good way for states to maintain uniform content and rigor in core high school courses and keep a check on the local impulse (often driven as much by parents as by teachers or administrators) to inflate student grades. At the same time, EOCs can motivate students to take those courses more seriously and tend to place teachers and their pupils on the “same team”—for when the exam is external, the teacher becomes more coach than judge.
Such exams also lend themselves to an individualized, “mastery”-based education system in which students proceed through their coursework at their own speed, often with the help of technology as well as teachers. (To maximize this benefit, “end-of-unit” exams would work even better than tests given only at the end of a semester or a year.)
We’re surely not suggesting that states go crazy with EOCs—there’s little danger of that happening in today’s climate anyway—but we do suggest that policymakers take seriously both the good that these exams can do and the potential harm from scrapping or softening them. And softening seems to be underway in more and more places, as states create detours around EOCs for kids who have trouble passing them, delay the year by which they must be passed, or fold them into a student’s course grade rather than requiring that kids actually pass them.
As we said, we’re accountability hawks and thus generally opposed to softening. Yet as Tyner and Larsen note, EOCs have the virtue of flexibility. States can deploy them in various ways: some firmer, some softer, and some simply as a source of valuable information for teachers, parents, school leaders, and policymakers. At a time when states are back in the driver’s seat on school and student accountability, that’s mostly a good thing. But at a time when high school performance is flat, flat, flat, it seems to us that wise educators and policymakers alike should use every tool in the toolbox as they work toward major improvement. EOCs are one such tool.