In the last month, two reports have renewed questions about the current direction of states’ high school assessments. The first, End-of-Course Exams and Student Outcomes, from Fordham’s Adam Tyner and Lafayette College’s Matthew Larsen, finds that end-of-course assessments (EOCs)—tests designed around specific high school courses, like algebra II or biology—are correlated with higher high school graduation rates and college admissions test scores. Despite these encouraging results, Tyner and Larsen also find that EOCs may be slipping in popularity among states. The second report, The New Testing Landscape: How State Assessments are Changing under the Federal Every Student Succeeds Act, by Lynn Olson of FutureEd, reveals one of the reasons why: the rise of the ACT or SAT as states’ primary high school assessments.
ESSA opened new possibilities for how states assess high school students, many of which put EOCs at a disadvantage. Most notably, ESSA gave states the flexibility to let districts choose a nationally recognized high school assessment, like the SAT or ACT, instead of the state test, so long as the nationally recognized test met key quality standards. So far, only a few states, such as Oklahoma and West Virginia, have taken advantage of the option—likely due to the complications of using results from two different assessments for accountability and reporting. However, the explicit reference in ESSA and its related regulations to college entrance or placement tests could be perceived as giving the SAT and ACT the imprimatur of the federal government—a perception that’s becoming reality now that the U.S. Department of Education’s (ED) peer review of state assessments has found that the ACT “substantially” met federal requirements in at least one state (so far, the SAT has only “partially” met them).
What’s more, it’s easier to demonstrate compliance with ESSA using a high school assessment given in a single grade. When I worked at ED, states using EOCs were often caught off guard when asked which test they were using to meet federal requirements. For example, one state wanted to use all three of its math EOCs, arguing that—together—they covered the entirety of the state’s standards and best gauged student readiness for college and career. While that was a logical claim, it ran afoul of federal law because only some students, not all, were assessed on all three EOCs.
Thus, one of the main advantages of EOCs—that students take them while enrolled in the corresponding course, so the tests better reflect what students are learning in the classroom—became a liability. In the end, most states defaulted to using the first EOC in the sequence (e.g., algebra I) as their federal test because it was the only one all students took statewide. In turn, this spotlighted the fact that such states were choosing to administer more tests than federal law required—a tough case to make at a time when the opt-out movement was in full swing, especially among high schoolers focused on Advanced Placement and college admissions tests (an oft-cited reason states switched to the ACT or SAT).
On top of anti-testing sentiment, budget constraints pushed states toward tests like the ACT and SAT and away from EOCs. ESSA did not increase the authorized level of funding for assessments and, in fact, caps formula funding for state assessments at $369.1 million. Given that states must administer general assessments and alternate assessments for students with significant cognitive disabilities in three subjects, as well as English language proficiency tests, federal dollars do not go far enough to cover the costs of maintaining current testing programs—let alone the costs of upgrading assessments to use new technology, include innovative items, produce results faster, incorporate new accessibility features, or meet other priorities. With funds stretched thin, states face pressure to keep testing costs low, and reducing the number of assessments administered in high school or choosing an off-the-shelf assessment like the ACT or SAT is an easy solution—even if experts question whether these assessments are fully aligned to state standards or offer adequate accommodations for students with disabilities and English learners.
That’s not to say the price tag of the ACT or SAT is its only upside. When a state uses the ACT or SAT to meet federal requirements, the state provides the assessment free of charge to all students—which could increase college enrollment, especially among historically underserved students who might not otherwise take the test. Switching to the ACT or SAT can also quell backlash from supporters of the opt-out movement. The tests are typically shorter than many states have used in the past, especially considering the cumulative time required to take multiple EOCs, and it is nearly impossible to argue that college admissions tests are irrelevant or offer no benefit to students. Research from ACT, the College Board, and others also finds that test results, alone or in combination with other indicators, are predictive of student success in postsecondary education—though not as strong or consistent a predictor as high school GPA.
This is where Fordham and other proponents of EOCs should think about additional research—and advocacy. EOC proponents need to demonstrate that, despite the cost and time associated with EOCs, these assessments offer many of the benefits of the SAT or ACT without the pitfalls related to standards alignment and accessibility. While it is encouraging that Tyner and Larsen found a positive relationship between the number of EOCs a state offered and student performance on the SAT or ACT, more compelling still would be research showing that EOCs are associated with positive postsecondary outcomes, including college enrollment, persistence, and credit accumulation—one of the core arguments made in favor of using the SAT or ACT as a statewide assessment.
Similarly, research on whether the effects of EOCs differ based on how they are used would help make the case for EOCs to policymakers. One reason the SAT and ACT are popular is the tangible benefit students gain from taking them: college admission. EOCs could confer similar benefits—and some states have tried to make that happen by incorporating EOCs into student course grades and GPAs, using results as a way for students to test out of college remediation, or requiring students to pass EOCs to graduate (i.e., using them as high school exit exams). Although Fordham’s study found that EOCs, in general, were related to higher graduation rates, the EOCs in the study were used for accountability in a variety of ways. Given past research showing that exit exams decrease graduation rates—in contrast with Tyner and Larsen’s findings—additional research on whether the effect of EOCs on graduation and other outcomes varies with the type of stakes attached to them would help policymakers determine the right incentives to place on EOC results.
EOCs may never carry the same cachet or brand-name recognition that the SAT and ACT enjoy with college admissions officers and families, but they may not need to if advocates can demonstrate that they provide other, meaningful benefits for student learning and postsecondary success.