Previous research has found that oversubscribed urban charter schools produce large academic gains for their students. But do those gains reflect test score inflation, defined by one assessment expert as “increases in scores that do not signal a commensurate increase in proficiency in the domain of interest”? To explore this question, a recent study examines state testing data from 2006 to 2011 at nine Boston middle school charters with lottery-based admissions. Prior studies exploiting the random nature of these admissions lotteries have found that the schools produce substantial learning gains on the Massachusetts Comprehensive Assessment System (MCAS).
To carry out the analysis, author Sarah Cohodes breaks down the learning gains by the various components of the state assessment, much as one might disaggregate overall gains by student subgroup. For example, a math assessment covers several distinct domains (e.g., geometry versus statistics), with some topics tested more frequently than others. The hypothesis is as follows: if the gains are attributable to score inflation, we would expect stronger results on frequently tested items than on rarely tested ones. In line with their incentives, teachers might strategically focus instruction on the items with the highest odds of appearing on the exam, thereby inflating scores. This is plausible; as the author notes, a case study of Boston charters revealed that “teachers use publicly available MCAS items from prior years...and teachers constantly track their students’ progress on content that is tested.”
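To make the logic concrete, the sketch below simulates a single hypothetical lottery cohort and computes a simple Wald (instrumental-variables) estimate of the charter effect for each math domain, using the randomly assigned lottery offer as the instrument for attendance. The sample size, compliance rate, and built-in 0.30 standard deviation effect are all illustrative assumptions, not the study’s actual data or estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical sample of lottery applicants

# Offers are randomly assigned in the lottery, so they are independent of ability
offer = rng.binomial(1, 0.5, n)
# Attendance follows the offer imperfectly (illustrative compliance gap)
attend = (rng.random(n) < 0.10 + 0.70 * offer).astype(float)

domains = ["geometry", "measurement", "number sense and operations",
           "patterns, algebra, and relations", "data analysis, statistics, and probability"]
true_effect = 0.30  # built in to be identical across domains, mirroring the paper's finding

for d in domains:
    # Standardized domain subscore = noise + charter effect for attendees
    score = rng.normal(0.0, 1.0, n) + true_effect * attend
    # Wald/IV estimate: reduced form (offer -> score) over first stage (offer -> attendance)
    reduced_form = score[offer == 1].mean() - score[offer == 0].mean()
    first_stage = attend[offer == 1].mean() - attend[offer == 0].mean()
    print(f"{d:45s} estimated effect ~ {reduced_form / first_stage:+.2f} sd")
```

Under the score-inflation hypothesis, the estimates would be noticeably larger for the domains teachers expect to see tested most often; an even spread across domains points the other way.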
The study finds that the Boston charter school effect is dispersed evenly across the test items. In sixth- through eighth-grade math, charter students enjoyed gains of 0.25–0.35 standard deviations across all five topics: geometry; measurement; number sense and operations; patterns, algebra, and relations; and data analysis, statistics, and probability. Comparable results emerged across the two topics within the English language arts exam (reading, as well as language and literature). Cohodes also conducts several variations of the analysis, including one that assigns test items to their relevant academic standard and then determines how frequently that standard appears on exams. She finds that charter students make gains of similar magnitude regardless of how rarely or how often the tested standard appears. Finally, the analysis considers whether the math and ELA results differ from those on the science exam, a lower-stakes test; the gains there are of comparable size.
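The standards-frequency variation can be illustrated in the same stylized way. The sketch below, again entirely hypothetical (the frequencies, sample, and 0.30 standard deviation effect are assumptions rather than the paper’s data), estimates an effect for each of several standards that appear on past exams at different rates and then checks whether effect size rises with testing frequency.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # same stylized lottery setup as the previous sketch

offer = rng.binomial(1, 0.5, n)
attend = (rng.random(n) < 0.10 + 0.70 * offer).astype(float)

# Hypothetical standards, each appearing on a different share of past exams
frequencies = np.linspace(0.1, 0.9, 9)
effects = []
for f in frequencies:
    # The simulated gain is unrelated to how often the standard is tested
    score = rng.normal(0.0, 1.0, n) + 0.30 * attend
    reduced_form = score[offer == 1].mean() - score[offer == 0].mean()
    first_stage = attend[offer == 1].mean() - attend[offer == 0].mean()
    effects.append(reduced_form / first_stage)

# Slope of estimated effect on testing frequency; coaching toward common material
# would push this upward, a flat line matches the paper's finding
slope = np.polyfit(frequencies, effects, 1)[0]
print(f"slope of effect vs. testing frequency: {slope:+.3f}")
```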
The study cannot prove that no test preparation at all is occurring in these charter schools; the teachers could simply be very effective at preparing students across the entire spectrum of assessed topics. But it does appear that high-performing charters are not inappropriately gaming the test by focusing on a narrow set of frequently tested items at the expense of others. Perhaps this study can defuse some of the finger-pointing aimed at high-performing charters and refocus our attention on learning how the finest charter schools “teach to the student.”
SOURCE: Sarah Cohodes, “Teaching to the Student: Charter School Effectiveness in Spite of Perverse Incentives,” Education Finance and Policy (Winter 2016): 1–42.