One of the tougher accountability nuts to crack is how to gauge educational quality in the early elementary grades. Federal education law does not require state exams until third grade, and states generally choose not to administer end-of-year assessments in grades K–2. Despite the importance of these formative years in children’s lives, the absence of standardized testing makes measuring their academic growth difficult.
Some early-childhood education analysts have proposed that states rely on classroom observation or attendance data to evaluate quality. But a number of states, including Ohio, do require an assessment when students enter kindergarten. Ohio’s assessment, for example, includes content across four domains: social foundations, math, language and literacy, and physical well-being and motor development. A child’s teacher administers the assessment. The results are largely diagnostic, meant to inform instruction for the coming year, and are not widely used in accountability systems. Could states use these baseline data, combined with later state exam scores, to measure growth?
A team of analysts from Mathematica recently explored this possibility using data from Maryland, a state that administers a Kindergarten Readiness Assessment (KRA). They examine student-level data from the first cohort of children taking the KRA in 2014–15, along with those students’ third-grade test scores from 2017–18. Approximately 54,000 students had scores on both assessments and are thus included in the analysis. Another 26,000 students, however, are excluded because they were missing either KRA or third-grade scores (due mainly to exits from or entrances into the school system after kindergarten).
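In effect, the analytic sample is the intersection of the two statewide files. A toy sketch of that linkage step, using hypothetical identifiers and scores rather than Maryland’s actual data, looks like this:

```python
# Toy illustration of the sample construction: keep only students who have
# both a KRA record and a third-grade test record. All data are hypothetical.
import pandas as pd

kra = pd.DataFrame({"student_id": [1, 2, 3, 4], "kra_score": [265, 270, 255, 280]})
grade3 = pd.DataFrame({"student_id": [2, 3, 5], "grade3_score": [740, 725, 710]})

matched = kra.merge(grade3, on="student_id", how="inner")  # included in the analysis
excluded = (len(kra) - len(matched)) + (len(grade3) - len(matched))  # missing one score

print(f"Included: {len(matched)}, Excluded: {excluded}")
```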
The research team first demonstrates that academic growth can indeed be measured using KRA and third-grade scores. Relying on the same methodology that the state uses for accountability in higher grades, known as “student growth percentiles,” they calculate K–3 growth results for Maryland elementary schools.
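To make the mechanics concrete, here is a minimal sketch of the growth-percentile idea. It is an illustration under simplifying assumptions, not the report’s or the state’s actual procedure (operational student growth percentiles are typically estimated with quantile regression): students are grouped with peers who had similar baseline scores, each student’s later score is converted to a percentile rank within that peer group, and a school’s growth measure summarizes its students’ percentiles. The data and column names below are hypothetical.

```python
# Simplified sketch of student growth percentiles (hypothetical data;
# decile grouping stands in for the quantile-regression approach used in
# operational growth models).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "school": rng.integers(1, 21, size=n),        # 20 hypothetical schools
    "kra_score": rng.normal(270, 15, size=n),     # kindergarten baseline
})
# Third-grade score loosely related to the baseline, plus noise
df["grade3_score"] = 0.6 * df["kra_score"] + rng.normal(0, 20, size=n)

# 1. Group students with similar baselines (deciles of KRA scores)
df["kra_decile"] = pd.qcut(df["kra_score"], 10, labels=False)

# 2. Within each baseline group, rank third-grade scores as percentiles (0-100)
df["growth_percentile"] = (
    df.groupby("kra_decile")["grade3_score"].rank(pct=True) * 100
)

# 3. A school's growth measure is the median of its students' percentiles
school_growth = df.groupby("school")["growth_percentile"].median()
print(school_growth.sort_values(ascending=False).head())
```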
However, their analysis raises a few concerns about the validity of the results, that is, the extent to which they reflect schools’ true contributions to student growth. First, because of the significant time between assessments, a number of students switched schools within the Maryland system. To address mobility, the analysts apportion responsibility for growth based on the amount of time spent in each school. This “shared accountability” is sensible, but without annual testing, uncertainty remains about which school actually contributed more to transfer students’ growth. Second, the researchers discover only a modest correlation between KRA and third-grade test scores. This, they suggest, indicates that the two assessments may be “measuring different aspects of academic ability.” Compared with correlations in higher grades (for example, between third- and sixth-grade test scores), the KRA–third-grade correlation is weaker, leading the authors to conclude that the K–3 growth results are “likely less valid” than those calculated in the higher grades.
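The time-based apportionment can be sketched in the same spirit. The snippet below, again with made-up data and weights rather than the report’s exact method, counts a transfer student’s growth percentile toward each school in proportion to the share of K–3 time the student spent there.

```python
# Hypothetical sketch of time-based "shared accountability" for mobile students.
import pandas as pd

# One row per (student, school) enrollment spell
spells = pd.DataFrame({
    "student": [1, 1, 2, 3, 3],
    "school": ["A", "B", "A", "B", "C"],
    "share_of_time": [0.75, 0.25, 1.0, 0.5, 0.5],  # fraction of K-3 spent there
    "growth_percentile": [62, 62, 48, 80, 80],     # same value on all of a student's rows
})

# Time-weighted average of growth percentiles, by school
spells["weighted"] = spells["growth_percentile"] * spells["share_of_time"]
by_school = spells.groupby("school")
school_growth = by_school["weighted"].sum() / by_school["share_of_time"].sum()
print(school_growth)
```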
Though imperfect, a K–3 growth measure may be better than flying nearly blind about educational quality. And a measure similar to what is used in this report could be superior to Ohio’s well-intended but rudimentary approach to measuring growth in the early grades. At the same time, policymakers should heed the report’s suggestions about implementing a K–3 growth measure: States should either place less weight on the results in an accountability system or report the growth data but not use them to inform ratings or consequences. Sound advice, given the limitations of such a measure.
Source: Lisa Dragoset et al., Measuring School Performance for Early Elementary Grades in Maryland, REL Mid-Atlantic (2019).