For many parents and teachers, the Covid experience has confirmed at least two pieces of common sense: It’s hard for kids to learn if they’re not in school, and those who are in school tend to learn more. Yet in some communities, the crisis persists, thanks to one of the pandemic’s most pernicious effects: the surge and apparent normalization of student absenteeism, especially in the many low-income communities that have been slammed by the virus.
Nationally, one in four students was chronically absent in 2020, and it’s not over: 59 percent of Detroit students and nearly half of Los Angeles Unified students are on pace to fit that category in 2021–22. In many parts of America, enrollment data show that tens of thousands of students are simply missing, even after accounting for increases in charter, private, and homeschool enrollment.
A focus during the post-Covid education recovery phase, then, should be making sure that students return to school on a regular basis. Yet our systems for measuring their attendance—and holding schools accountable for getting kids back into classrooms—are woefully inadequate and antiquated.
Most jurisdictions rely exclusively on raw attendance rates, chronic-absenteeism rates, or both; each is highly correlated with student demographics and other factors that schools generally cannot control. Yet such metrics are ubiquitous in state accountability systems, with at least thirty states and the District of Columbia having adopted student absenteeism, chronic absenteeism, or variants thereof as a “measure of school quality” under the Every Student Succeeds Act.
Moreover, many states and districts do a poor job of measuring attendance because they can’t (or choose not to) differentiate between full-day absences and partial-day ones (that is, when students show up for some classes but not others). Prior research shows that partial-day absenteeism is rampant in secondary schools, is mostly unexcused, and accounts for more missed classes than full-day absenteeism. Partial-day absences increase with each middle school grade and then rise dramatically in high school.
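To make that distinction concrete, here is a minimal sketch, in Python, of how period-level records might be rolled up to separate full-day from partial-day absences. The table layout and column names are our own illustration, not any district’s actual data format.

```python
import pandas as pd

# Hypothetical period-level attendance log: one row per student, date, and
# class period. The schema is illustrative, not any district's actual format.
records = pd.DataFrame({
    "student_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "date":       ["2022-04-04"] * 8,
    "period":     [1, 2, 3, 4, 1, 2, 3, 4],
    "present":    [False, False, False, False, True, True, False, True],
})

# Roll the period-level marks up to one row per student per day.
daily = records.groupby(["student_id", "date"])["present"].agg(["sum", "count"])
daily["full_day_absence"] = daily["sum"] == 0  # missed every period
daily["partial_day_absence"] = (daily["sum"] > 0) & (daily["sum"] < daily["count"])

print(daily)
# Student 1 missed all four periods (a full-day absence); student 2 missed
# only period 3. A system that records attendance once per day would mark
# student 2 present, and the missed class would vanish from the data.
```

The point of the exercise: any district that already takes attendance by class period has the raw material to report both measures. The gap lies in how the data are aggregated, not in what is collected.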
In short, the most widely adopted “fifth indicator” under ESSA has been framed in ways that are hopelessly blunt and unfair. But why? After all, collecting detailed attendance data ought to be straightforward in the era of smartphones and Snapchat. And nothing prevents states from designing more sophisticated attendance and absenteeism gauges, as they already do when it comes to test scores. Just as “value-added” calculations derived from those scores help parents and policymakers understand schools’ impact on students’ achievement, so too might a measure of schools’ “attendance value-added” complement raw attendance or chronic-absenteeism rates by highlighting schools’ actual impact on attendance—after taking into account students’ preexisting characteristics and behavior. Such an approach is also fairer, as it doesn’t reward or penalize schools based on the particular students they serve. More important, in our view, if “attendance value-added” were baked into accountability systems, it might encourage more schools and districts to embrace changes that actually boost attendance—which, of course, is the whole point!
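For the methodologically curious, here is a simplified sketch of how such a calculation typically works: predict each student’s attendance from prior behavior and characteristics, then attribute each school’s average unexplained remainder to the school itself. Everything below (the simulated data, the variable names, and the bare-bones regression) is our own assumption for illustration, not Liu’s actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated records standing in for longitudinal district data. All names,
# coefficients, and controls are assumptions made purely for illustration.
rng = np.random.default_rng(42)
n_students, n_schools = 5000, 20
school = rng.integers(0, n_schools, n_students)
prior = rng.uniform(0.6, 1.0, n_students)      # last year's attendance rate
low_income = rng.integers(0, 2, n_students)    # a demographic control
true_effect = rng.normal(0, 0.02, n_schools)   # each school's real impact

df = pd.DataFrame({
    "school": school,
    "prior_attendance": prior,
    "low_income": low_income,
    "attendance": np.clip(
        0.4 + 0.5 * prior - 0.03 * low_income
        + true_effect[school] + rng.normal(0, 0.03, n_students),
        0, 1,
    ),
})

# Step 1: predict attendance from what schools cannot control.
model = smf.ols("attendance ~ prior_attendance + low_income", data=df).fit()
df["residual"] = model.resid

# Step 2: a school's mean residual is its estimated "attendance value-added":
# how much better (or worse) its students attend than their backgrounds predict.
value_added = df.groupby("school")["residual"].mean().sort_values(ascending=False)
print(value_added.head())
```

With real data, one would add richer controls and shrink noisy estimates for small schools, but this two-step logic (predict, then credit the school with the remainder) is the core of any value-added measure.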
What might those changes look like? Some schools form task forces to monitor attendance closely and catch problems early, so that three absences raise a warning flag that triggers a parent phone call. They make home visits if parents can’t be reached by email or phone. They refer students with frequent absences to the school counselor or social worker for case management and counseling. They establish homeroom periods in high school in which students stay with the same teacher all four years, building relationships that make it easier for the educator to monitor and discuss attendance with each student and family.
We wanted to know whether these types of efforts could be isolated and reliably measured, such that schools get credit (or don’t) for making them. Thus Fordham’s new study, Imperfect Attendance: Toward a fairer measure of student absenteeism, by Jing Liu, assistant professor of education at the University of Maryland and the author of numerous studies on the causes and effects of student absenteeism. Liu leveraged sixteen years of administrative data (2002–03 to 2017–18) from a large, demographically diverse California district whose attendance records included partial-day absences. It’s worth your time to read the (fairly short) study and Jing’s policy implications, but for those in a rush, here are four key findings:
- Conventional student-absenteeism measures, including chronic-absenteeism rates, tell us almost nothing about a high school’s impact on student attendance.
- Like test-based value-added, attendance value-added varies widely between schools and is highly stable over time.
- There is suggestive evidence that attendance value-added and test-based value-added capture different dimensions of school quality.
- Attendance value-added is positively correlated with students’ perceptions of school climate—in particular, with the belief that school is safe and behavioral expectations are clear.
There’s much to unpack here, but to us, four takeaways merit attention.
First, better attendance measures could help students and families make better decisions.
On average, attending a high school with high attendance value-added increases a student’s attendance by twenty-eight class periods per year (roughly four school days, assuming a seven-period schedule). And there is suggestive evidence that high schools that do an above-average job of boosting attendance also boost postsecondary enrollment—even if the school’s test-based value-added is middling.
That is to say, just as there are high-test-value-added but low-achievement schools that help students succeed, there are also high-attendance-value-added but low-raw-attendance schools that do the same.
Many low-income parents face a choice between two schools with similar achievement and attendance patterns but with value-added scores that vary widely. Helping them to understand and act upon those distinctions is essential. We want parents to choose schools that are “beating the odds,” and value-added measures are one good way to identify these schools.
Second, better attendance measures have real-world implications for educators.
One reason to measure a school’s impact on attendance is to hold school staff accountable for what’s under their control. Another is to encourage behavior that makes it likelier students will come to school and learn more. Value-added measures are the best way to do both. Simply put, attendance value-added differentiates between high-poverty schools that deserve to be lauded and those that demand intervention.
Likewise, we need to worry about discouraging teachers and principals who choose to work in high-poverty schools and who may be getting unfairly penalized when, in reality, they are making progress in improving student attendance, even if the “status” measures remain unsatisfactory.
All that said, this study is the first to explore the feasibility of attendance value-added, and we need other researchers to test the measure empirically—with larger samples and in other locales—before it’s ready for prime time. What’s more, though the study undoubtedly demonstrates the promise of attendance value-added, it also underscores the strength and utility of test-based value-added measures—and why we’d be foolish to move away from them.
Third, more information is better.
The message from this study isn’t that schools should stop reporting raw attendance rates and chronic absenteeism. Instead, a both/and rather than either/or approach is the right choice. In fact, we’d go so far as to suggest that, just as some states have two grades for schools based on test scores (one for achievement and one for growth), we should consider having two measures of attendance (chronic absenteeism and “value-added” measures, once they’re vetted).
In general, status measures and growth measures are apples and oranges, so it doesn’t make sense to average or aggregate them. A single, summative grade is simple and useful only if it serves a clear purpose; a blended one serves no purpose well.
For instance, if the purpose is to decide whether to renew a school’s charter for the next five years, that decision should rest on growth-based test-score measures. But if the purpose is to understand whether students are ready for college and career, status-based measures are best. Each tells us something different. So it is with chronic-absenteeism rates and attendance value-added.
Finally, school safety matters when it comes to student attendance.
With wonky empirical studies such as this one, practitioners understandably ask, “What do these study findings imply for my work in real schools and classrooms?”
Although we hesitate to rely too heavily on correlational evidence, student-survey data consistently show that the strongest links to attendance value-added have to do with students’ sense of safety at school and their perception that the rules and behavioral expectations are clear.
In other words, staff who earnestly want to improve attendance rates should be mindful that safe schools and strong attendance go hand in hand.
—
Cultivating a positive school culture—one that prioritizes student engagement, safety, and high expectations—is a key piece of the attendance puzzle. But so is developing a novel way to isolate and measure a school’s impact on attendance so that the efforts (or lack thereof) of those who work there can be made visible.
To repeat, one in four American students was chronically absent in 2020, up from one out of six in 2017–18. Thankfully, buildings have now reopened, but it’s past time to get all our kids back in school.