Previous literature on school quality and teacher quality largely assumes that good schools and good teachers benefit all enrolled children equally, so a school’s “value added” is typically calculated as the average effect on its students. A recent study by Matthew Naven of Washington and Lee University asks whether schools can be more effective for some types of students than others. It’s a wonky read, but a worthy wrinkle (say that five times fast) that supplements our understanding of school quality.
The study uses individual-level data on California public school students spanning 2002 through 2013, linked to postsecondary records. Naven focuses on students in grades four through eleven, since no prior test score data exist for younger pupils, and on English language arts, since students are tracked into different, non-comparable math tests starting in grade seven. He first calculates standard value-added models, which control for prior test scores and student demographics, and then extends that analysis using the “drift” methodology honed by Raj Chetty, John Friedman, and Jonah Rockoff. The latter allows value added to change from year to year, which matters because schools experience things like staff turnover that can affect their quality in any given year.
The drift method also includes controls for three additional factors: the characteristics of the neighborhood in which a student lives (based on Census tract data from the American Community Survey); the prior test scores and college enrollment status of a student’s older sibling(s) living at the same address; and the effects of a child’s peers (including how their varying ability levels influence classmates). Naven uses this enhanced model to calculate value added for each school’s low- and high-socioeconomic-status (SES) students (meaning those living in a household with a median income of roughly $60,000 versus $100,000); minority and non-minority students (the latter are white and Asian); and male and female students.
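To make the basic mechanics concrete, here is a minimal, hypothetical sketch (not Naven’s actual code, model, or data) of a subgroup-specific value-added calculation: regress current test scores on prior scores within each subgroup, then average the residuals by school. The function name, the two-school setup, and the synthetic data are all illustrative assumptions.

```python
# Illustrative sketch only: a bare-bones value-added model estimated
# separately by subgroup. A school's value added for a subgroup is the
# average residual from a regression of current scores on prior scores.
import numpy as np

rng = np.random.default_rng(0)

def value_added_by_group(prior, current, school, group):
    """Return {(school, group): mean residual} from an OLS of
    current score on prior score, fit within each subgroup."""
    out = {}
    for g in np.unique(group):
        mask = group == g
        # OLS intercept and slope for this subgroup
        X = np.column_stack([np.ones(mask.sum()), prior[mask]])
        beta, *_ = np.linalg.lstsq(X, current[mask], rcond=None)
        resid = current[mask] - X @ beta
        for s in np.unique(school[mask]):
            out[(s, g)] = resid[school[mask] == s].mean()
    return out

# Synthetic data: two schools, two subgroups; school 1 boosts group "B" more.
n = 4000
prior = rng.normal(size=n)
school = rng.integers(0, 2, size=n)
group = rng.choice(["A", "B"], size=n)
boost = 0.3 * ((school == 1) & (group == "B"))
current = 0.8 * prior + boost + rng.normal(scale=0.5, size=n)

va = value_added_by_group(prior, current, school, group)
```

In this toy example, school 1’s value added for group “B” comes out higher than school 0’s, even though both schools look similar for group “A.” Naven’s drift model is far richer (neighborhood, sibling, and peer controls, plus year-to-year drift), but the core idea of comparing residual gains by school and subgroup is the same.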
What do the various calculations reveal? The standard model shows that, on average, schools provide less value (in terms of boosting test scores) to their low-SES, minority, and male students. But after controlling for neighborhood income, older siblings’ outcomes, and peer effects, the within-school differences shrink. Specifically, in terms of test scores, schools add similar value for low- and high-SES students and for minority and non-minority students, but slightly more value for female students (within-school differences account for 6 percent of the gender gap in test scores). For postsecondary enrollment, the pattern is the same but larger: Within-school differences in quality account for 22 percent of the gap in college enrollment between men and women.
Two takeaways. The first is for state-level folks who are still committed to getting value-added models right (we see you out there!). Don’t assume that a school has the same impact on every type of student it enrolls. Investigate. Depending on how your accountability system is structured, it may generate school growth measures that reflect the types of students enrolled more than true growth. That’s a real problem, and one that well-designed growth measures, better controls, and smart use of both can help avoid.
Second, schools seem to be better at increasing the chances that women will attend college. That’s good for our young women. But schools need to be equally good at boosting the chances of men, because, barring life under the proverbial rock, everyone knows that our young men need ample help on the education front.
SOURCE: Matthew Naven, “Within-School Heterogeneity in Quality: Do Schools Provide Equal Value Added to All Students?,” Annenberg Institute at Brown University (May 2023).