OK, everyone, back away from the ledge. With the release of NAEP data this week, the predictable deluge of commentary is well underway—mainly of the gnashing-of-teeth, rending-of-garments variety. NAEP may be the nation’s report card, but it is also the nation’s Rorschach test. Perception is in the eye of the beholder, and many see darkness and misery: “A Decade of Academic Progress Halts,” says the Los Angeles Times. “Student Scores in Reading and Math Drop,” says U.S. News & World Report.
One of the frequent criticisms of NAEP punditry is “misNAEPery”—the sin of attributing score fluctuations to particular policies, for example. One particularly virulent form of this fallacy—failure to account for demographic changes in states over time—has become slightly less tenable this week, courtesy of an illuminating analysis by Matthew Chingos of the Urban Institute.
Not every state is the same. States with higher concentrations of black and Hispanic children, low-income families, and English language learners (ELLs) have a harder time rising to the top because they have more students mired at the bottom. But when you adjust for these demographic realities, a different NAEP emerges. There’s Massachusetts, still sitting pretty atop the tables. But Texas and Florida, states with higher concentrations of traditionally low-performing subgroups, suddenly look like standouts. Well-regarded high-fliers like New Hampshire, Minnesota, and Vermont begin to look fairly pedestrian, while curve-busting states like Mississippi, Louisiana, and New Mexico—though still low in absolute terms—strongly outperform what you’d expect from states with their particular demographic mix.
Perhaps the bigger piece of news, and one that should give cheer in the face of the dismal NAEP results, is this: Between 2003 and 2013, NAEP scores in all fifty states increased more than would be expected based on demographic shifts over that period. In other words, with low-income kids and other traditionally low-performing groups making up an increasing share of the country’s K–12 population, we should have expected scores to go down, all other things being equal. That they rose instead is surprising and genuinely good news.
Although the adjusted figures allow for more meaningful comparisons between states, “they are still unlikely to reveal the causal impact of particular education policies,” Chingos warns. “There remain a number of unmeasured student characteristics and differences in non-education policies that could shed further light on these data,” he observes.
The report was published in anticipation of this week’s NAEP release; Chingos has rerun some of the data on the Urban Institute’s website. In all, it’s a fascinating contribution that makes the sin of misNAEPery slightly harder to commit. Or abide.
Now that we’ve seen NAEP through this new and pertinent set of lenses, I’m going to wait before opting to defenestrate. I suggest you do the same.
SOURCE: Matthew Chingos, “Breaking the Curve: Promises and Pitfalls of Using NAEP Data to Assess the State Role in Student Achievement,” Urban Institute (October 2015).