It’s become fashionable in ed-policy circles to decry “misNAEPery,” a term coined by Mathematica’s Steven Glazerman for the inappropriate use of data from the National Assessment of Educational Progress (NAEP). It’s an important caution to us pundits and journalists not to make definitive declarations about what might be causing national or state test scores to rise or fall when we have no real idea of the true cause or causes.
But like all good things, the crusade against misNAEPery can be taken to extremes. Yes, it’s hard to use NAEP results to establish causation (more on that below), but NAEP scores and trends still have great value and reveal much that’s important to know, so the influence they wield is generally justified. In short, just because NAEP scores can be misused doesn’t mean they are useless.
As we look ahead to April’s release of the 2017 NAEP reading and math results for states and the nation, here are five reasons why policymakers, analysts, and educators should pay close attention:
- Tests like NAEP measure skills that are important in their own right. To quote President George W. Bush, “Is our children learning?” is still an essential question for our education system. What gross domestic product is to economics and the employment rate is to labor policy, student achievement is to education. A fundamental mission of our schools is to teach young people to read, write, and compute; the whole country deserves to know how we’re doing. And while building these basic skills is not the only job of our K–12 system—or even of our elementary and middle schools, whose students’ performance is what we’ll see in the forthcoming fourth- and eighth-grade results—they are surely at the center of the enterprise.
- Test scores are also related to long-term student success. Test scores are not “just” test scores; they’re related to all manner of real-world outcomes that truly matter. Eric Hanushek and others have shown that countries that boost pupil achievement see stronger economic growth over time. Raj Chetty et al. have found that increased learning, as measured by tests, predicts earnings gains for students a decade later. When we see test scores rise significantly, as we did in the late 1990s and early 2000s, we are watching opportunities open up for young people nationwide. And when scores flatten or fall, it’s a warning that trouble lies ahead.
- NAEP is our most reliable measure of national progress, or the lack thereof, in education. Because NAEP is well designed, well respected, and zero-stakes for everybody, it is less susceptible to corruption than most other measures in education. Unlike high school graduation rates, NAEP’s standards can’t be manipulated. Unlike state tests, its assessments can’t be prepped for or gamed. Yes, a few states have been caught playing with exclusion rates for students with disabilities (we’re looking at you, Maryland), but by and large NAEP data can be trusted. In this day and age, that’s saying something. And while international tests like PISA and TIMSS give us critical insights about our national performance, NAEP is the only instrument that provides comparable state-by-state outcomes. Speaking of which…
- NAEP serves as an important check on state assessment results, helping to expose, and perhaps deter, the “honesty gap.” Way back in 2001, Congress mandated that every state participate in NAEP (in reading and math in grades four and eight) as an incentive for states to define “proficiency” on their own tests at appropriately challenging levels. Well, that clearly didn’t work, as our Proficiency Illusion study showed in 2007, but it eventually put pressure on states to develop common standards and more rigorous assessments. The “honesty gap,” the distance between the level of academic performance students need to succeed in the real world and what state tests say is good enough, has closed dramatically in recent years. But there’s an ever-present threat that it could open up again. NAEP’s audit function is essential.
- In the right hands, NAEP can be used to examine the impact of state policies on student achievement, helping us understand what works. It’s not true that NAEP can never be used to establish causal claims; it’s just hard. Well-respected studies from the likes of Thomas Dee and Brian Jacob, for example, have used NAEP data to show the effects of NCLB-style accountability on student outcomes. Reporters should be skeptical of most of the claims, boasts, and lamentations they will encounter on release day, but that doesn’t mean they should discount methodologically rigorous studies by reputable scholars. NAEP can also be used to put state testing data on a common scale, enabling cross-state comparisons, as in recent studies from Sean Reardon and others.
MisNAEPery is a crime. So is NoNAEPery. Bring on the results!