The National Assessment of Educational Progress (NAEP) is indisputably the country’s most valuable tool for tracking student achievement over time, and it’s become ever more valuable as it has added subjects (nine of them now), boosted its frequency (at least in reading and math), reported results at the state level (and, for twenty pioneering cities, at the local level, too), and persevered with a trio of “achievement levels” (basic, proficient, advanced) that today are the closest thing we have to national academic standards.
But NAEP only reports how our kids (and subgroups of kids, political jurisdictions, etc.) are doing. It doesn’t explain why. And in an era when achievement is barely ticking upward, despite America’s forceful efforts to reform the system so that it will soar, it’s no surprise that NAEP’s governing board, vigorously chaired nowadays by former Massachusetts Education Commissioner (and Fordham trustee) David Driscoll, wants this well-regarded assessment apparatus to become more useful in diagnosing what is and isn’t working and why.
One route to enhanced utility is to deepen and widen the “background questions” that NAEP asks of students, teachers, and principals in conjunction with the assessment. There are scads of these, varied (to some extent) by subject and grade level. (You can find the entire set of questionnaires, along with some examples [eighth graders, reading], at the NAEP website.) Today, however, the data arising from such questions are mainly used to probe deeper into the test results by revealing interesting relationships and tantalizing correlations. Turn, for example, to the new science report card and look at pages ten and eleven for insights gleaned from teacher and student questionnaires.
Wanting to make NAEP more useful at the policy level, the Governing Board assembled a six-member panel to examine the uses of background questions. The panel, chaired by former Deputy Education Secretary Marshall (Mike) Smith, submitted its forty-two-page report to NAGB on March 2. (If you’re pressed for time, an executive summary is available, too.)
The panel’s main message is that NAGB should make its background questions more extensive and then use them more creatively to interpret the achievement data:
NAEP should restore and improve upon its earlier practice of making much greater use of background data, but do so in a more sound and research-supported way. With proper attention, these data could provide rich insights into a wide range of important issues about the nature and quality of American primary and secondary education including:
- Describing the resources available to support learning (opportunity-to-learn) for students with differing home backgrounds and over time.
- Tracking progress in implementing key instructional, curricular, and technological changes and educational policy initiatives, such as the Common Core standards.
- Monitoring student motivation and out-of-school learning as research-based factors affecting student achievement.
- Benchmarking high-performing states and urban districts and those with high achievement growth to identify factors that differentiate high-performers from lower-performers on NAEP.
This domestic effort would parallel the extensive reporting of background variables in PISA (Program for International Student Assessment) and TIMSS (Trends in International Mathematics and Science Study) that have become starting points for U.S. international benchmarking analyses to describe the characteristics of high-performing and low-performing education systems.
NAGB recently solicited feedback on these recommendations, and what has come back is mixed, even contentious. Former Commissioner of Education Statistics Mark Schneider, for example, drawing on both his professional judgment and at least one painful bout during his time in government, warns: “They will make statements that will inevitably push the boundaries, and you will end up with questionable reports.”
Former IES Director Russ Whitehurst and current Commissioner of Education Statistics Jack Buckley have also expressed misgivings about the thrust of the panel’s report.
As have I. While the panel’s advice includes meritorious suggestions for improving the quality, validity, and consistency of NAEP background questions, for NAEP or NAGB to wander into student motivation and the efficacy of touchy policy innovations is to tread on very dangerous ground.
Over the past quarter century, every time that NAEP has strayed in the direction of “explaining” or “evaluating” rather than simply reporting, it has gotten into deep doo-doo, and this will surely happen with even greater force in the “tea party” era. NAEP is a thermometer, not a diagnostician. The temperature chart needs to be accurate, of course, and other factors that may influence it need to be described with as much precision as can be mustered (e.g., race, gender, socio-economic status). But look what happened in 2006 when Schneider unveiled an “evaluation” of charter-school performance using NAEP data. Big mistake, as he immediately recognized. (He notes that this was a project he inherited, not one he initiated!)
Today, with things so politicized and issues like the Common Core so controversial, the same sort of backlash would occur with even greater force and damage.
There will also be challenges involving privacy and objections to NAEP poking into issues that are “none of the government’s business.”
Indeed, it was less than a decade ago that NAGB retreated from what had been a more ambitious set of background questions. And there was a reason for this pullback. Particularly as the assessment’s administration became more frequent, the ever-lengthening list of background questions was becoming burdensome. It was also becoming intrusive, and people grumped to Congress that NAEP was invading their privacy. As a consequence, the No Child Left Behind Act, while adding a great deal to NAEP’s responsibilities and role in monitoring achievement results, also barred it from asking about “personal or family beliefs and attitudes.” Congress further insisted that all questions be “secular, neutral, and nonideological.”
It’s hard to picture the current Congress welcoming a more aggressive posture by NAEP on background questions, and harder still to see Congressmen applauding the use of “neutral” NAEP data to track and evaluate the impact of such touchy “policy initiatives” as the Common Core standards. Indeed, I’m pretty sure NAGB would get its hand slapped. What’s more, that kind of extra work costs money, and the administration’s budget request for NAEP for FY 2013 is $5 million less than was sought (and appropriated) for 2012 ($129.6 million). Thus the thermometer may already be in some jeopardy. What a mistake it would be to risk turning it into a completely different kind of instrument.
The National Assessment’s one crucial role over the next decade is to be a trustworthy thermometer. Emulating PISA and the OECD (with their dubious, controversial, and ill-supported policy pronouncements) would gravely jeopardize the integrity, respect, and acceptance of NAEP as “the nation’s report card”—neutral, trustworthy, nonpartisan, etc. Someday, perhaps, it can be more daring. Today, however, the country needs it to keep playing its present role.