Chad Aldeman, Education Sector's new policy associate, digs into our high-achieving students study and thinks he's found a smoking gun. In particular, he has a beef with our looking at National Assessment of Educational Progress scores since 2000, instead of 2003:
But No Child wasn't signed into law until January 2002. The first NAEP tests measuring its true impacts could not have been until 2003, represented by the dotted line. When we make this correction, the claims in the report do not seem to stand up as well. The lower tenth of performers made gains throughout the chart, but especially from 2000 to 2003, where they gained 13 points to their high-achieving peers' six. Notably, this accounts for almost all the gain claimed in the Fordham report.
Chad, you're right, though using 2003 as the starting point--more than a year after the law's enactment--isn't perfect either. Tom Loveless, author of our NAEP study, discusses this issue at length (see pages 18-20):
Another important consideration concerning time intervals should also now be apparent from examining the NAEP data. Three grade-subject combinations exhibit a consistent pattern, a straightforward story of narrowing gaps during the NCLB era--mostly the result of sharp gains by low-achieving students from 2000 to 2002 or from 2000 to 2003. But whether these years belong in the NCLB era is debatable. The starting point matters. Using the NAEP test immediately before NCLB's passage as a baseline, as this study does, includes growth that may have nothing to do with NCLB. Selecting a later date--2003, for example--and arguing that the act's accountability provisions could not have been implemented before then would lead to the conclusion that growth was much less during the NCLB era (although still statistically significant, as shown in appendix A), and that the gaps between low and high achievers were essentially unchanged. But it would also omit influence that NCLB may have had on NAEP scores during the debate and early implementation of the legislation.

Neal and Schanzenbach provide an example. In the fall of 2001, "with the passage of NCLB looming on the horizon," the state of Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes. If such actions influenced educators' behavior and students' test scores, an "NCLB effect" may have been registered in 2002. The bottom line is that there is no clear boundary between pre- and post-NCLB periods and no perfect way to delineate the NCLB era using the NAEP test years. Critics and defenders of NCLB alike can (and do) exploit this ambiguity to their advantage. The fairest approach is to point out the large gains in NAEP scores in the period around 1998-2003 and acknowledge that NCLB's association with these gains is unknown.
P.S. Readers can see the long-term trends for high-achieving students by viewing the foreword to the report, where they are presented since the 1990s. As we wrote, "Looking at long-term NAEP trends for the top 10 percent, one spots a steady line inching ever-so-slowly upward from the early 1990s to today. Enter NCLB, and nothing changes. It's 'benign neglect' in pictures."