The Nation's Report Card: Reading 2009
National Center for Education Statistics
March 2010
If you get a distinct tinge of déjà vu reading the 2009 NAEP reading report card, you're not alone. Results of this most recent administration of the test look a whole lot like those we saw in 2007. Compared to two years ago, fourth-grade reading scores are identical (221) and eighth-grade reading scores improved by just one (though statistically significant) point (from 263 to 264). What's more, very few of the subgroups tested saw any significant score improvements: Neither the fourth- nor eighth-grade results show a statistically significant change in the black-white, Hispanic-white, or female-male achievement gaps from 2007 to 2009. In fact, since 1992, only the black-white achievement gap in grade 4 and the female-male achievement gap in grade 8 have narrowed. Only one state, Kentucky, increased scores in both fourth and eighth grade between 2007 and 2009; thirty-eight states saw no significant changes in either grade. A few other highlights: private and Catholic schools scored on average fifteen points higher than public schools in grade 4; Massachusetts is once again the top scorer in both fourth and eighth grade, with New Jersey a close second; and Washington, D.C. is last on the list, but at least can boast a five-point jump for fourth-graders. See the results here.
Tom Loveless
Brookings Institution, Brown Center on Education Policy
March 2010
Although it didn't appear until one-third of the way into 2010, the Brown Center's 2009 report is as informative and deserving of attention as its predecessors. Part I offers a very sophisticated look at NAEP scores--two kinds of NAEP scores, actually--and explains why the "main NAEP" shows math performance rising faster than other important indicators. But it also attests to real gains over time, at least in math, and to real narrowing of the gap between low- and high-achieving students. (The latter discussion updates a study that Loveless recently completed for Fordham.) Part II is, frankly, depressing but also revealing. Using California school-achievement data over a number of years, it reveals, in effect, that bad schools stay that way. They don't turn around. (A forthcoming Fordham study will show something very similar in multiple states.) Part III explores a variety of differences between "conversion" charter schools and start-up schools (again using California data) and tentatively suggests that--sorry, Secretary Duncan--converting a low-performing district school into a charter is no sure-fire way of producing better results. Very different issues but very much worth your while. Find it here.
Naomi and Victor Chudowsky
Center on Education Policy
March 2010
This new report from CEP brings good news and bad. The good: According to state assessments, there is no consistent gender gap between boys and girls in math in elementary, middle, or high school. The bad: Boys continue to lag behind girls in reading at all three levels. The report analyzes state-level 2007-08 test data in all 50 states for grades 4, 8, and high school (grade 10 or 11, depending on the year tested) and then compares those scores to 2002. In 2007-08, roughly even percentages of boys and girls scored proficient in math, and no state had a gap larger than ten percentage points. In reading, on the other hand, boys clock in behind girls at every grade level and in every state with measurable data (forty-five of the fifty qualified), with some gaps as large as or larger than ten percentage points. Good news for the Buckeye State--Ohio's largest gap was six percentage points, in 10th-grade reading.
Unfortunately, this doesn’t tell us much, because the proficiency bar is so low in some, nay many, states that the higher-achieving group is already mostly above the bar. Thus, any improvements by the lower-achieving group will “close” the gap. Furthermore, gap comparisons don’t tell how well students are actually learning. A better metric is to look at average scores, which the report does briefly. It finds that gaps have actually increased in some states; in other words, more boys are reaching "proficiency," while their female classmates are outpacing them at higher and higher levels. An even better way to calculate these comparisons is with NAEP data. Luckily, the NAEP reading scores were just released, allowing us to do just that. The results are somewhat different. Between 2002 and 2009, the fourth-grade gender gap in reading remained the same--because average scores for both groups went up. And though the gap also stayed steady in eighth grade over the same period, it has actually narrowed since 1992. You can read CEP’s report here.
Pioneer Institute
Richard Cross, Theodor Rebarber, Kathleen Madigan, Bruce Bean
March 2010
This second Pioneer Institute report on student achievement gaps in Massachusetts (here's the first) analyzes gaps for black, Hispanic, and white students in several districts. It identifies those communities that are gap-reducing and those that are gap-widening. The authors compare reading and math scores of minority students at the district level to white students statewide, and those of white students at the district level to their white peers statewide. (They reason that if the district is low-performing, non-minority students also probably get a "deficient education" so it doesn't make sense to use them as a control.) Then they control for each town's average family income and educational attainment--the non-school factors that often bear the blame for poor students' low achievement--to predict the achievement gap in that district. (They call these "predicted gaps.") In lay language, they found a non-trivial number of districts with gaps smaller than predicted--i.e., performing better than districts with similar demographic challenges. The conclusion is, as the title indicates, demography isn't destiny. Hopefully cool methodological maneuvering like this can point us to more districts making headway in the achievement gap battle. Read it here.
What do two mediocre charter schools on opposite coasts have in common? They’re both slated to close come June on account of low enrollment, financial concerns, and subpar test scores. Justice Charter High, a Los Angeles Green Dot campus, and New Covenant School in Albany have been on thin ice in recent years, but they’ve both made real gains in and out of the classroom. In an area plagued by violence, Justice High provided a safe alternative, but test scores were only so-so. New Covenant School missed its 2009-10 reading proficiency target of 75 percent by seven points--still twenty points higher than the 48 percent proficiency of 2008-09. Closing a school is never an easy decision, and when the school is trying with all its might to turn itself around, the decision is even harder. But at the end of the day, an A for effort is not enough; these schools need to get an A for results too, and we applaud these authorizers for being realistic about a school’s ability to turn itself around.
“Green Dot to close Justice Charter High School,” by Howard Blume, Los Angeles Times, March 22, 2010
“Despite Gains, Charter School Is Told to Close,” by Trip Gabriel, New York Times, March 18, 2010
If you’ve been wondering how the just-passed health care reform bill will affect your own coverage, consider the coverage of our nation’s teachers. Many enjoy the incredibly cushy Cadillac kind, courtesy of indefatigable unions and generous school boards. Those plans will be subject to an “excise tax” that policymakers hope will “bend the cost curve” (and help pay for the reform plan). Unfortunately, the reconciliation version of the bill pushes the enactment of that tax off until 2018, keeping our education system’s unsustainable benefits structure in place for another eight years. That’s a shame, because an immediate tax might have encouraged unions to push for less generous healthcare in return for higher salaries--a package that’s likely to be much more attractive to the hot-shot young teachers our system needs to recruit.
“Health-Care Reform: Implications for Teachers,” by Stephen Sawchuk, Teacher Beat, a blog of Education Week, March 23, 2010
We’re one step closer. “Common” standards for U.S. schools are knocking at the door. They won’t likely make it all the way in but even a partial entry is looking like it might do some good.
Two weeks ago, the National Governors Association and Council of Chief State School Officers released drafts of new “Common Core” academic standards in English language arts (ELA) and math for grades K-12. Already the object of much interest--and plenty of controversy--these are standards that, once revised and finalized, will be candidates for adoption by individual states in place of those they’re now using.
We’ll admit to seeing considerable merit in national standards done right. Done wrong, they would do more harm than good. So the proper question to ask at this juncture isn’t whether you’re for or against national standards in theory. It’s whether what’s been placed before us is worth taking seriously.
Until April 2, the public has an opportunity to comment on this draft (you can do so here). Earlier this week, the Fordham Institute released detailed comments, prepared by experts in these subjects whose judgment we trust. Our intent is neither to praise nor to bury the “Common Core” draft. It’s to give constructive feedback during a comment period that is intended to yield later improvements. And yes, we’ll return--we’re Gadflies, after all--to evaluate the final product upon release to see whether it sets the world-class standards we need. We’ll also appraise states’ current standards to see how they compare. Not every state will want to adopt the Common Core, but they ought to make those decisions on the basis of the relative strength of those standards versus their own--and we aim to supply them with some metrics.
Readers may recall that in October we published expert evaluations of the Common Core end-of-high-school draft then in circulation, as well as of other influential national and international standards and frameworks (NAEP, TIMSS, PISA). Our lead reviewers, W. Stephen Wilson (math) and Sheila Byrd Carmichael (ELA), concluded that the end-of-high-school draft was pretty good--they conferred “B” grades in both subjects--but they also offered numerous suggestions for strengthening it. This time around, they’ve examined the full K-12 draft, culminating (again) in end-of-high-school expectations. (This time Dr. Wilson was joined by a fellow math expert, Dr. Gabrielle Martino.)
On the math side, they found rigorous expectations that set forth most of the essential content that students in grades K-12 must master. While nips, tucks, and a few additions are needed--particularly at the high school level--the standards “embod[y] internationally-competitive expectations for students in mathematics.” They earned an impressive A-.
On the ELA side, the Common Core draft is also strong, but needs a few more adjustments. Some key content is missing or too shallow and some of the standards aren’t precise enough to provide essential guidance to educators, test builders, etc. “Despite [these] fixable flaws,” Ms. Carmichael concludes, “the standards do an admirable job of providing a roadmap for students to become ‘college- and career-ready.’” As written, the standards earn a solid B.
Bottom line: There is much to applaud in these drafts, but they can and should be even better.
But don’t take our word for it. See for yourself. New standards for U.S. schools are too important a project to entrust to experts alone. That’s the point of a public-comment period. We hope very much that parents, educators, employers, public officials, scholars, and so on will put in the time and effort needed to comb these drafts and offer their own feedback. You, too.
But please keep three questions in mind as you proceed:
First, recall that we’re dealing here only with math and ELA. Educated children also need high-quality science, history, art, and much more. Who is making that happen in your state or community?
Second, as the Common Core drafters acknowledge, without strong curricula (and, we would add, effective instruction and quality assessments), standards only describe the destination we’d like to reach--they don’t get us there. In the case of the ELA standards in particular, states (and districts, schools, and educators) bear a special responsibility to supplement these skills-centered standards with a solid, content-rich curriculum that is eventually aligned to rigorous assessments. Who will ensure that that happens in your schools?
Third, perfection is the wrong criterion by which to judge these standards. Perfect standards do not exist. The right questions to ask as you consider our reviews, other commentators’ opinions, and the draft standards themselves are: Are they significantly better than what we’re using today? And how could they be improved?
Several states have already signaled that they will opt to keep their own standards rather than adopting the Common Core. So be it. A handful of states have done standards right. Most have not. And some places with strong standards have done a miserable job of implementing them.
The Common Core draft is pretty strong. We hope it gets stronger in the weeks ahead. Even then, however, it’s only the first step of a journey that is fraught with difficulties. States that choose to opt out should do so because they prefer a different destination, not because they’re afraid of challenge.
Enough is enough. At least that’s what the tiny school district of the “no-stoplight” town of Congress, AZ is saying to four women who have bombarded it with over 100 public records requests in eight years. The purpose of this paperwork? “I’m just an average citizen wanting to make sure that the money we’re paying is being used appropriately,” explains alleged FOIA-abuser Jean Warren. (That must explain her request for the serial numbers on both old and new school air conditioners.) Of course, this might not be such a big deal for a large school district, accustomed as it is to garden-variety paper-trail bureaucracy. But for this one-school district (total enrollment 112), appeasing the requesters would require hiring a full-time clerk to process the load. The district has already spent thousands placating the pesky quartet. What’s the alternative? Sue them, which is what it’s doing. While we’re sympathetic to the distress this teensy district is facing, it clearly doesn’t have a leg to stand on. FOIA leaves little room for interpretation. But it’s situations like these that remind us that some pieces of legislation beg for flexibility.
“Tiny school district sues citizens who seek info,” by Pauline Arrillaga, The Associated Press, March 18, 2010