While people may disagree over the importance of standards in driving student achievement, virtually nobody disagrees that selecting the right curriculum—one that artfully balances content and rigor and that gives teachers a clear instructional roadmap—is critical to student learning. In fact, research released in 2009 by Russ Whitehurst found that the most effective curricula had dramatically larger effect sizes than just about any other reform strategy.
Yet, there is a dearth of good, independent research that can help state, local, and school-level leaders determine which programs are most effective and which are most likely to meet the needs of the students they serve. That is why the results from a just-released report, published in Educational Evaluation and Policy Analysis, deserve some attention.
The study, “Large-Scale Evaluations of Curricular Effectiveness: The Case of Elementary Mathematics in Indiana,” focused on district-level curriculum adoption in the Hoosier state, mostly because Indiana is one of very few states that collects and tracks information about district-level curriculum adoption. This information allowed the researchers to investigate the relationship between curriculum and student achievement (as measured by the state’s ISTEP test).
Of course, the authors acknowledge that there are several limitations of the study, and the results don’t point to a clear “winner” or “loser” when it comes to elementary math curriculum. But, there are, in my opinion, three important take-aways.
1. States can exert enormous influence over curriculum decisions.
Every six years, the Hoosier State develops a list of “approved” programs and distributes that list to districts. Then district leaders work with teachers, parents, and community members to make curriculum adoption decisions. They can either choose a program from the approved list, apply to use an “alternative” curriculum that isn’t on the state list, or apply to continue using the textbooks and materials that were selected during the previous review cycle.
It is rare for a district to deviate from the state-approved curriculum list. In fact, during the 1998 adoption cycle, “over 98% of the districts in Indiana adopted math curricula from the approved list.” This means that, for better or worse, districts take direction from state departments of education seriously, and they are unlikely to eschew state recommendations.
2. How content is taught does not determine curricular effectiveness.
In math in particular, there is fierce debate between “traditionalists” (who favor teacher-directed learning) and “constructivists” (who believe that students learn better when lessons are focused on conceptual learning and “discovery” methods). Similar pedagogical debates rage on in nearly every core content area.
One interesting, if unsurprising, take-away from this study is that pedagogy—how the material is presented—is less important than other factors, such as the content covered or curricular coherence. For example, of the three programs studied, both Saxon and Silver-Burdett Ginn (SBG) focused on more traditional, teacher-directed methods, yet SBG “meaningfully outperformed Saxon.” It’s impossible to know precisely what accounted for the difference, but SBG presents content differently (related content is presented together in units, rather than being “spiraled” throughout the year) and covers more rigorous content (in second grade, Saxon only teaches addition and subtraction with two-digit numbers, whereas students following SBG are taught addition and subtraction up to three-digit numbers).
3. Context matters enormously.
Finally, this study supports the notion that, even when it comes to selecting effective curriculum, there is no one-size-fits-all solution. In this report, for example, the authors found that Scott Foresman-Addison Wesley (SFAW), a “reform” math program that focuses on less-traditional pedagogy and teaching methods, outperformed Saxon. Yet, Whitehurst’s 2009 research came to the opposite conclusion. In that study, end-of-year math achievement scores were 0.24 standard deviations higher for students who were taught using Saxon than for those who were taught using SFAW. (In a similar study, conducted in 2010, Saxon also outperformed SFAW.)
Why the differences? The authors suggest one possibility. They note that the 2010 study analyzed data from schools where students were significantly more disadvantaged than the average Indiana student. Perhaps Saxon better serves the needs of our most struggling students, and a different program, like SFAW, would better meet the needs of more advantaged students.
Alternatively, it’s possible (perhaps even likely?) that the SFAW program is better aligned to the ISTEP assessment and Indiana standards than Saxon in terms of both content and rigor. And, since the alignment among curriculum, instruction, and assessment shapes how effective a program appears, any misalignment of content or rigor would affect the results. Either way, the bottom line is that context matters. Understanding the effectiveness of a particular program is important, but it’s also critical to understand whether the program is aligned to the content and rigor of the standards and of the assessments that will be used to measure student mastery (and teacher effectiveness).
Taken together, what does this mean for state-level Common Core implementation and for curriculum selection in particular? As FDR once said, “Great power involves great responsibility.” States are in a position to heavily influence curriculum selection, giving them the opportunity to affect the way the CCSS are implemented in nearly every public school classroom in the state. But they should take that responsibility seriously and work to focus far more time and attention on helping educators make the right curriculum choices for the students they serve.