The new study from the Harvard Center for Education Policy Research was clearly a herculean effort, with data collection across six states, surveys of thousands of teachers, and the participation of some of the nation’s leading researchers. And its conclusion is an important if disappointing one to “curriculum reform” advocates like me: Overall, it found no variation in student achievement gains associated with the choice of elementary school math textbooks.
As the authors write, perhaps we shouldn’t be so surprised by the null findings. After all, most teachers reported receiving just a few days of professional development on the math programs they were assigned, and said they were supplementing those programs with materials they found online. Plus, it’s possible that math textbooks have become more similar in the post–Common Core era. Given all that, the results make sense.
However, it’s important to understand that the study could not, and did not, address some of the most important questions related to the potential power of curriculum reform. Namely:
- Does the adoption and effective implementation of a strong curriculum lead teachers, and/or schools, to become more effective over time?
- Does the impact vary by type of teacher and/or type of school? For example, is curriculum choice and implementation more important for lower-performing teachers and schools than for higher-performing ones?
These questions are important because they address the hypothesis that many of us curriculum reformers would propose. It’s not that we think curriculum choice is the most important driver of student achievement; the research clearly shows that teacher effectiveness deserves that honor, at least among the things that schools can control. But, as David Steiner, Jacqueline Magee, and Ben Jensen recently wrote in a literature review on the topic, “The research is increasingly clear that quality curriculum matters to student achievement.” We do suspect that a great curriculum that’s aligned with standards, designed with teachers, and usable in real classrooms can indeed help teachers become more effective. At least if—and this is a big “if”—those teachers receive lots of training, support, coaching, and feedback on how to implement the program with their own students. Most likely, we argue, such an approach would help the newest and weakest teachers the most, thus raising the floor and shrinking the gap between the most and least effective teachers.
Alas, there was no way this study could address those questions directly because doing so would require more years of data than the scholars had access to (plus data on teachers they did not have), more schools that changed textbooks during the study period, information about when those changes took place, and lots more insight into the implementation effort.
More years of data would have allowed the researchers to calculate a reliable measure of school effectiveness before and after a change in curriculum. (Same for teacher effectiveness, if they could get those data, too.) Ideally, that would mean at least three years on either side of the change, or six years total. Eventually, we may have that long of a trend line in the post–Common Core assessment era, but we don’t have it yet.
But then we would also have to find a whole bunch of schools that switched textbooks. The researchers had data about textbook switches only in California. It’s possible that more schools (and districts) will make changes as time goes on, as adoption cycles come around or new books are published. If so, this part should become more feasible in the future, too.
And we would need much better information about which textbooks schools are using—information that states could start collecting regularly, as California does.
There’s no doubt that the all-star roster of researchers did the most with the data they had. But we will need even better data to answer the most important questions—and for those we will have to wait.