Ever since the federal government mandated annual standardized testing two decades ago, test preparation (that is, instructional time spent preparing students for tests) has been hotly debated. Critics argue that it degrades teaching and learning by narrowing instruction to rote skills and procedures at the expense of more complex content, while proponents contend that test prep can improve instruction if the tests themselves, and the academic standards they assess, are rigorous and high-quality.
Oddly, there’s little research to substantiate the claims of either side. So let us welcome a recent study on these issues by David Blazar of the University of Maryland and Cynthia Pollard of Harvard’s Graduate School of Education.
Using data previously collected by the National Center for Teacher Effectiveness (NCTE), Blazar and Pollard analyzed two separate measures of test preparation to answer two questions. First, does test prep lead to lower-quality instruction? And, second, does it make any difference if teachers are teaching to a more cognitively demanding test?
The researchers used teacher surveys as well as transcripts from videotaped math lessons to determine how frequently teachers engage in test prep, and which types they use. The survey yielded self-reported data on how often teachers engaged in five common test prep activities, including focusing instruction on students just below a given performance level on a state test and using standardized test items in their instruction. Blazar and Pollard also scoured transcripts of videotaped lessons to determine whether test preparation was a major focus of instruction. They then compared the video and survey findings to teachers’ observation scores, which rated instruction on the cognitive demand of the math content provided to students, the quality of interactions with students, and the accuracy of the content delivered.
After controlling for the background characteristics of teachers, students, schools, and districts, the researchers found that “test preparation is a significant and negative predictor” of instructional quality—though they stress that its negative effect is fairly modest and likely overstated in the current discourse. Somewhat surprisingly, when they compared the amount of test prep to the rigor of the respective state tests, they also found “little support for the moderating role of test rigor.”
The authors flag several major cautions. Most importantly, the data come from a relatively small, non-nationally representative sample of fourth- and fifth-grade mathematics teachers in Massachusetts, Georgia, and Washington, D.C., so it’s uncertain whether the findings can be generalized to other grades, content areas, and parts of the country. Data were also collected from 2010 to 2013, prior to the administration of Common Core–aligned assessments such as PARCC and Smarter Balanced, so it’s unclear how test prep for “next-generation” assessments (which Fordham reviewed in 2015 and found to be generally high-quality) might affect instructional quality.
Given all of the criticism and fearmongering around test prep in recent decades, surprisingly little research has investigated the relationship between test preparation and instructional quality. This study sheds a bit more light on this heated issue, and underscores the many complexities surrounding instruction, assessment, and student learning. But we clearly need more light to illuminate this tunnel.
SOURCE: David Blazar and Cynthia Pollard, “Does Test Preparation Mean Low-Quality Instruction?,” Educational Researcher (October 2017).