Direct Instruction: The Rodney Dangerfield of curriculum
By Robert Pondiscio
Did you hear the one about a curriculum with fifty years of research that actually demonstrates its effectiveness? There’s a new meta-analysis in the peer-reviewed journal Review of Educational Research that looks at over five hundred articles, dissertations, and research studies and documents a half-century of “strong positive results” for a curriculum regardless of school, setting, student poverty status, race, and ethnicity, and across subjects and grades.
Ready for the punchline? That curriculum is called “Direct Instruction.”
Hey, wait. Where’s everybody going? I’m telling you, Direct Instruction is the Rodney Dangerfield of education. It gets no respect.
I know what you’re thinking. “Direct Instruction? DISTAR, Corrective Reading and Reading Mastery? Basal programs? Scripted curriculum? That stuff’s been around since the Earth cooled. It’s not just old school, it’s the oldest school. Who cares about ‘DI’ when there’s so much cool, cutting-edge, and disruptive stuff going on in education? This is the age of ed tech, personalized learning, and competency-based progressions. The future is here and it’s OER, social media integration, virtual reality, and makerspaces. Direct Instruction?! You gotta be kidding me. See you at SXSW EDU!”
Hold on and look again. The central assumption of DI is that every child can learn and any teacher can succeed with an effective curriculum and solid instructional delivery techniques. When a student does not learn, it doesn’t mean something is wrong with the student, DI disciples insist. It means something is wrong with the instruction. “Thus, the theory underlying DI lies in opposition to developmental approaches, constructivism, and theories of learning styles,” write Jean Stockard and Timothy W. Wood of the University of Oregon, lead authors of the new meta-analysis, “which assume that students’ ability to learn depends on their developmental stage, their ability to construct or derive understandings, or their own unique approach to learning.”
Ah…there’s your trouble, DI devotees.
Wait. There’s more. Direct Instruction is mastery-based and systematic. “If you fail to bring students to mastery in lessons 1-60, they’re going to be in trouble on lesson 70,” says education professor Marcy Stein of the University of Washington Tacoma. That precision and pacing—the instructional design work—is what hardcore fans love and even fetishize about DI. But those enthusiasts are outliers in education, for Direct Instruction, however effective, goes against the grain of generations of teachers trained and flattered into the certain belief that they alone know what’s best for their students. The curriculum will never be written—holds the conventional wisdom—that can override a teacher’s judgment about what every child needs. Fifty years of research? As another comedian, Richard Pryor, might have asked, “Who you gonna believe? Me? Or your lying meta-analysis?”
For a significant subset of teachers (actually a very large subset), the mere thought of a set curriculum imposes an intolerable burden on their autonomy and creativity. Yes, DI lessons are scripted, specifying “the exact wording and the examples the teacher is to present for each exercise in the program, which ensures that the program will communicate one and only one possible interpretation of the skill being taught,” according to the National Institute for Direct Instruction (NIFDI), an advocacy organization based in Oregon. This, as much as anything, probably explains how DI can be both highly effective and the perpetual wallflower at the curriculum dance hall.
Direct Instruction advocates are “not naïve enough to think that to be successful all teachers need to do is read the DI script,” Stein insists. It’s not the “teacher-proof” program many critics allege. Proper implementation, especially for struggling students, involves not only delivering the curriculum well but constantly monitoring students and responding to their confusion in a timely and effective manner. “Teachers often report that teaching this scripted program is much more difficult than teaching one that is less prescribed,” Stein says. There’s often confusion between “small d” direct instruction—shorthand for any teacher-driven, explicit instruction—and the various curriculum products associated with the work of Siegfried “Zig” Engelmann across subjects, but most famously in reading. The meta-study focuses on Engelmann’s “big D” Direct Instruction across reading, math, spelling, language, and other academic subjects.
Direct Instruction began at the University of Illinois in the mid-1960s as a preschool program for children from deeply impoverished homes. Those who swear by it frequently invoke the results of Project Follow Through, the largest and most expensive educational research study ever mounted by the federal government, which compared the outcomes of over twenty different educational interventions in high-poverty communities over a multiyear period. “External evaluators gathered and analyzed outcome data using a variety of comparison groups and analysis techniques,” Stockard and Wood note. “The final results indicated that DI was the only intervention that had significantly positive impacts on all of the outcome measures.” Their new meta-analysis brings the DI story up to the present, but the meta-narrative hasn’t changed.
Hey, can you hear me? Is this thing on?!
Rote or scripted, sequenced or not, loved or hated, shouldn’t half a century and hundreds of studies be enough to earn DI a little respect if education is so evidence-based? “We give lip service to evidence,” explains Doug Carnine, Professor Emeritus at the University of Oregon and a NIFDI board member. “We say ‘evidence-based’ because we have to fit with the new cultural norm, but it’s not a core value. It's tradition and ideology that prevails in education.”
Happily, there’s a burgeoning appreciation for the role of curriculum in improving teacher effectiveness and student outcomes, and increased sophistication around curriculum adoptions. But Carnine’s point is sound. There’s a long road ahead before education at large is a truly evidence-based field.
Stockard and Wood and their co-authors conclude that “The findings of this meta-analysis reinforce the conclusions of earlier meta-analyses and reviews of the literature regarding DI. Yet, despite the very large body of research supporting its effectiveness, DI has not been widely embraced or implemented.”
True that. Everybody hates Direct Instruction. All it does is work.
And that’s no joke.
The single best thing that could happen to American education in the next few years would be for the National Assessment of Educational Progress (NAEP) to begin regularly reporting state-by-state results at the twelfth grade level.
That this isn’t happening today is a lamentable omission, albeit one with multiple causes. But it’s fixable. All it requires is determination by the National Assessment Governing Board (NAGB) to make this change, some more contracting and sampling by the National Center for Education Statistics (NCES), and either a smallish additional appropriation or some repurposing of present NAEP budgets.
Way back in the late Middle Ages (i.e., 1988), Congress authorized NAEP to begin gathering and reporting state-level data to states that wanted it. This was a direct response to frustration on the part of governors in the aftermath of A Nation at Risk that they could not validly compare their own states’ academic performance to that of the nation or other states. Education Secretary Terrel Bell had responded with the famous “Wall Chart” of the mid-1980s, but its comparisons were based on SAT scores and other measures that were neither representative nor helpful for gauging achievement in the elementary and middle grades.
The Council of Chief State School Officers finally withdrew its ancient hostility to interstate comparisons, the Southern Regional Education Board piloted some comparisons among its member states, and the Alexander-James commission (chaired by then-Tennessee governor Lamar Alexander) recommended that NAEP take this on. In short order, an old-fashioned bipartisan agreement emerged, this time between the Reagan Administration and Senator Ted Kennedy.
Initially dubbed the “Trial State Assessment,” it was voluntary, it was limited to grades four and eight, and states that wanted in had to share the cost. But participation grew swiftly. By 1996, for example, forty-seven states and territories took part in the fourth grade math assessment and forty-four opted into eighth grade math. Already, more than thirty jurisdictions could see how their 1996 results compared with their 1992 results in math—and much the same thing happened in reading.
Then came NCLB in 2001 and suddenly all states were required to take part in fourth and eighth grade math and reading. The assessment cycle was accelerated to every two years, and Uncle Sam paid the full tab. (The Trial Urban District Assessment also launched in 2002 with six participating cities. By 2017, there were twenty-seven.)
Yet twelfth grade NAEP remained almost entirely confined to the national level, although NAGB and NCES piloted some state participation in 2009 and 2013, with thirteen states volunteering in the latter year. That it was tried shows that it can be done. But there’s been no follow-up. State-level twelfth grade data wasn’t even an option in the 2015 or 2017 assessment cycles.
Why the neglect? Budget has surely been a factor, but only one. Keep in mind that federal policy—at least the test-related elements of it—has concentrated on the elementary and middle grades and the only statutory NAEP mandates are for grades four and eight. Moreover, high-school curricula are more varied and a lot of kids are no longer in school by twelfth grade, meaning that a twelfth grade sample represents enrolled students, not all young people in that age group. There’s also been widespread mistrust of the twelfth grade assessment itself, particularly as to whether students take it seriously and produce reliable data.
To examine the latter point, NCES and NAGB undertook many studies, convened expert panels, and more. The upshot is pretty convincing evidence that kids do complete the twelfth grade test faithfully enough to yield solid results. What’s more, another set of studies undertaken by NAGB showed that a “proficient” score on the twelfth grade reading assessment—and a somewhat lower cut-point on the math test—are good proxies for “college readiness.” (Career readiness is less clear: jobs are so varied, and proficiency in reading and math is just part of what’s needed, that NAEP isn’t an optimal gauge.)
Now consider where things stand in American education in 2018. Several developments seem to me compelling.
First, as is well known, ESSA has restored greater authority over education—and school accountability—to the states. States therefore have greater need for trustworthy data on education outcomes.
Second, the country is all but obsessed with whether kids are “college and career ready” at the end of high school—and also obsessed (with increasingly worrisome side effects) with graduation rates.
Yet, third, it’s really hard for educators and policy makers to know what college-ready means, how many of the students in one’s school, district, or state are attaining that level, and how that compares with other states and the nation as a whole. The Common Core State Standards did a good job of building cumulatively toward college and (they said) career readiness by the end of high school, but that’s only helpful if states use those or equally rigorous academic standards and if the assessments based on such standards are truly aligned with them, have rigorous scoring standards, and set their “cut scores” at levels that denote readiness for college-level work.
For political reasons, however, dozens of states have bailed out of—or at least repackaged—the Common Core (at Fordham, we plan to evaluate the standards that were substituted), and the multi-state testing consortia that were supposed to deliver comparable state data are both shrinking and refusing to make interstate comparisons.
The upshot: Even as they write “college and career readiness” rates into their ESSA plans, many states have no reliable way to determine how many of their high school seniors are reaching that point and, regardless of what they use for standards and tests, practically none will be able to make valid comparisons with other states. Which is apt to put even more ill-considered pressure on graduation rates or else throw states back to SAT and ACT results even when those are useless for students who don’t take the tests. (Some states therefore now mandate—and pay for—SAT or ACT results for all their high school students. But those scores can’t be compared with anything from earlier grades.)
Back to NAEP: Now is the perfect time to resume reporting results for twelfth graders on a state-by-state basis and to do so on a regular cycle. As happened with grades four and eight, this could restart on a voluntary basis for jurisdictions that want it and—if federal budgets are tight—they could be asked to cover some of the cost. Reading, writing, and math are the obvious subjects to do this with, but how great it would be also to report twelfth grade state results in other core subjects, particularly science and history!
How often? NAEP scores don’t change much in two years. Four-year intervals would likely suffice for twelfth grade. (Indeed, much money could be recaptured for the budget if fourth and eighth grade reading and math testing were switched back to a four-year cycle, although that change needs Congressional assent.)
What would this do for the country? It would—obviously—give participating states a valid and reliable metric for how many of their students are truly college-ready at the end of high school. Because NAEP is based on a sample, it would discourage the kinds of test prep, credit recovery, grade changing, and rate faking that afflict graduation data—and that often afflict state assessments. (Sampling also means less curricular distortion and less pressure on teachers.) And it would enable state leaders to see precisely how their twelfth graders are doing when compared with other states and with the country as a whole.
Yes, this really feels like the single best thing that could happen to American education in the next few years. If NAGB, NCES, the Education Department, and Congressional appropriators get moving, it could surely happen by the 2021 NAEP cycle—and I’ll bet it could be revived and re-piloted in at least a few states in 2019.
Might such an initiative be announced when the 2017 NAEP results are (finally) unveiled in April?
On this week's podcast, Conor Williams, senior researcher at New America, joins Mike Petrilli and Alyssa Schwenk to discuss the state of English language learners. During the Research Minute, Amber Northern examines teacher screening and hiring in Los Angeles public schools.
Paul Bruno and Katharine O. Strunk, “Making the Cut: The Effectiveness of Teacher Screening and Hiring in the Los Angeles Unified School District,” CALDER (January 2018).
Eight years after their adoption by the vast majority of states, public misconceptions about the Common Core State Standards (CCSS) still abound. Even our president seems confused about what exactly the standards are, how they were adopted, and what the federal government can and can’t do to abolish or impose them on states. Given how pervasive these misconceptions are, is it possible to correct them? And to what extent might changing public support for education policies ultimately aid in their implementation?
A new study released via Brookings’ Evidence Speaks series last month explores these issues by employing the fairly simple intervention strategy of a “refutation text,” which comes in various lengths and types and is “written for the purpose of changing widely held misconceptions.” The study sought to answer three key questions: First, what impact does a refutation text have on respondents’ correct conceptions and misconceptions regarding the CCSS? Second, to what extent does it reduce the relationship of political views with correct conceptions and misconceptions? And third, are effects only immediate, or do they persist over a week?
Using a sample of six hundred respondents garnered via Amazon’s Mechanical Turk service, researchers Stephen Aguilar, Morgan Polikoff, and Gale Sinatra, all of USC’s Rossier School of Education, surveyed respondents about their overall support for the CCSS, their sources of information on the standards, and common CCSS misconceptions, such as whether the CCSS were developed by the Obama administration and whether Common Core prevents states from adding content to the standards. They then provided half of respondents with a brief refutation text correcting common misconceptions before asking them the same series of questions again. The other half of respondents, comprising the study’s “control group,” were provided excerpts from a 2015 Education Week article overviewing the CCSS (a “placebo”) in place of the refutation text.
The study’s results promisingly suggest that treatment (i.e., receiving the refutation text) “significantly reduced misconceptions and increased correct conceptions” when compared to the study’s randomly assigned control group, though both groups saw improvements. Exposure to the refutation text also improved respondents’ overall attitudes towards the CCSS and appeared to annul the relationship between political views and CCSS misconceptions. When researchers re-surveyed the same participants one week later, they found effects on participants’ conceptions and misconceptions largely persisted over that time.
The politics and misconceptions surrounding Common Core took many advocates by surprise following their creation in 2010, and it’s unfortunate and downright shocking that many of these misconceptions continue today. Although much more research needs to be done on the use of refutation texts in the education realm—such as whether effects persist for longer than a week and whether results can be replicated with non-self-selecting survey respondents—this study encouragingly suggests that they may be one tool with which to address common public misconceptions about education policies and reforms. It’s also worth noting that this study’s effects likely would have been even larger if the control group had instead received no information, rather than excerpts from an online article on the topic.
SOURCE: Stephen J. Aguilar, Morgan S. Polikoff, and Gale M. Sinatra, “When public opinion on policy is driven by misconceptions, refute them,” Brookings (January 2018).
In recent years there’s been a big push in many states for universal pre-K programs, which make access to preschool education available to all families. And that push appears to be working: 1.5 million three- and four-year-olds were served nationally as of 2015–16 at a cost of $7.4 billion. This study from the Urban Institute’s Erica Greenberg presents results from the first nationally representative poll of one thousand American adults on their preferences for universal pre-K (i.e., publicly funded pre-K for all kids) versus targeted pre-K (publicly funded pre-K for poor kids). It uses data from a larger 2013 survey developed through the Laboratory for the Study of American Values at Stanford University.
What’s interesting is that the survey uses a novel approach to test potential reasons that the American public may or may not support particular forms of preschool, and this merits some discussion. All respondents are first told that these programs are free for families who use them. Then the analyst tests the effect of “financial self-interest” on support for pre-K by randomly assigning respondents to one of two scenarios: in the first, the cost of the program is borne by the respondent; in the second, it is covered by external sources. Specifically:
[Option 1] Most experts agree that if the government is going to pay for preschool, taxes may have to be increased on households like yours.
[Option 2] Most experts agree that the government can pay for preschool without increasing taxes on households like yours.
All respondents are then provided descriptions of both targeted and universal programs and asked, “Do you support or oppose the government funding programs like these?” Instead of preferring one or the other, respondents can show strong, weak, or equal support or opposition for one or both programs. Finally, the survey tests whether support is “racialized,” meaning whether respondents associate targeted programs with a particular racial or ethnic group. All receive a prompt describing a targeted program and are then shown one of two random pictures depicting the program: one of a white teacher with two white students, and the other of the same teacher with two black students. They are then asked if they would support or oppose the program.
The study finds that, on average, there is moderate support for targeted and universal pre-K, with no distinguishable preference for either. Roughly one-third equally support or oppose both forms. And a plurality of 36 percent expresses no preference between the two, being more likely to favor both approaches than to oppose them, with the remainder landing squarely in the ambivalent category (i.e., they neither support nor oppose).
Across the sample, the possibility of higher taxes has no statistically distinguishable effects on support for targeted programs, meaning Americans feel equally favorable toward public investments in low-income preschoolers, whether or not they may have to pay more taxes to fund them. But the threat of higher taxes substantially decreases support for universal preschool by nearly a quarter of a standard deviation. And a small number of subgroups appear consistent in their level of support for both targeted and universal preschool regardless of the possibility of higher taxes—including black and low-income respondents and parents of school-age children, among others.
Respondents who respond positively to an “egalitarianism scale,” which measures beliefs about income equality (e.g., “If wealth were more equal in this country, we would have many fewer problems”), also tend to support preschool in general, especially the targeted kind.
Finally, there is no significant difference in overall support for targeted preschool relative to the race of children attending or by any particular subgroup of respondents (disaggregated by race, income, level of education, etc.). Yet some subgroup differences did surface: self-identified Democrats, liberals, and egalitarians favor targeted approaches, while Republicans, conservatives, and inegalitarians favor the universal approach.
That last finding clearly upends common dogma—and deserves its own research study. But the fact that the threat of higher taxes decreases support for universal pre-K should come as no surprise. In the end, although both kinds of preschool enjoy similar support, the tide changes when taxpayers are asked to consider tapping their own wallets to provide services to all children, regardless of circumstance. That’s something that policymakers looking to advance universal pre-K should keep in mind.
SOURCE: Erica H. Greenberg, “Public Preferences for Targeted and Universal Preschool,” AERA Open (January–March 2018).