As I noted in a recent post, attitudes toward advanced education are cyclical. From gifted education to talent development programs, from honors classes to AP, we have experienced a largely positive stretch of media attention and state-level policy gains. However, advanced education has started to come under fire, especially in urban districts. This pressure has only increased due to the economic crisis and heightened concerns about systemic racism.
Many of the current arguments against advanced programming have been around for a while: Those kids will be fine on their own, you shouldn’t separate students based on ability, teachers can just differentiate for every student in the regular classroom, advanced programs are biased against lower income students, etc. Advocates are fairly adept at countering these misunderstandings.
But a new argument has emerged over the past couple of years: “Research says that advanced programs don’t work.” That’s ridiculous, but it has become increasingly common, often presented as a known fact. Several journalists have recently shared with me versions of “The research clearly says ability grouping/gifted education/acceleration/enrichment doesn’t work.”
And with concerted efforts to eliminate advanced programming in high-profile districts—New York City and Seattle being current examples—this narrative is popping up like an aggressive weed. For example, the NYC Student Diversity Advisory Group (SDAG) asserted throughout its August 2019 report that gifted programs are not associated with evidence of effectiveness, recommending that NYC schools use enrichment instead, which is like saying pain medicine doesn’t work, so use ibuprofen! Think tanks (see this piece as an example) have jumped into the fray to support the SDAG recommendations, making similar claims. All of these attacks tend to be very thinly sourced—if they contain any supporting research at all. For example, the SDAG report has thirty-seven citations, the majority of which are newspaper stories, think pieces, or a report of the SDAG itself—in other words, not research. Contrast those reports with the recent report commissioned by the Massachusetts Department of Education, which is carefully sourced, even-handed, and—not surprisingly—generally positive in its conclusions about the effectiveness of advanced education.
Advanced learning programs are effective, and we have reams of research to support that conclusion. Do we have high-quality, gold-standard, replicated research supporting every possible intervention? Of course not. Show me a field that does. For example, we’ve spent billions to study how to help students learn to read, yet we still have a very wide range of opinions on how to do one of the most foundational tasks in all of education. Furthermore, a field that can be described in that way (i.e., everything is totally research-supported) would be largely absent of innovation, which isn’t a good thing.
What follows is a rough summary of intervention research, listed from more to less evidence of effectiveness. For the purposes of this post, “evidence of effectiveness” is defined as research on positive student outcomes, broadly defined, with a bias toward experimental studies. A colleague who provided feedback on this post made a great point that needs to be kept in mind: All this assumes that the interventions are well-designed and carefully implemented, and even in those cases, not every strategy works the same every time, even in similar contexts, for every student. But in general, research supports the following summaries:
Acceleration: One of the most-studied intervention strategies in all of education, with overwhelming evidence of positive effects on student achievement. So much supporting work that it is impossible to do it any justice here; I’ll just point people to these resources from the Belin-Blank Center and this meta-analysis. We don’t have a lot of evidence that acceleration strategies impact excellence gaps, and there are reasons to believe they probably don’t, at least not by themselves. But the effectiveness of acceleration regarding increases in student learning is hard to question at this point.
Ability grouping: Not as clear cut as acceleration research, but studies find convincing evidence that flexible ability grouping is a net positive for the learning of our most and least advanced students. The meta-analysis linked above found evidence that within-class ability grouping was the most effective of the various grouping strategies for promoting advanced learning, and other studies suggest that flexible ability grouping may help close excellence gaps. The field’s growing research base on curriculum models can also be placed in this category, and those studies suggest that pre-differentiated, prescriptive curricula lead to significant growth in advanced learning.
Enrichment: Much less third-party research and very few experimental studies, making it difficult to determine the actual impact of these programs. What we do have is mixed but generally promising, especially for summer, residential enrichment—such as these examples here, here, and here. (Full disclosure: I work for an organization that runs such summer programs.) What would be most helpful are the types of studies done on problem-based learning many years ago, which gathered evidence that PBL-focused instruction didn’t hurt student test scores. The argument became that PBL brings the possibility for tremendous depth of student learning in addition to a range of other, highly desirable soft skills—such as communication, collaboration, creativity, and interest development—all with no negative impact on test results. Having this type of research would greatly facilitate the implementation of enrichment interventions, and I won’t be surprised if enrichment-based interventions are eventually found to be among the best, if not the best, creativity interventions. Regarding excellence gaps, we have very little high-quality research on the impact of enrichment programs.
Selective high schools: Among the oldest strategies for advanced education are public high schools that admit high-performing students based on entrance exam scores, hence the label “exam schools.” Despite the long history of this approach to advanced learning, very few experimental studies exist, and other sophisticated research designs often produce mixed-to-negative results (examples here and here). The issues surrounding exam schools are complex and controversial, with longstanding questions about diversity, or the lack thereof, in these schools, the value-added for students, and their ability to close excellence gaps.
Given the above evidence, educators and policymakers can reasonably conclude there are research-supported interventions to promote advanced learning. However, my suspicion is that most advanced students are currently being taught using heterogeneously-grouped differentiation. This approach, which relies heavily on teachers to differentiate curricula and instruction for the wide range of student performance levels in their classrooms, is generally the favored approach in teacher preparation programs, at least partially explaining its ubiquity.
Yet there is very little research showing that teachers are effective differentiators for the wide range of student ability and performance levels they find in their classrooms, and many teachers appear to understand differentiation’s limitations. Indeed, when I press my “whole-class differentiation first, last, and always” colleagues and friends for evidence that it works for gifted students, the reply is usually to list districts that are trying this approach. That’s not evidence. Lots of people drive huge SUVs, but that is not evidence that doing so addresses climate change. What evidence we have suggests teachers have a hard time differentiating in the absence of ability grouping (including programs such as Advanced Placement). And again, there’s little to no research on the effects of this form of differentiation on excellence gaps, either positive or negative.
We have an abundance of research on advanced education, and on balance the evidence is positive—certainly more positive than for critics’ alternatives. Do we need more and better research, especially on how to ensure these programs and interventions work for the diverse student body in our schools, particularly regarding how the interventions affect the lives of students based on class, race, ethnicity, and gender? Absolutely, but that can be said about any educational intervention. For now, advocates should be confident in the depth of the research base on advanced education.