Yes, impacts on test scores matter
By Michael J. Petrilli
This is the fifth and final article in a series that looks at a recent AEI paper by Colin Hitt, Michael Q. McShane, and Patrick J. Wolf, “Do Impacts on Test Scores Even Matter? Lessons from Long-Run Outcomes in School Choice Research.” Read the previous articles, “How to think about short-term test score changes and long-term student outcomes,” “When looking only at school choice programs, both short-term test scores and long-term outcomes are overwhelmingly positive,” “For the vast majority of school choice studies, short- and long-term impacts point in the same direction,” and “Findings about school choice programs shouldn’t be applied to individual schools.”
All week I’ve been digging into a recent AEI paper that reviews the research literature on short-term test-score impacts and long-term student outcomes for school choice programs. Here I’ll summarize the paper and what I believe is wrong with it, and conclude by calling on all parties in this debate to discuss the existing evidence in much more cautious tones.
What the AEI authors did
Hitt, McShane, and Wolf set out to review all of the rigorous studies of school choice programs that have impact estimates for both student achievement and attainment—high school graduation, college enrollment, and/or college graduation. To my eye, they did an excellent and comprehensive job scanning the research literature to find any eligible studies, culling the ones lacking sufficient methodological chops, and then coding each as to whether they found impacts that were statistically significantly positive, insignificantly positive, insignificantly negative, or significantly negative.
They then counted to see how many studies had findings for short-term test score changes that lined up with their findings for attainment. After running a variety of analyses, Hitt, McShane, and Wolf concluded that “A school choice program’s impact on test scores is a weak predictor of its impacts on longer-term outcomes.”
Where the AEI paper erred
But the authors made two big mistakes, as I argued Tuesday and Wednesday.
Both of these decisions are subject to debate. There’s an argument for making the choices the authors did, but also for them going the other way. What’s key is that different choices would have resulted in dramatically different findings.
For bona fide school choice programs, short-term test scores and long-term outcomes line up most of the time
If we focus only on the true school choice programs—private school choice, open enrollment, charter schools, STEM schools, and small schools of choice—and we look at the direction of the impacts (positive or negative) regardless of their statistical significance, we find a high degree of alignment between achievement and attainment outcomes. That’s because, for these programs, most of the findings are positive. (That is good news for school choice supporters!)
School Choice Programs’ Impacts On…
| | ELA | Math | High School Graduation | College Enrollment | College Graduation |
| Positive and Significant | 8 (38%) | 9 (43%) | 10 (45%) | 4 (50%) | 1 (50%) |
| Positive and Insignificant | 8 (38%) | 7 (33%) | 7 (32%) | 3 (38%) | 1 (50%) |
| Negative and Insignificant | 4 (19%) | 4 (19%) | 2 (9%) | 1 (12%) | 0 (0%) |
| Negative and Significant | 3 (14%) | 1 (5%) | 3 (14%) | 0 (0%) | 0 (0%) |
Thirty-eight percent of the studies show a statistically significantly positive impact in ELA, 43 percent in math, 45 percent for high school graduation, 50 percent for college enrollment, and 50 percent for college graduation. If we look at all positive findings regardless of whether they are statistically significant, the numbers for school choice programs are 76 percent (ELA), 76 percent (math), 77 percent (high school graduation), 88 percent (college enrollment), and 100 percent (college graduation). Everything points in the same direction, and the outcomes for achievement and high school graduation—the outcomes that most of the studies examine—are almost identical.
We also find significant alignment for individual programs. The impacts on ELA achievement and high school graduation point in the same direction in seventeen out of twenty-two studies, or 77 percent of the time. For math, it’s thirteen out of twenty studies, or 65 percent. For college enrollment, the results point in the same direction 100 percent of the time.
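The directional tally above is simple to reproduce. As a sketch, here is hypothetical code with sign codings chosen to match the seventeen-of-twenty-two ELA/high-school-graduation figure from the text (the individual pairs are illustrative, not the actual study codings):

```python
# Each study is coded with the direction of its ELA impact and its
# high-school-graduation impact: +1 = positive, -1 = negative.
# Statistical significance is ignored, as in the text's tally.
# These pairs are hypothetical, constructed to total 17 aligned of 22.
studies = (
    [(+1, +1)] * 15   # both impacts positive
    + [(-1, -1)] * 2  # both impacts negative
    + [(+1, -1)] * 3  # positive ELA, negative graduation
    + [(-1, +1)] * 2  # negative ELA, positive graduation
)

# Two impacts "align" when they point in the same direction,
# i.e., their product is positive.
aligned = sum(1 for ela, grad in studies if ela * grad > 0)

print(f"{aligned} of {len(studies)} studies aligned "
      f"({aligned / len(studies):.0%})")  # prints "17 of 22 studies aligned (77%)"
```

The same counting applies unchanged to the math and college-enrollment comparisons; only the sign pairs differ.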
Here’s how that looks for the specific studies:
Impact Estimates on Achievement and Attainment
| | Achievement | High School Graduation | College Enrollment | College Graduation |
| | Neg. | Pos. | Neg. | Pos. | Neg. | Pos. | Neg. | Pos. |
Private School Choice
New York City Vouchers | ELA/Math | X | X
Milwaukee Parental Choice | ELA/Math | X
DC Opportunity Scholarship | ELA/Math | X
Open Enrollment
Chicago Open Enrollment | ELA | X
Charlotte Open Enrollment | ELA/Math | X | X | X
Charter Schools
NYC Charter High Schools | ELA/Math | X
Harlem Promise Academies | ELA/Math | X | X
Boston Charter Schools | ELA/Math | X | X
Chicago Charter High Schools | ELA/Math | X | X
Florida Charter Schools | ELA/Math | X | X
Seed Charter School DC | ELA/Math | X
High Performing California Charter High Schools | ELA/Math | X*
Texas Charters | ELA/Math | X*
Texas Charters “No Excuses” | ELA/Math | X
Texas Charters “Other” | ELA/Math | X
KIPP – enrolled for MS & HS | ELA/Math | X*
KIPP – enrolled for just HS | ELA/Math | X
Mathematica- “CMO 2” | ELA | X | X
Mathematica- “CMO 5” | ELA/Math | X
Mathematica- “CMO 6” | ELA/Math | X
STEM Schools
Texas I-STEM Schools | ELA/Math | X
Small Schools of Choice
Small Schools of Choice – NYC | ELA/Math | X | X
Small Schools of Choice – Chicago | ELA/Math | X
*Dropout rate rather than graduation rate.
**Does not mention math or ELA specifically.
Studies found that three school choice programs improved ELA and/or math achievement but not high school graduation: Boston charter schools, the SEED charter school, and the Texas I-STEM School. But there’s an obvious explanation: These are known as high-expectations schools, and such schools tend to have a higher dropout rate than their peers. That hardly means that their test score results are meaningless.
At the same time, there were four programs that “don’t test well”—initiatives that don’t improve achievement but do boost high school graduation rates: Milwaukee Parental Choice, Charlotte Open Enrollment, Non-No Excuses Texas Charter Schools, and Chicago’s Small Schools of Choice. (Charlotte also boosted college graduation rates.) Of these, only the Texas charter schools had statistically significantly negative impacts on achievement (ELA and math) and significantly positive impacts on attainment (high school graduation). That’s a true mismatch, and cause for concern. But it’s just a single study out of twenty-two.
Meanwhile, the eight studies that looked at college enrollment all found that test score impacts and attainment outcomes lined up: seven positive and one negative.
So is it fair to say, as the AEI authors do, that “a school choice program’s impact on test scores is a weak predictor of its impacts on longer-term outcomes”?
Hardly.
Where the debate goes from here
I’ve tried this week to keep my critiques substantive and not personal. I know, like, and respect all three authors of the AEI paper, and believe they are trying in good faith to help the field understand the relationship between short- and long-term outcomes.
What I hope I have demonstrated, though, is that their findings depended on decisions that easily could have gone the other way.
What all of us should acknowledge is that this is a new field with limited evidence. We have a few dozen studies of bona fide school choice programs that look at both achievement and attainment. Most of these don’t examine college enrollment or graduation. That’s not a lot to go on, especially considering the many reasons to be skeptical of today’s high school graduation rates.
Given this reality, we should be cautious about making too much of any review of the research literature. I believe, as I did before this exercise, that programs that are making progress in terms of test scores are also helping students long-term. Others believe, as they did before this exercise, that there is a mismatch. Depending on how you look at the evidence, you can argue either side.
What we don’t have, though, is a strong, empirical, persuasive case to ditch test-based accountability, either writ large or within school choice programs.
Do impacts on test scores even matter? Yes, it appears they do. We certainly do not have strong evidence that they don’t.
Last month, the American Enterprise Institute published a paper by Colin Hitt, Michael Q. McShane, and Patrick J. Wolf that reviewed every rigorous school-choice study with data on both student achievement and student attainment—high school graduation, college enrollment, and/or college graduation. They contend that the evidence points to a mismatch, specifically that “a school choice program’s impact on test scores is a weak predictor of its impacts on longer-term outcomes.”
This week, I plan to write a series of commentaries on the paper, which I believe is fundamentally flawed. I have several concerns.
That’s a lot to unpack, so I’m going to do this over several posts. I hope you’ll join me for the ride.
First, though, let’s consider the value of looking at long-term outcomes and what they can and cannot tell us about schools and programs.
Let’s start with where the AEI authors got it right. We all hope that our preferred education reforms, at least the most ambitious ones, will “change the trajectories of children’s lives.” That’s particularly true for children growing up in poverty, who typically face depressing odds of success if they attend mediocre schools. They may drop out before finishing high school, or they might graduate but with minimal skills. Either way, they’re unlikely to complete postsecondary education or training or master the “success sequence,” and are thus likely to work low-wage jobs and not enjoy the fruits of upward mobility. Many will struggle to form an intact family, and their children will grow up poor and then repeat the cycle. It’s a bleak picture.
It’s different for affluent children, of course. Great schools may change their trajectories, too, but it will be subtler because they will probably do reasonably well regardless due to the advantages they usually receive at home. Most will graduate high school and go to college no matter what. But with a great education, they have a good chance of learning more, which will help them get into and through a better college and eventually do more and better in the labor market.
For all children, we hope that fantastic schools will benefit them in other long-term ways that are important but harder to measure: encouraging them to become active, informed citizens; identifying strengths and interests that they might put to good use in a career; and helping them become people of good character.
So, yes, long-term outcomes are extremely important, especially for children for whom positive outcomes are far from assured.
Will all effective programs show long-term effects?
As the AEI authors write, it would be deeply disappointing if major reforms of schools and schooling moved the needle on test scores but didn’t have a lasting impact on kids’ lives. In effect, we’d be fooling ourselves into thinking that we’re making a big and enduring difference when in reality we might be wasting our time or money that could better be spent on other strategies.
Of course, it would be unfair to apply such a high standard to small, incremental changes, like adopting a better textbook or extending the school day. Nobody would expect tiny tweaks to have profound impacts, such as transforming a future high school dropout into a college graduate. But if they help lots of students become marginally more literate or numerate, they are still worth doing.
For major, expensive, disruptive reforms, however, stronger long-term outcomes are not too much to ask. And of course the most effective reformers are already aware of this. Consider KIPP, which has been obsessed from day one not just with student achievement but also with getting its KIPPsters to and through college, consistently tracking its numbers and refining its model. That focus has properly led the organization to look at much more than short-term test scores as indicators of whether its students are on track. Much of today’s interest in non-cognitive skills grew out of this work.
If a program showed large and significant impacts on achievement, especially for low-income kids, but poor results on long-term outcomes, it would certainly raise alarms. It might indicate that the program was overly focused on reading and math, or was teaching to the test, or was crowding out other strategies or activities that would actually help kids succeed in the long run. Unless, of course, there were issues associated with the long-term measures themselves.
The problem with high school graduation rates
It almost goes without saying, but at a time when high school graduation rates are skyrocketing, in part thanks to “credit recovery” initiatives and other dubious practices, we must view this measure with a healthy dose of skepticism. Now that most states allow students to graduate without passing an exit exam, or even a set of end-of-course exams, we cannot pretend that the standards for graduation are consistent from school to school. As a result, we must be careful with how we interpret positive or negative impacts on high school graduation rates, especially when evaluating high schools themselves. Boosting graduation rates might mean that schools are better preparing students to succeed—or it could mean that they have lowered their standards.
It would also be inappropriate to assume that a high school that boosts student achievement and better prepares its graduates to succeed in college would also raise its own graduation rates. In fact, we know from a 2003 study by Julian Betts and Jeff Grogger that there is a trade-off between higher standards (for what it takes to get a good grade) and graduation rates, at least for children of color. Higher standards boost the achievement of the kids who rise to the challenge, and help those students over the long term, but they also encourage some of the other students to drop out. If a high school could manage both to boost achievement and to keep its graduation rate steady, that would be an enormous accomplishment. Yet by the logic of the AEI review, such a school would show a mismatch between short-term achievement impacts and “long-term” attainment ones. Such logic is faulty.
Programs that don’t test well
On the other end of the scale are programs that appear to be failures when judged by short-term test-score gains, but that produce impressive long-term results for their participants. It’s this category that most concerns Hitt, McShane, and Wolf, especially in the context of school choice. “In 2010,” they write, “a federally funded evaluation of a school voucher program in Washington, DC, found that the program produced large increases in high school graduation rates after years of producing no large or consistent impacts on reading and math scores.” Later they conclude that “focusing on test scores may lead authorities to favor the wrong school choice programs.”
It’s a legitimate concern, and one I share (setting aside my misgivings about high school graduation rates expressed above). I played a tiny role in helping launch the D.C. voucher program when I served at the U.S. Department of Education, and I support the expansion of private school choice programs for low-income students. I can imagine why the private schools in the D.C. program might struggle to improve test scores, especially when compared to highly effective (and highly accountable) D.C. charter schools and an improving public school system. But I can also imagine that the experience of attending a private school in the nation’s capital could bring benefits that might not show up until years later: exposure to a new peer group that holds higher expectations in terms of college-going and the like; access to a network of families that opens up opportunities; a religious education that provides meaning, perhaps a stronger grounding in both purpose and character, and that leads to personal growth.
It would be a shame—no, a tragedy—for Congress to kill this program, especially if it ends up showing positive impacts on college-going, graduation, and earnings. The same might be said about large voucher programs in Ohio, Indiana, and Louisiana, all of which have shown disappointing early findings in terms of student achievement but might be setting children on paths to future success. Policymakers should be exceedingly careful not to end such programs prematurely.
***
There are therefore several things to think about as we further explore the AEI study: long-term outcomes do indeed matter a lot, especially for poor kids; if large test-score gains don’t eventually translate into improved long-term outcomes, it is a legitimate cause for concern; and we must stay open to the possibility that some programs could help kids immensely over the long haul, even if they don’t immediately improve student achievement. At the same time, we should be skeptical about using high school graduation rates as valid and consistent measures of attainment.
So is there really a mismatch between short-term scores and long-term outcomes, especially for school choice programs? And do existing studies really raise red flags about using test scores to make decisions about individual schools? Tune into tomorrow’s installment to find out!
An enduring finding since the main NAEP assessment’s inception almost three decades ago is the relative performance of different racial and ethnic groups—i.e., achievement gaps. See, for example, figure 1.
Figure 1. NAEP average scale scores, eighth grade math, 1990–2017*
*Note that the line for Asian student scores is broken at 1996 because the applicable data didn’t meet reporting standards in that year.
There are myriad reasons for these score differences, and many concern commonly cited correlates of achievement that NAEP tracks, such as family income, parent education level, and the primary language that a child is exposed to at birth. (There are also the significant and harmful effects of racism, discussed further below, which are evinced by these enduring gaps and adversely affect all of these factors but cannot be directly measured by NAEP surveys.) Students who come from more affluent families, for example, or who have highly educated parents, or whose first language is English tend to do better in school and on tests than their less advantaged peers.
Yet together these commonly measured correlates fail to tell the whole story. See table 1.
Table 1. 2017 NAEP scale scores and student factors*
| | Average 8th grade math scale score | Average 8th grade reading scale score | Eligible for FRPL (%) | Below federal poverty line (%) | Parent is a college graduate (%) | English language learner (%) |
| Asian | 312 | 284 | 32 | 8 | 68 | 12 |
| White | 293 | 275 | 28 | 8 | 66 | 1 |
| Hispanic | 269 | 255 | 71 | 19 | 30 | 19 |
| Black | 260 | 249 | 71 | 20 | 54 | 2 |
*The federal poverty rate data come from the U.S. Census. They are from 2016 and are for households that include related children under the age of eighteen. All other data come from NAEP.
Consider family income, represented by two measures: eligibility for free or reduced price lunch, for which a family must be at or below 1.85 times the federal poverty line; and the federal poverty rate for households that include related children under the age of eighteen. Taking into account margins of error, Asian and white students are approximately identical, as are Hispanic and black students. But their NAEP results aren’t. Asian students have long outperformed their white peers in math and reading, and the same is true of Hispanic children compared to black pupils.
Furthermore, parent education level and a student’s first language do little to explain these gaps. Compared to their white peers, for example, Asian students are equally likely to have college-educated parents—and are twelve times more likely to be English language learners. And Hispanic students are significantly less likely to have a mom or dad with a higher education degree than black pupils, and more than nine times more likely to speak English as a second language.
There are, however, at least two other variables to consider: marriage rates and parent expectations. Combined with the other correlates—income, parent education, and first language spoken—these two variables paint a more coherent picture. See table 2. (And see here for an explanation of what marriage means in this context, why it's being used, what causes may affect marriage rates, and some ways to mitigate the harmful effects of those causes.)
Table 2. 2017 NAEP scores and 2016 data on marriage rates, poverty rates, and parent expectations*
| | Average 8th grade math scale score | Average 8th grade reading scale score | Marriage rate (%) | Marriage rate above federal poverty line (%) | Marriage rate below federal poverty line (%) | Below federal poverty line (%) | Parents expect at least a 4-year degree (%) |
| Asian | 312 | 284 | 84 | 87 | 60 | 8 | 90 |
| White | 293 | 275 | 70 | 76 | 33 | 8 | 65 |
| Hispanic | 269 | 255 | 60 | 68 | 37 | 19 | 72 |
| Black | 260 | 249 | 36 | 46 | 12 | 20 | 61 |
*The federal poverty and marriage rate data come from the U.S. Census. They are from 2016 and are for households that include related children under the age of eighteen. And the parent involvement data come from “Parent and Family Involvement in Education: Results from the National Household Education Surveys Program of 2016,” National Center for Education Statistics, U.S. Department of Education (September 2017).
Relative to white families, Asian households are much more likely to be headed by married parents—especially those earning an income below the federal poverty line—and more likely to have moms and dads who expect their children to complete at least a four-year degree. It thus seems quite plausible that Asian students outscore their white peers, despite having similar rates of poverty and parent education and monumentally higher rates of being English language learners, in part because they are more likely to live in two-parent households and with parents who expect them to earn a bachelor’s degree.
Similarly, the marriage rate among Hispanic households with children under the age of eighteen is almost double that of comparable black households—and when you look at families living below the poverty line, the rate is more than three times higher. Hispanic parents are also 18 percent more likely than black parents to expect their children to earn at least a bachelor’s degree. This would help explain why Hispanic students outscore their black peers, even though they’re much less likely to have a parent who is a college graduate, much less likely to speak English as a first language, and equally likely to be poor.
To be sure, these are all correlations. No causation is demonstrated. And we know that myriad societal inequities and biases that aren’t measured by NAEP or the census and are outside the control of individual Americans also affect these measures: the stresses of multi-generational or extreme poverty; harmful laws that incarcerate black men at inexcusably high rates; overt, passive, and subconscious racism that affects one’s ability to secure a job, housing, and necessary care. All of that—and much more—influence these factors and outcomes.
Nevertheless, these data are suggestive and ought to encourage education analysts and researchers to focus more on the relationships among these variables—including family stability, as my colleague Ian Rowe has long urged—and how they individually affect academic achievement. And policymakers, community leaders, and education reformers ought to work to lessen foundational problems like systemic racism and income inequality, ought to consider these out-of-school factors more regularly, and ought to figure out how we can help mitigate their most harmful effects—and encourage the beneficial ones.
On this week's podcast, Paolo DeMaria, Ohio's State Superintendent of Public Instruction, joins Mike Petrilli and Brandon Wright to discuss the state’s new strategic plan for education, which Fordham’s gadflies find disappointing. On the Research Minute, Amber Northern examines the access, perseverance, and outcomes of first-generation college students.
Emily Forrest Cataldi et al., “First-Generation Students, College Access, Persistence, and Postbachelor's Outcomes,” National Center for Education Statistics, U.S. Department of Education (February 2018).
The highly wonky debate around whether “super subgroups”—which combine smaller subgroups, like multiple racial minorities—actually meet the requirements of the Every Student Succeeds Act isn’t that exhilarating. But a recent study from George Washington University’s Matthew Shirrell suggests that these are far from humdrum decisions. How student subgroups are defined can impact key teacher outcomes.
The study examines the effects of NCLB-style subgroup accountability on teacher turnover and attrition. Recall that the No Child Left Behind Act required that schools make yearly improvement, not only in overall student achievement, but in the achievement of various subgroups. The study explores whether holding elementary school educators accountable for the performance of white and black students affected the likelihood that these teachers would leave their schools or leave teaching altogether.
Shirrell examines the initial year that subgroup accountability was implemented in North Carolina, using data from 1999–2000 (before NCLB was implemented in 2002–03) through 2003–04, and tracking teacher outcomes one and two years afterwards. He uses demographic data on every public school elementary teacher in the state, though he limits the study to black and white teachers because they comprise the vast majority of elementary school teachers (14 percent and 84 percent, respectively). North Carolina had a state minimum subgroup size of forty; schools with forty or more students in a particular subgroup were held accountable for those kids’ academic performance, and those with fewer than forty students were not. Thus, the study can make use of a regression discontinuity design, whereby one can examine outcomes for schools right near the cutoff, with the idea that schools with thirty-nine tested students are otherwise similar to schools with forty tested students, except for the subgroup accountability component. (Since the Tar Heel State already had a strong state-level accountability system prior to NCLB, the counterfactual is essentially that strong system without the subgroup requirement.) Shirrell also conducted various empirical checks to ensure that schools did not manipulate the number of tested black or white students in NCLB’s first year and that teachers did not sort themselves onto one side of the cutoff or the other.
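The regression discontinuity logic here can be sketched in a few lines. The following is a minimal illustration with simulated data—the sample size, bandwidth, and attrition probabilities are all invented for the example, not taken from Shirrell’s study:

```python
import random

# Hypothetical illustration of a regression discontinuity comparison:
# schools with >= 40 tested students in a subgroup face subgroup
# accountability; schools just below the cutoff serve as the comparison.
random.seed(0)

CUTOFF = 40     # North Carolina's minimum subgroup size
BANDWIDTH = 5   # compare schools within 5 students of the cutoff

# Simulate (subgroup_size, teacher_left) pairs. We assume, purely for
# illustration, that accountability lowers the chance a teacher leaves.
schools = []
for _ in range(2000):
    size = random.randint(20, 60)
    accountable = size >= CUTOFF
    p_leave = 0.20 - (0.05 if accountable else 0.0)
    left = random.random() < p_leave
    schools.append((size, left))

def attrition_rate(lo, hi):
    """Share of teachers leaving among schools with subgroup size in [lo, hi]."""
    group = [left for size, left in schools if lo <= size <= hi]
    return sum(group) / len(group)

below = attrition_rate(CUTOFF - BANDWIDTH, CUTOFF - 1)       # not accountable
above = attrition_rate(CUTOFF, CUTOFF + BANDWIDTH - 1)       # accountable

print(f"attrition just below cutoff: {below:.3f}")
print(f"attrition just at/above cutoff: {above:.3f}")
print(f"estimated discontinuity: {above - below:.3f}")
```

The jump in the outcome at the cutoff is the estimated effect of accountability, on the assumption that schools on either side of the threshold are otherwise comparable. A real analysis would also fit trends in subgroup size on each side of the cutoff rather than simple means, and would run the manipulation and sorting checks Shirrell describes.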
A key finding is that subgroup-specific accountability for black and white subgroups had no overall effects on teacher turnover or attrition from the profession. Separate analyses by teacher race, however, revealed that it had a significant impact on whether black teachers remained in or left teaching in North Carolina. Specifically, black teachers who taught in schools that were held accountable for the performance of black students were much less likely to leave teaching than were black teachers who taught in schools not held accountable for that subgroup.
However, black subgroup accountability did not affect the likelihood that white teachers left or remained in teaching. Moreover, accountability for the white student subgroup had no effects on either black or white teachers. Results for teacher turnover (meaning leaving the school versus leaving teaching) showed a similar pattern as those for black teacher attrition.
Shirrell speculates that, “Seeing that the black students ‘counted’ in their schools, and that their schools were taking action to address the achievement gap between black and white students, may have caused black teachers to remain in teaching that might otherwise have left. In below-cutoff schools black teachers might have been discouraged by their schools falling just short of the cutoffs and chosen to leave teaching.” Seems plausible enough.
What’s not addressed is whether grouping students by achievement—such as the lowest-performing 25 percent of students in a school (regardless of their race, income, or disability status)—would have similar effects on teacher turnover and attrition. States such as Connecticut, Massachusetts, and Florida might especially like to know the answer to that super-sized question.
SOURCE: Matthew Shirrell, “The Effects of Subgroup-Specific Accountability on Teacher Turnover and Attrition,” Education Finance and Policy (Forthcoming).
The assignments students complete in a classroom guide their learning and reflect teacher and school expectations. A new report from The Education Trust analyzes the quality of over 1,800 classroom math assignments, finding relatively strong alignment to standards but little focus on cognitive demand and rigor. Worse, the report finds significant differences in assignment quality between high- and low-poverty schools and honors and non-honors courses.
Researchers applied a framework of five elements to math assignments from twelve middle schools in six districts across three states: alignment to the Common Core, cognitive challenge, rigor, mathematical understanding, and the potential for motivation and engagement. They measured assignments using multiple “analysis indicators” for each of the five elements. Sixty-three teachers responsible for ninety-one math courses submitted all of their classroom assignments (tasks that students completed independently or with peers) over a two-week period. Half of the schools had free and reduced price lunch rates above 65 percent and were classified as high poverty.
To determine whether an assignment was cognitively challenging, the researchers used Webb’s four “depth of knowledge levels.” They found that just 9 percent of assignments demanded strategic or extended thinking (levels three and four) rather than basic recall or application (levels one and two). At high-poverty schools, that number was just 6 percent, half the rate at low-poverty schools. And for all eighth graders taking pre-algebra courses (rather than algebra I), only 3 percent of assignments were considered cognitively challenging.
The report also found that most assignments were missing key aspects of the Common Core math standards. Although the standards encourage students to develop deep understanding through a combination of three elements of rigor (procedural fluency, conceptual understanding, and application), researchers found procedural fluency incorporated more than twice as often as the other two elements (87 percent, compared to 38 and 39 percent, respectively). The standards also say that students must learn to justify conclusions and communicate their understanding, but less than 40 percent of assignments required students to write more than the answer. Again, a difference stood out between high-poverty schools, where 26 percent of assignments required answers to be justified, and low-poverty schools, where 38 percent required justification.
Absent from the report is a clear benchmark for what its authors think would be the ideal percentage of assignments fulfilling various standards from the analysis framework, which the Education Trust says is best used to evaluate a set of assignments across multiple days or weeks. Without suggested goals for time spent on each criterion (or a sample two-week plan showing educators how the framework’s elements of assignment quality should progress), the report’s admonition that “we as educators must do more” remains a vague suggestion rather than a guide for practitioners.
Although the report shies away from providing guidelines for teachers and curriculum designers, it is clear about one problem: the inequity in assignment quality between high- and low-poverty schools. Low-rigor assignments reflect low expectations, and students only required to apply basic concepts will have a hard time competing with wealthier peers who are given more opportunities for strategic thinking. With several examples of model assignments and a thoughtful analysis framework, the report should prompt district- and school-level practitioners—particularly those in high-poverty areas—to critically examine the quality of their own math assignments.
SOURCE: “Checking In: Are Math Assignments Measuring Up?,” The Education Trust (April 2018).