Early ESSA plans don't do enough to signal that all students are important
By Brandon L. Wright
By next week, sixteen states and the District of Columbia will have submitted plans to the U.S. Department of Education to meet their obligations under the Every Student Succeeds Act. These publicly available documents describe how states will satisfy a number of ESSA’s requirements, including those concerning testing, school improvement, and accountability. Unfortunately, just as states mostly squandered ESSA's school improvement flexibility, most of these first seventeen plans don’t do enough to hold schools accountable for meeting the educational needs of high achievers—especially those growing up in poverty.
ESSA affords states a critical opportunity to right many wrongs of No Child Left Behind. A strong accountability system signals to schools that the progress of all students is important, but NCLB failed at this by creating incentives for schools to focus their energy almost exclusively on helping low-performing students get over a modest proficiency bar, while neglecting those who were likely to pass state reading and math tests regardless of what happened in the classroom. This may be why the United States has seen significant achievement growth and improved graduation rates for its lowest performers over the last twenty years but smaller gains for its top students.
The Every Student Succeeds Act remedies this by permitting states to more accurately determine school quality than was possible under NCLB, by using growth measures and performance indexes in place of proficiency rates. “Percent proficient” is a poor measure of school quality, as it is, among other things, strongly correlated with demographics, family circumstance, and prior achievement.
Therefore, a strong accountability system features at least two elements:
1. It measures achievement with performance indexes or average scale scores, rather than proficiency rates alone, so that schools get credit for progress at every level of performance.
2. It makes growth for all students—not just those below the proficiency bar—a major component of school ratings, ideally at least half of a school’s summative grade.
Alas, as Table 1 demonstrates, only one state’s plan in the first bunch of seventeen meets both of these criteria—Colorado, which my colleague David Griffith has already praised—though Tennessee and Vermont come close. Six plans do virtually nothing to signal that all students, including high-achievers, are important: Delaware, the District of Columbia, Maine, Michigan, Nevada, and North Dakota.
Table 1. State ESSA plans’ use of performance indexes or average scale scores and emphasis on growth for all students
*State plans to use annual summative grades but hasn’t yet determined the weight for each indicator.
**State will not use annual summative grades.
Only seven of the first seventeen states to submit ESSA plans will measure achievement with performance indexes or average scale scores. Put another way, close to two-thirds will repeat No Child Left Behind’s mistaken focus on proficiency rates alone—measures that, as Morgan Polikoff observes, incentivize states to lower their standards, encourage schools and teachers to ignore low- and high-achievers, throw away vast quantities of useful information, and misrepresent achievement gaps and school effectiveness.
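To make concrete why proficiency rates discard information, here is a minimal sketch in Python. The level weights and the two hypothetical schools are my own illustration, not any state’s actual index formula; the point is only that a performance index registers movement among low- and high-achievers that a proficiency rate ignores.

```python
# Illustrative sketch only: the level weights below are an assumed index scheme,
# not any state's actual formula under ESSA.

LEVEL_WEIGHTS = {"below_basic": 0.0, "basic": 0.5, "proficient": 1.0, "advanced": 1.25}

def percent_proficient(counts):
    """Share of students scoring proficient or above."""
    total = sum(counts.values())
    return (counts["proficient"] + counts["advanced"]) / total

def performance_index(counts):
    """Weighted average that gives partial credit below proficiency and extra credit above it."""
    total = sum(counts.values())
    return sum(LEVEL_WEIGHTS[level] * n for level, n in counts.items()) / total

# Two hypothetical schools with identical proficiency rates but very different achievement.
school_a = {"below_basic": 40, "basic": 10, "proficient": 45, "advanced": 5}
school_b = {"below_basic": 5, "basic": 45, "proficient": 5, "advanced": 45}

for name, school in [("A", school_a), ("B", school_b)]:
    print(name, round(percent_proficient(school), 2), round(performance_index(school), 2))
# Both schools are 50 percent proficient, but the index separates them (A ≈ 0.56, B ≈ 0.84).
```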
The news is a little better for growth. Two states, Colorado and New Mexico, deserve gold stars, as growth will count for at least 50 percent of summative grades in elementary and middle schools, and for at least as much as achievement in high schools. Another four come close. In Illinois and New Jersey, for example, growth constitutes 50 percent of summative ratings for elementary and middle schools, but 0 percent for high schools. Tennessee and Vermont measure growth for all schools and weight it at more than 40 percent of summative ratings in grades K–8, but each falls short of the 50 percent mark. Meanwhile, one state, Oregon, has decided not to issue summative school grades; and two states, Maine and Massachusetts, have not yet determined the weight of their growth measures. Still, that leaves seven states that have missed this opportunity to make growth for all students a major factor in their ESSA accountability systems.
These various shortcomings signal to schools that high achievers—including those from disadvantaged backgrounds—don’t deserve to have their education maximized. Policymakers across the country seem to be acting on the oft-said but always-misguided view that higher-achieving students require less attention because they’ll be fine no matter what. Such neglect is inequitable. The students most harmed are disadvantaged high achievers—boys and girls who face such challenges as disability or poverty, or who come from homes in tough neighborhoods, with ill-educated parents who don’t speak English. They depend far more than upper-middle-class students on the public education system to do right by them. So if they don’t receive the attention that they—like all children—deserve, many will fall by the wayside, destined by circumstances beyond their control never to realize their full potential.
The country also needs these children to be highly educated in order to ensure its long-term competitiveness, security, and innovation. Our highest achievers are young people who hold perhaps the greatest promise for making major advances in science, technology, medicine, the humanities, and much more. Our nation’s economic vitality and growth depend heavily on the quality and productivity of our human capital and its capacity for innovation.
In sum, sixteen of the first seventeen state ESSA plans fail to include accountability indicators that fully signal a commitment to providing all students with the education they deserve. And six of the sixteen do virtually nothing on this score. If we see a similar pattern with the states that submit plans in September, many of America's high achievers—especially those from disadvantaged backgrounds—will continue to be an afterthought, a fate no child should suffer.
Editor's note: A previous version of this article gave Arizona a "medium" rating for its emphasis on growth. That assessment was based on the plan the state submitted to the U.S. Department of Education in April 2017. Since then, however, the Arizona Department of Education revised its accountability plan to rely substantially on a growth-to-standard measure, which now accounts for 25 percent of K–8 annual summative grades, and 10 percent in high schools—the same weights as Arizona's measure of growth for all students. In our view this change is a mistake, and it decreases Arizona's rating in the category from "medium" to "weak."
“Those who cannot remember the past are condemned to repeat it.” It turns out this adage applies not just to global politics, but also to state education policies, and groups on both the left and the right should take heed.
No Child Left Behind (NCLB) is among the most lamented education policies in recent memory, and few of NCLB’s provisions received as much scorn as its singular focus on grade-level proficiency as the sole measure of school performance. Researchers and practitioners alike faulted the fetishizing of proficiency for things like:
- incentivizing states to lower their standards;
- encouraging schools and teachers to ignore both low- and high-achievers;
- throwing away vast quantities of useful information about student performance; and
- misrepresenting achievement gaps and school effectiveness.
(For more details on these criticisms and links to relevant research, see my previous writing on this topic.)
With some prodding from interested researchers and policy advocates, the Department of Education is allowing states to rectify this situation. Specifically, states now are permitted to use measures other than “percent proficient” for their measure of academic achievement under the Every Student Succeeds Act (ESSA). In previous posts, I recommended that the feds allow the use of performance indexes and average scale scores; performance indexes are now specifically allowed under the peer-review guidance the Department published a few weeks ago.
Despite this newfound flexibility, the Fordham Institute finds that only seven of the seventeen states with draft ESSA accountability plans have moved away from percent proficient as their main measure of academic achievement. In fact, the Foundation for Excellence in Education is encouraging states to stay the course with percent proficient, arguing that it indicates whether students are on track for college or career success. While I agree with them that proficiency for an individual student is not a useless measure, it is an awful measure for evaluating whole schools.
Sticking with percent proficient is a terrible mistake that will doom states to many of the same issues they had under NCLB. I implore states that are still finalizing their ESSA accountability systems to learn from the past and choose better measures of school performance. Specifically, I make the following two recommendations:
1. Replace percent proficient with a performance index or average scale scores as the measure of academic achievement.
2. Adopt growth measures that approximate schools’ impacts on their students, rather than growth-to-proficiency measures.
While both EdTrust and the Foundation for Excellence in Education recommend growth-to-proficiency measures, these, again, may be acceptable for individual students; as measures of school performance, however, they are not growth measures that approximate schools’ impacts.
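To make the distinction concrete, here is a minimal sketch under assumptions of my own: the scale-score cut, the three-year “on track” rule, and the simple average-gain stand-in are invented for illustration and are not EdTrust’s, ExcelinEd’s, or any state’s actual model. The point is that a growth-to-proficiency rate can stay high even while proficient students slide, whereas even a crude impact-style growth measure moves with every student’s progress.

```python
# Hypothetical illustration only: the proficiency cut and rules below are assumptions,
# not any organization's or state's actual growth model.

PROFICIENT_CUT = 700  # assumed scale-score cut for proficiency

students = [
    # (last year's score, this year's score)
    (600, 610),  # far below the bar, growing but not fast enough
    (740, 735),  # comfortably proficient, losing ground
    (710, 712),  # barely proficient, essentially flat
]

def on_track_rate(students, years_to_target=3):
    """Growth-to-proficiency: a student counts if already proficient, or if repeating
    this year's gain would clear the cut within the target window."""
    on_track = 0
    for before, after in students:
        gain = after - before
        if after >= PROFICIENT_CUT or after + gain * years_to_target >= PROFICIENT_CUT:
            on_track += 1
    return on_track / len(students)

def mean_gain(students):
    """A crude stand-in for an impact-oriented growth measure: average scale-score gain."""
    return sum(after - before for before, after in students) / len(students)

print(round(on_track_rate(students), 2))  # 0.67: proficient students count even while sliding
print(round(mean_gain(students), 2))      # 2.33: rises or falls with every student's progress
```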
Overall, the evidence on these issues is overwhelming. Educators and policymakers have complained about NCLB and “percent proficient” for as long as the policy has existed. With this evidence, and with the newfound flexibility under ESSA, there is no reason for any state to continue using percent proficient as a measure of school performance. Doing so in spite of our past experience all but ensures that many of NCLB’s worst problems will persist through the ESSA era.
Morgan Polikoff is an associate professor of education at the University of Southern California's Rossier School of Education.
Editor's note: A previous version of this article said that only six states have moved away from percent proficient as their main measure of academic achievement. That has since been updated to seven states, after Arizona did so on Monday, April 26.
The views expressed herein represent the opinions of the author and not necessarily the Thomas B. Fordham Institute.
On this week's podcast, Mike Petrilli, Ian Rowe, and Alyssa Schwenk discuss whether and how schools should teach the “success sequence.” During the Research Minute, Amber Northern examines the cross-subject effects of English language arts instruction.
Benjamin Master et al., “More Than Content: The Persistent Cross-Subject Effects of English Language Arts Teachers’ Instruction,” Educational Evaluation and Policy Analysis (February 2017).
A new report from the RAND Corporation examines trends across twenty-seven counties in Ohio, Pennsylvania, and West Virginia where fracking is a booming business. This is the second of five periodic reports from RAND that track workforce, economic, and educational trends (the previous one is available here). The reports are commissioned by the Appalachian Partnership Initiative, whose aim is “to build the pool of local workers for jobs in the energy and advanced manufacturing sectors” across the tristate region.
This paper uses Census Bureau statistics to highlight a few key workforce trends:
In short, oil and gas development has brought decent-paying jobs for local workers of varying backgrounds. But can the region continue to meet employers’ demands over the long haul, especially for jobs that require higher levels of education? The industry needs not only workers on oil rigs but also environmental engineers, surveyors, market analysts, land agents, and people in the other careers that support it. To look into this question, the report examines indicators from K–12 and higher education.
The analysts report eighth-grade NAEP science and math data from 2015. While they cannot isolate NAEP results for students specifically from this region—the data are reported only at the statewide level—proficiency on these exams remains troubling. In Ohio, for example, just 35 percent of eighth-graders reached proficiency in math and 38 percent in science. Though those figures are slightly above the national averages, they suggest that too few students are on track for post-secondary success in rigorous STEM majors, should they want to pursue that pathway. For a clearer depiction of achievement in this region, future RAND analyses should dig into achievement data reported below the state level.
Turning to higher education, the report also offers data from the post-secondary institutions located in these twenty-seven counties. The researchers find that fewer than 20 percent of college students seek degrees in STEM fields such as math, engineering, and chemistry. With respect to college completion (in any major), they find that just three in five students at four-year universities graduate. The completion rate nose-dives to a mere 12 percent at two-year institutions.
This report reminds us that the oil and gas industries offer well-paying jobs to capable workers, including those without a college education. Yet the working-age population is declining in many parts of the region, and as the industry grows, employers will continue to need workers with a wide range of skills and abilities—from roughnecks and mechanics to geologists and engineers. For the region to maintain a full pipeline of talent, K–12 and higher education will need to equip young people with the abilities needed to excel in all types of careers in this industry.
SOURCE: Gabriella C. Gonzalez et al., “Wages, Employment, and STEM Education in Ohio, Pennsylvania, and West Virginia,” RAND Corporation (2017).
About 95 percent of public school districts pay teachers according to years of experience and degrees earned—a traditional “step and lane” salary schedule. The other 5 percent have captured a great deal of attention, “spurring rapid growth in the number of research studies” and prompting this meta-analysis of the merit pay literature. Researchers at Vanderbilt pulled data from a few dozen merit pay studies to determine the answers to two primary questions: Do performance-pay programs have an impact on student test scores? And to what extent does program design matter—e.g., individual incentives versus group incentives?
The studies chosen for the meta-analysis went through a rigorous selection process. Analysts reviewed almost 20,000 records via social science databases like ERIC or NBER, ultimately choosing forty-four studies on teacher merit pay in the U.S. and internationally. Almost half were from peer-reviewed publications; all of them met standards for sound research design. Twenty-five percent were randomized control trials; the remainder were quasi-experimental designs. The studies came from a twenty-seven-year period (1989–2016), with most of them occurring after 2005 and with an average treatment duration of four years—in other words, the individual performance pay program under study was on average four years old.
The studies measured the impact of performance pay in the form of gifts, one-time bonuses, and permanent salary increases ranging from $26 to $20,000 in U.S. dollars, with the oddly specific former amount coming from a study in a developing country. The researchers ran a number of additional tests and employed safeguards to ensure their sampling of studies was not biased. For example, they included only the most recent data in studies by the same author(s) to avoid overweighting data that appeared in multiple similar studies.
The results are good news for merit pay proponents. On average, teacher participation in a merit pay program was associated with a 0.05 standard deviation increase in student test scores—the equivalent of about four-and-a-half weeks of additional learning. With the international studies removed, the estimated effect was still significant at 0.04 (about three weeks of learning). When treated separately, the results for math were higher (0.07 standard deviations) than for English language arts (0.04), though both were still positive and significant. The researchers don’t hypothesize why merit pay programs in the U.S. had a smaller effect size. One plausible explanation is that the motivational forces at work under merit pay are stronger in countries where teacher pay is lower and where bonuses can represent a much higher percentage of overall pay.
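For readers curious how an effect size becomes “weeks of learning,” here is a hedged back-of-the-envelope conversion. The annual-growth benchmark and school-year length below are my assumptions, not figures taken from the Vanderbilt paper; the arithmetic simply scales the effect size against a typical year of growth.

```python
# Back-of-the-envelope sketch of the "weeks of learning" conversion.
# The 0.40 SD annual-growth benchmark and 36-week school year are assumptions for
# illustration; the paper's own conversion may rest on different benchmarks.

ANNUAL_GAIN_SD = 0.40   # assumed typical one-year gain, in student-level standard deviations
WEEKS_PER_YEAR = 36     # assumed instructional weeks in a school year

def effect_to_weeks(effect_sd):
    """Translate an effect size (in standard deviations) into equivalent weeks of typical learning."""
    return effect_sd / ANNUAL_GAIN_SD * WEEKS_PER_YEAR

print(round(effect_to_weeks(0.05), 1))  # 4.5 weeks, matching the overall estimate
print(round(effect_to_weeks(0.04), 1))  # 3.6 weeks, close to the roughly three weeks for U.S.-only studies
```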
The researchers also found that program design mattered. Teacher pay programs with group incentives (for example, rewards for teams of teachers, or those given at the school level) produced effects that were about twice the size of the average effect in the meta-analysis. Those with on-the-job training components—a requirement for the federal Teacher Incentive Fund—did see learning gains, but ones that weren’t any different from the average effect of all studies in the meta-analysis. In other words, schools might skip teacher training and just invest their marginal dollars in higher pay.
The report authors briefly discuss the theoretical basis for merit pay, noting that it can have a positive impact on student outcomes not just by changing teacher behavior and/or motivation but also because it affects who joins or stays in the profession. Six of the studies included in the meta-analysis found that performance pay had positive effects on teacher recruitment and retention. But this particular analysis can’t really answer the most burning questions it raises, like whether the overall positive effects on student test scores came from reduced teacher turnover, increased teacher effort, or some combination of those (or other) factors. Nor can it help us grapple with basic questions regarding how states should evaluate teachers or define their effectiveness. Finally, the meta-analysis is somewhat lacking in rigor given that the majority of the included studies are quasi-experimental rather than “gold standard” randomized research designs. Such randomized studies of teacher merit pay have in several instances found no effects, or only fleeting ones.
Even so, it’s a worthwhile read and could be especially useful to districts that are backing away from teacher performance pay even as the data show that it could be good for kids.
SOURCE: Lam D. Pham, Tuan D. Nguyen, and Matthew G. Springer, “Teacher Merit Pay and Student Test Scores: A Meta-Analysis,” Vanderbilt University (April 2017).
A new publication by Advance CTE touts parental and student satisfaction with Career and Technical Education (CTE) programs.
The short, six-page report describes the findings of a March 2017 survey that asked 252 ninth- through twelfth-grade CTE students and their parents for their opinions about this pathway. For comparison, the authors put similar questions to a group of 514 “prospective” students and their parents who “demonstrat[ed] some degree of interest after hearing a brief description of CTE.”
Perhaps unsurprisingly, the survey found that CTE students and parents love CTE. Students reported 88 percent overall satisfaction, while 96 percent of their parents responded favorably. In comparison, only 76 percent of prospective students were satisfied with their overall school experience, while 79 percent of their parents liked their kid’s schooling. Statistically significant differences persist down the list of questions, nearly all showing that CTE students and parents were the more satisfied group.
I suspect these findings may be due, in part, to the prospective group feeling like the grass is greener on the other side. After all, they took the survey after being told about CTE by a CTE advocacy organization. Additionally, the authors admit that while their sample is “representative” of the population, the results are “not generalizable to all adult Americans.” In short, take this with a grain of salt. Opponents of CTE could easily paint this study as a red herring, arguing that satisfaction is not the same as success when it comes to preparing our nation’s children for college and career.
Before discounting this work, however, notice one key finding: “nearly eight out of ten CTE students plan to attend college, including 62 percent who plan to attain a bachelor’s degree or higher, which are incredibly consistent with prospective student’s attainment goals.” In other words, CTE programs do not track students away from college. This should comfort skeptics, who worry that CTE relegates students to lesser ambitions.
In my own case, CTE supplemented my higher-education experience by helping me test out a possible career option. My training as an emergency medical technician (EMT) allowed me to take a few steps down a path that led many of my peers to become medical doctors. While I changed course towards research, I still see my time as a volunteer EMT as one of the best parts of college, in part because it informed my career choice. This Advance CTE publication does not ask what these students plan to do in college, but if my experience is any indication, their decisions will undoubtedly be influenced, for the better, by their hands-on experience in CTE.
SOURCE: “The Value and Promise of Career Technical Education: Results from a National Survey of Parents and Students,” Advance CTE (April 2017).