Want great teachers and higher achievement? A study from Wisconsin suggests trying flexible pay.
The vast majority of Ohio teachers are paid according to salary schedules that reward seniority and degrees earned, the result of state laws that require school districts to follow this rigid compensation scheme. Unfortunately, this method fails to acknowledge other factors that legitimately should influence teachers’ wages, including their classroom effectiveness, professional responsibilities, or demand for their labor.
But what if these constraints were loosened, so that school leaders could pay teachers in a more flexible way? A recent study by Yale University’s Barbara Biasi looks at what happened in Wisconsin after lawmakers passed reforms via Act 10 in 2011 that allowed districts to ditch the traditional salary schedule and adopt flexible pay policies. Roughly half of Wisconsin’s districts leveraged these new autonomies to negotiate salaries with each employee, much like many businesses do. The other half maintained a traditional seniority- and credentials-based salary schedule that applied to all teachers.
Her analysis reveals several eye-opening findings about the Wisconsin districts that switched to a flexible pay structure.
Unlike Wisconsin, Ohio still prevents district leaders from experimenting with different compensation strategies. Even if they want to adjust salaries to attract and retain talented teachers or fill hard-to-staff positions, they remain locked into structures that reward seniority and credentials. Yes, the politics of reform will be difficult to navigate for Ohio lawmakers. But the evidence, not to mention common sense, points to giving schools more autonomy and flexibility in determining teacher pay. Some districts may stick with what they know—the single salary schedule—but as the Wisconsin example shows, others will use their newfound discretion to gain an edge in the labor market and drive higher student learning. Why stand in their way?
This November, Americans will cast their votes in thousands of local school board races. The stakes couldn’t be higher. These governing bodies will decide how public schools handle hot-button issues like masking requirements and critical race theory. They’ll also be overseeing efforts to ensure that kids get back on track academically, not to mention carrying out usual board responsibilities such as crafting budgets and adopting curricula.
Given the important role of school boards, voters should have the information needed to select candidates who reflect their ideals. In most elections, citizens can rely on the party labels that appear on ballots to help guide their choices—even if they don’t know much about particular candidates. Though imperfect, these partisan ballots facilitate more informed voting than asking citizens to do background research on every candidate seeking office.
Yet school board elections, along with some other municipal races, do not follow this norm. Under Ohio law, for example, school boards are technically “nonpartisan” in the sense that party affiliations are omitted from the ballot. Only candidate names appear. For many voters, a nonpartisan ballot is of no help in the selection process. The last time I voted in a school board race, I didn’t recognize any of the names, nor did I have a good sense of where they stood on education issues. Others have probably felt the same unease, as if voting in school board races is no more than guesswork.
Beyond my own frustrations, a small body of research suggests concerning consequences of nonpartisan elections. Without the help of party labels, one study finds, voters were more likely to skip the nonpartisan races on a ballot even as they cast votes in the races with party labels—what scholars call voter “roll off.” Other studies indicate that voters, without a partisan cue, rely instead on the likely ethnicity or gender of the candidate. Another analysis finds that nonpartisan elections depress voter turnout, which in the case of Ohio’s school board elections is already pitifully low due to an “off cycle” schedule that doesn’t align with national elections. Recognizing drawbacks such as these, groups on both sides of the political spectrum have advocated for partisan ballots in local elections.
Moreover, school board elections are also an important form of local accountability and oversight. When citizens are unhappy with the district, they can always voice their dissatisfaction at the ballot box. Yet nonpartisan elections likely weaken accountability because voters don’t know which party is in power and who deserves the boot for acting contrary to their interests. As political scientist Charles Adrian theorized many years ago, nonpartisan elections “tend to frustrate protest voting” as people cannot easily identify which candidates belong to the “in” or “out” group. In other words, it’s hard to shake up the status quo when you can’t figure out who’s part of it.
Including party labels in school board elections seems like a commonsense reform that would give voters more information, while also potentially increasing participation and enhancing local accountability. But such a change would be an uphill battle. For one, there’s inertia. Since the early 1900s, the vast majority (though not all) of school board races in the U.S. have been nonpartisan. Moving to a partisan ballot would certainly challenge longstanding tradition. Partisan elections could also spark criticisms that they create unnecessary political rancor, perhaps the kind we see at a national level. One might argue, for example, that there’s no Democratic or Republican way to run a school.
But that sentiment ignores real differences in opinion about how best to govern schools and tackle educational challenges. For instance, according to EdNext polling from 2020, Democrats express more positive views toward increased school spending and the role of teachers unions, while Republicans voice more support for school choice and merit pay policies. More recently, polling reveals significant partisan splits over issues of school reopenings and critical race theory. Anecdotally, the Cato Institute’s “public schooling battle map” documents hundreds of cases where people’s values and beliefs have come into conflict. In sum, school boards regularly make decisions where political attitudes come into play—things like whether to seek tax increases, how to negotiate union contracts, and what type of relations the district has with schools of choice. Those aren’t merely technocratic issues.
All this raises some important questions. Are nonpartisan elections really insulating public schools from divisive politics? Or is it naïve to think that school boards are apolitical governing bodies? If indeed there are ideological differences about how to run schools and educate children, shouldn’t the electorate get a hint about where candidates are likely to stand? Why keep it a secret?
In democracies, citizens work to resolve their differences through elections and the political process. Rather than suppressing differences under the illusion of “nonpartisanship,” moving to more transparent school board elections might just make for a healthier democracy and more responsive public schools.
During summer 2012, Governor Kasich signed House Bill 525, legislation that allowed the Cleveland Metropolitan School District (CMSD) to implement a city-wide school turnaround plan. Developed by city and community leaders, the Cleveland Plan outlined strategies that sought to raise student achievement, increase parental engagement, encourage the growth of quality public school options, and provide school leaders with the autonomy and resources needed to improve outcomes.
It’s been nearly a decade since the city embarked on this set of ambitious initiatives. But is it working? According to a recent study published by the Council of the Great City Schools, the answer is yes. The report analyzes student-level data from the National Assessment of Educational Progress (NAEP) and finds that Cleveland showed statistically significant positive effects in fourth and eighth grade math and eighth grade reading. The city also showed significant improvement over time. In fourth grade math, for instance, Cleveland moved from a negative impact on student learning in 2009 to a positive one in 2019.
These impressive gains earned Cleveland the moniker as one of the fastest-improving city school districts in the country. But what led to such notable progress? Successful school improvement is complex, and it’s difficult to tie gains to specific interventions or strategies. But here are three critical areas where Cleveland is doing things right.
1. Stable and empowered leadership
In many big cities, leaders come and go, and reform efforts often change with them. That’s not the case in Cleveland, where Eric Gordon has been the district’s Chief Executive Officer since 2011. Gordon has not only been at the district’s helm during the entirety of the Cleveland Plan’s implementation; he also helped lobby state legislators to get the plan passed in the first place. He’s also a member of Chiefs for Change, a network of state and district education leaders who advocate for policies that promote high standards, quality school choice, and effective teachers—strategies that have been shown to improve student outcomes.
School improvement isn’t just about the head honcho though. It’s also important to empower school leaders. In fact, that was one of the Cleveland Plan’s four key strategies—transferring authority and resources to individual schools and their leaders. One of the ways CMSD accomplished this was to transition to a student-based budgeting formula. Under this system, resources are allocated based on projected enrollment and student characteristics. For example, special education students, English learners, and students who are below or above proficient in reading (based on test scores from third and eighth grade) all receive increased funding amounts. CMSD principals receive their budget allocation for the following year in early February and then use this information to design a customized budget plan based on their school’s academic goals. This increased autonomy gives principals the authority and funding to meet the unique needs of their students—which is crucial to school improvement efforts.
2. Accountability and transparency
Accountability for student outcomes was cited as a crucial strategy for cities that showed substantial growth in the council’s report. Cleveland is no exception. The Cleveland Plan set an accountability tone from the start by calling for the creation of the Cleveland Transformation Alliance (CTA), a nonprofit responsible for supporting implementation of the plan and holding schools accountable. Each year, the CTA releases a report that measures the district’s progress against a broad set of goals. The most recent report lists plenty of positives as well as some areas for growth. But it’s the fact that an annual report exists at all—especially one that examines measures like high-quality preschool enrollment, participation in AP and College Credit Plus, state report card ratings, remediation rates, and college access and persistence—that’s crucial. All this information is available for other districts too, of course. But in Cleveland, data is gathered and published each year in a single document that’s discussed by city and district leaders, easily accessible to the general public, and allows parents to keep tabs on improvements.
3. School choice
CMSD is unique in Ohio because it’s a portfolio district. Families use the district’s school choice portal to select a school for their child rather than being required to enroll at the closest building. They can choose up to five schools, ranking them in order of preference. And while there are limitations—some schools have acceptance criteria or don’t have enough seats for all the students who apply—every school is open to every student.
Charter schools have also contributed to the city’s success. Research shows that a higher charter market share in urban areas is associated with significant achievement gains for Black and Hispanic students, and that the existence of charters doesn’t have a negative effect on district schools. That’s playing out in Cleveland, where CMSD sponsors nine charters and has partnered with an additional eight. These seventeen schools adhere to the Cleveland Plan the same way district schools are required to, and they’re eligible to receive a portion of funding from district levies. Many of these charter schools are part of Breakthrough Public Schools, one of Ohio’s best charter networks. And Cleveland became a Gates Compact city in 2015, which means the city received grant funding to support collaboration efforts between the district and charter schools.
But all that means nothing if families aren’t aware of their choices. City leaders seem to know that, and are working to improve school choice outreach to families. The most recent report from the CTA outlines results from the Family School Choice Listening Campaign, which was conducted during the summer of 2020 and included focus groups, a forum, and a brief survey. Findings from the survey indicate that when it comes to choosing a school, Cleveland parents put the highest priority on safety, programs, location, and teacher quality. They’re also eager for better communication, such as a website that clearly outlines information about available programs, deadlines, and who to call when they have questions. The CTA report also notes the relaunch of the Ambassador Program, an initiative that provides training about school choice to staff at partnering organizations throughout the city. Training topics include the School Quality Guide and the district’s choice website, myCLEschool.org. As of October, the CTA had trained 125 people from fourteen different organizations.
***
Cleveland hasn’t reached the promised land just yet. Proficiency rates are still too low, especially for disadvantaged populations, and the pandemic seems to have exacerbated existing gaps. But after nearly a decade of sustained reform efforts, the city’s schools are heading in the right direction. Moreover, for other cities seeking school improvement strategies, the Cleveland approach might just be a model to emulate.
At its simplest, the belief gap is the gulf between what students can accomplish and what others—particularly teachers—believe they can achieve. It is especially pernicious when beliefs around academic competency are fueled by extraneous information such as socioeconomic status, race, or gender. All too often, the assumption of low academic ability on the part of adults becomes actual underachievement in young people. A new study looks at one simple way to mitigate such extraneous information and remove those assumptions: relying on demonstrated academic ability.
The data on which the belief gap analysis is based was collected during a separate study on the efficacy of an online student evaluation platform called Assessment-to-Instruction (A2i) in several elementary grades. A2i uses regular, ongoing student assessment to not only track a student’s progress through a literacy curriculum, but also to help guide teachers as to what and how much additional work students need to reach competency. The belief gap study, conducted by researchers from the University of California, Irvine and Texas A&M University, looked at the effects of both the assessment data and the professional development (PD) around it on teachers’ perceptions of student ability.
The A2i study took place in an unnamed district in northern Florida in the 2008–09 school year, in five elementary schools ranging from urban to rural in setting. The belief gap researchers focused on a subset of the participants—twenty-eight teachers and 446 of their first-grade students. Students were representative of the district community: 84 percent were White, 6 percent were multiracial, 5 percent were Black, 3 percent were Hispanic, 2 percent were Asian, and 0.7 percent were Native American. Approximately 46 percent of the students were boys, and 27 percent of the students qualified for the National School Lunch Program (NSLP). All teachers were female, with an average of seventeen years of teaching experience. One teacher identified as Black; the rest identified as White. Fifteen teachers and their 255 students were randomly assigned into the A2i treatment group, while thirteen teachers and their 214 students were assigned to the “control” group.
It’s important to note that, due to the construct of the main A2i study, a pure control group was not possible for the belief gap analysis. Both groups of teachers received the same amount of PD regarding research-based teaching, but the focus varied between groups. The treatment group was focused on why and how to use A2i assessment data to tailor their instruction; the control group received more generalized PD on the potential value of any assessment-guided instruction. Teachers in the control group delivered business-as-usual instruction during their literacy block and implemented a research-based intervention called Math PALS for their mathematics class periods. They received infrequent assessment data for their students but were not asked to tailor their instruction based on that data. The treatment group teachers used Math PALS, too, but utilized the frequent, dynamic assessment feedback from A2i to guide and shape their literacy instruction.
At the midpoint of the school year, teachers completed the Social Skills Rating System (SSRS) for all of the first graders in the study. SSRS is a norm-referenced, multirater assessment tool comprising fifty-seven items in three measurement areas: academic competence, problem behaviors, and social skills. The researchers hypothesized that teachers using the frequent assessment feedback from A2i for the first half of the year (and exposed to the A2i-specific PD) would produce more accurate predictions of student competence than their control group peers, and that potential biases in predictions based on student characteristics would be minimized.
Generally, this hypothesis proved correct. Teachers in the treatment group provided a more accurate rating of their students’ academic competence than their control group peers by choosing ratings that agreed with student test scores. Control group teachers—those without access to the A2i assessment data—generally rated the overall academic competence of their students lower, and rated students who qualified for the NSLP as less academically competent than more affluent students. The strength of this effect varied based on the percentage of NSLP students attending a given school: the fewer NSLP students in a school, the lower the control group teachers’ ratings of those students. Interestingly, teachers’ perceptions of students’ social skills and behavior problems appeared impervious to the treatment. Teachers in both groups who rated students’ behavior or social skills as poor also predicted lower academic competence for those students.
Students in the A2i classrooms achieved greater gains in test scores between fall and spring than students in the control classrooms, which likely speaks more to the primary study of A2i’s effectiveness. However, teacher ratings of academic competence were positively and significantly correlated to higher test scores in both literacy and math. For example, for every one-point increase in a teacher’s rating of academic competence, their student’s score on reading comprehension increased by 0.24 points. Thus, while it would be something of a leap to assert that a high competency rating directly results in higher test scores, there is clearly an interaction.
To the extent that teacher ratings are influenced by student and classroom characteristics unrelated to their actual performance—often negatively—any successful effort to mitigate that influence should yield positive outcomes for students. Teachers participating in PD on data-driven personalized instruction were significantly more accurate in their competency judgments regardless of socioeconomic status and other non-academic characteristics. Filtering out the noise is a great first step to eliminating the belief gap.
SOURCE: Brandy Gatlin-Nash, et al., “Using Assessment to Improve the Accuracy of Teachers’ Perceptions of Students’ Academic Competence,” The Elementary School Journal (June 2021).
A recently released report by the Council of the Great City Schools seeks to determine whether urban public schools—including charters—are succeeding in their efforts to mitigate the effects of poverty and other educational barriers.
To conduct the analysis, the council used student-level data from administrations of the National Assessment of Educational Progress (NAEP) from 2009 through 2019 in math and reading for grades four and eight. The analysis compares two mutually exclusive groups: Large City Public Schools (LCPS) and All Other Schools (AOS), a category that includes both public and private schools. Both groups include charters.
In terms of demographics, the makeup of students in large city schools differs substantially from that of all other schools. LCPS enrolled higher proportions of Black and Hispanic students, and these schools were more likely to serve large numbers of students who were eligible for free or reduced-price lunch or identified as English learners. In fact, students in large city schools were approximately 50 percent more likely to be poor, twice as likely to be English learners, and twice as likely to be Black or Hispanic. Large city schools also tended to have a higher percentage of students whose parents didn’t finish high school.
The council begins its analysis by comparing actual NAEP performance levels—unadjusted scale scores—for large city schools and all other schools. Given the demographic differences outlined above, large city schools unsurprisingly scored below all other schools on every NAEP administration between 2009 and 2019. However, LCPS improved their performance faster than AOS. In fact, gaps between the nation’s urban schools and all other schools narrowed by between one-third and nearly one-half during this time period, depending on grade and subject level.
Next, the council compared large city schools and all other schools using adjusted scale scores. They controlled for a laundry list of demographic variables, allowing them to statistically predict expected results and compare those to actual NAEP results. The difference between actual and predicted scores was dubbed “the district effect,” and was used to identify which urban school districts produced enough “educational torque” to mitigate poverty and other barriers. (Keep in mind that, despite the nomenclature, the analysis includes charter schools.)
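In plain terms, the council’s “district effect” is just the gap between a group’s actual score and the score its demographics would predict. Here’s a minimal sketch of that actual-minus-predicted logic using invented data and a simple linear model—this is an illustration of the general idea, not the council’s actual model, variables, or results:

```python
import numpy as np

# Hypothetical illustration: predict scores from demographic controls,
# then take actual minus predicted. All numbers here are invented.
rng = np.random.default_rng(0)

# Fake demographic controls for 100 schools (e.g., poverty rate, ELL share)
X = rng.uniform(0, 1, size=(100, 2))
# Fake actual scale scores that depend on demographics plus noise
actual = 250 - 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 5, 100)

# Fit a linear model by least squares to get demographically predicted scores
A = np.column_stack([np.ones(len(X)), X])  # add an intercept column
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
predicted = A @ coef

# "District effect" = actual minus predicted; positive values mean a
# school (or district) is beating its demographic expectations.
district_effect = actual - predicted
print(district_effect.mean())  # averages near zero by construction
```

A positive residual under this setup is what the report calls “educational torque”: performance above what poverty and other barriers alone would predict.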
The results are clearly in favor of large city schools. Their effects were larger than expected for every NAEP administration in the study’s timeframe with the exception of eighth grade reading in 2011 and 2013. AOS did show significant gains in district effects between 2009 and 2019. But LCPS boasted district effects that, when compared to AOS, were 1.8 times greater in fourth grade reading, 5.6 times greater in eighth grade reading, 2.5 times greater in fourth grade math, and 3.6 times greater in eighth grade math. These large effects are likely what’s helped narrow the gap in unadjusted scale scores between the two groups.
The council also examined data from the Trial Urban District Assessment (TUDA), a voluntary initiative that over-samples students in participating NAEP districts to obtain district-level estimates of reading and math performance. These data allowed researchers to examine city-specific results by looking at the district effect of TUDA participants. Charter schools are included in these findings when TUDA samples incorporated them, but they were excluded for districts where charters are independent and therefore not counted in the district’s scores. The council examined the 2019 fourth grade results of twenty-seven jurisdictions and found that seventeen showed statistically significant positive effects in math and fifteen did so in reading. There were only twenty-six jurisdictions in the eighth-grade analysis (student questionnaire data wasn’t collected in Denver, so adjusted NAEP scores weren’t available), but the results were similar. In eighth grade math, fifteen locales had positive district effects compared to nine cities with those effects in reading.
Five jurisdictions registered positive effects in all four test areas: Atlanta, Boston, Hillsborough County, Miami-Dade County, and Chicago. Another nine, including Cleveland, demonstrated positives in three of the four areas. Some locales are also exhibiting impressive growth over time. In fourth grade math, for instance, there are three places that went from negative impacts in 2009 to positives in 2019: Chicago, Cleveland, and the District of Columbia.
To better understand how jurisdictions improved, the council visited six that showed substantial progress between 2009 and 2019—Boston, Chicago, Dallas, the District of Columbia, Miami-Dade County, and San Diego. These places took different approaches to reform, but shared several common features such as strong and stable leadership, accountability and collaboration, intentional support for struggling schools and students, and community investment and engagement. For instance, Dallas implemented an initiative called Accelerating Campus Excellence, which identified historically failing schools and provided them with prescriptive and data-driven instructional practices, schoolwide systems for social and emotional learning, extended learning time, and classroom upgrades. Miami-Dade, meanwhile, paired its support for struggling schools with school choice initiatives. And while the report doesn’t spend much time discussing school choice, it’s worth noting that a vibrant school choice sector is something many of these fast-improving places have in common.
Overall, these data should be encouraging for big-city reformers. They show that urban public schools are outperforming expectations on NAEP, and that they “seem to be doing a better job than other schools at dampening the effects of poverty, English language proficiency, and other factors that often limit student outcomes.” Several TUDA districts, such as Cleveland, Chicago, and the District of Columbia, showed impressive improvement between 2009 and 2019. And though the council doesn’t explicitly mention school choice as a factor, many of the jurisdictions they identified as models for improvement have vibrant school choice sectors. Obviously, there is no silver bullet. But the most successful cities appear to have several strategies in common, and these strategies should definitely be part of any reform efforts going forward.
SOURCE: Michael Casserly, Ray Hart, Amanda Corcoran, Moses Palacios, Renata Lyons, Eric Vignola, Ricki Price-Baugh, Robin Hall, and Denise Walston, “Mirrors or Windows: How Well Do Large City Public Schools Overcome The Effects Of Poverty And Other Barriers?” Council of the Great City Schools (June 2021).