A new report from Portland State University sociologist Dara Shifrer digs into the value-added data of thousands of teachers who switched schools and concludes that value-added measures reflect the socio-economic status of students and thus cannot be used to accurately assess teachers or their ability. She further argues that poor-performing schools can drag down the performance of previously high-flying teachers under the mass of students’ out-of-school realities. But the context provided by this analysis does not seem to support the full weight of those conclusions.
Shifrer’s study uses data on nearly 4,500 teachers in an unnamed large urban school district where the majority of students are non-White. The teachers are those in core subjects (math, reading, language arts, science, and social studies) in grades three through eight between 2007–08 and 2012–13. Teacher data comes from the district and from EVAAS, and student data comes from the district and the state education agency, aggregated at the school level. Schools are classified as high- or low-performing based on average test scores on Stanford assessments administered to third through eighth graders in math, reading, and language arts, and to fourth through eighth graders in science and social studies.
Using 2007–08 data as a baseline, teachers are identified as working in either a high- or a low-performing school, a designation determined solely by whether a school’s average test scores fell above or below the median. Teachers fall into three categories: those who stay in the same type of school, those who switch school type in any of the three subsequent school years, and those who switch school type in either of the last two school years of the study. Teachers who switched schools more than once—in any direction—were excluded from the study.
Shifrer slices and dices the data several ways, but her headline finding is that a teacher’s value-added scores were higher in high-performing schools and lower in low-performing schools. If teachers moved from a high- to a low-performing school, their value-added scores instantly fell and never recovered. If they moved from a low- to a high-performing school, their value-added scores rose and stayed up. This tracks with previous research showing that teachers in low-poverty schools (which are generally higher performing) tend to have slightly higher value-added scores than those in high-poverty schools.
However, neither of Shifrer’s conclusions—that value-added measures are inaccurate depictions of teacher quality and that poor-performing schools are somehow “teacher proof” due to the number of low-income and minority students—is supported by these findings. Rigorous previous research has shown that robust value-added models tracked over several years can accurately determine the influence of teachers on student growth, with socio-economic status and other outside factors controlled for. So what gives this time? It is, indeed, a matter of context.
“Teacher quality” isn’t something that you carry around in your pocket and deploy whenever needed. If a teacher finds herself in front of a classroom whose students are a good match for her skillset, it stands to reason that she would do well. The reverse is an equally reasonable assumption. We know that there are schools and teachers across the country that have boosted scores and outcomes for students living in poverty. And while the “special sauce” may vary from building to building and teacher to teacher, it is abundantly clear that poor students can grow and achieve at a high level when fully supported and truly taught by the best. Reading retention policies for the youngest students who are behind in reading and better training for their teachers are part of that essential support. We have no idea what supports are available to students or teachers in the unnamed school district under study here, but value-added scores that refuse to budge suggest that none of them is adequate.
Although this report wants to “contextualize educational disparities” and to discredit teacher evaluation schemes utilizing value-added data, in the end all it really seems to do is reiterate the corrosive effect of factors that have already led to generations of students receiving unacceptable levels of school and teacher quality, and to reinforce the fact that highly effective teachers are still not working where they are needed the most. And the fact that teacher quality is context-specific doesn’t mean there’s no point in evaluating teachers in the context in which they happen to teach. Any teacher evaluation plan that results in a better match between classroom context and teacher ability has performed one of its intended functions: a winning formula for both adults and kids.
SOURCE: Dara Shifrer, “Contextualizing Educational Disparities and the Evaluation of Teacher Quality,” Social Problems (November 2020).