Relaxing licensure requirements for new teachers is one of many proposals being floated in order to combat teacher shortages and diversify the pipeline. It’s true that licensure can be a labyrinthine process, and parts of it may be ripe for reform, but policymakers must take care not to throw out what’s working, even under duress. A new paper from the American Institutes for Research (AIR) suggests that subject matter testing, at least as practiced in Massachusetts, is one of those non-negotiable parts of the process, accurately predicting a novice teacher’s ability to deliver quality education to students.
The Massachusetts Tests for Educator Licensure (MTELs) have been administered to prospective teachers in the Bay State since 1998. To become a prekindergarten–12 teacher, applicants are required to pass at least three MTELs. These include two Communication and Literacy Skills Tests (CLST)—one in reading, one in writing—and at least one additional academic subject test. The academic subject tests generally cover broad grade bands but contain weighted subareas corresponding to specific content areas; the weights determine the number of questions in each subarea and how scores are aggregated. Candidates pay approximately $110–$140 per test and can retake any exam they fail an unlimited number of times. Currently, there are thirty-six MTELs aligned to state regulations governing subject matter knowledge expectations for teachers and to current curriculum frameworks. The state also convenes a bias review committee to make sure that test content and language don’t disadvantage certain populations of teacher candidates.
The analysts examine more than 150,000 MTEL scores from 1998 to 2019, stopping before pandemic-era changes to licensure were enacted. They combine these data with employment records, classroom assignment records, and performance evaluations from the state’s Education Personnel Information Management System (EPIMS). For teachers of grades 4–8 between 2012 and 2019, the data allow licensure test scores to be linked to their students’ achievement in math and ELA, so that two effectiveness measures—value-added and summative student achievement—can be connected to particular teachers.
Consistent with research in other states, the analysts find that teachers’ MTEL scores are related to both their value-added contributions to student growth and their students’ summative performance. In general, the higher the score on the licensure test, the more effective the teacher is at driving up student scores on MCAS, PARCC, and annual end-of-grade tests. For math and ELA teachers in grades 4–8, a one-standard-deviation increase in average MTEL scores is associated with an increase of about 0.005–0.014 standard deviations in students’ test performance. These effects are small but statistically significant, and they are stronger in math than in ELA.
The analysts do not issue recommendations, but the detailed picture provided seems valuable for policymakers in Massachusetts—and beyond. The MTEL testing structure appears adequate to the task of screening potential teachers, and any changes contemplated in pursuit of a wider pipeline of future educators should hold the line on testing subject matter knowledge, cut scores, and the like. But that doesn’t mean that other changes to the complex process—such as reducing the cost of tests or better supporting candidates struggling to pass these exams—can’t be part of a solution to get more teachers into classrooms or to diversify the pool of candidates.
SOURCE: James Cowan et al., “Assessing Licensure Test Performance and Predictive Validity for Different Teacher Subgroups,” American Educational Research Journal (December 2023).