A new study by Dan Goldhaber and colleagues examines whether a teacher candidate exam (“edTPA”) predicts both the likelihood of employment in the teacher workforce and teacher effectiveness, as measured by value-added models.
edTPA is a performance-based assessment developed by Linda Darling-Hammond and other researchers at Stanford. It is administered to would-be teachers during their student-teaching experience. Like the National Boards, it’s a portfolio-based assessment that includes between three and five videotaped lessons from candidates, as well as lesson plans, student work samples, evidence of student learning, and reflective commentaries written by the candidates. The test assesses three areas (planning, instruction, and assessment); it includes fifteen scoring rubrics, each scored on a five-point scale, for a maximum summative score of seventy-five; and it comprises assessments for twenty-seven different teaching fields, such as early childhood, secondary science, and special education. At present, edTPA is used in six hundred teacher education programs in forty states, and passing it is a licensure requirement in seven of them (each state sets its own passing score). Candidates pay $300 for the exam and can resubmit any failed tasks.
The study’s dataset comprises roughly 2,300 teacher candidates in Washington State teacher education programs who took edTPA in the 2013–14 school year. (Scores, for the most part, reflect candidates’ first attempt.) Test data are linked to other licensure data, and for the subset of teachers who enter the workforce and teach math or reading in grades 4–8, the analysts also include their students’ test scores.
Key findings: Teachers who perform better on edTPA are more likely to be employed in Washington State’s public schools the following year. Specifically, candidates who pass edTPA at the cut score are 15.2 percentage points more likely to enter the public teaching workforce than candidates who took the same test but failed it at the cut score. The authors also find that candidates who pass at the cut point are more effective at reading instruction: students assigned to those teachers score a quarter of a standard deviation higher than students assigned to candidates who failed edTPA, all else being equal. Yet the test does not predict teacher effectiveness in math. The analysts hypothesize that its content leans heavily on candidates’ writing ability, which is more plausibly related to an educator’s capacity to teach reading than math.
edTPA is being quickly adopted by teacher preparation programs across the country despite scarce research linking it to outcomes. In that regard, these results are intriguing and warrant continued study. What’s less clear is whether the test, though subject-specific, prioritizes and measures content knowledge in addition to pedagogical knowledge. We know that both are important, yet teachers tend to get more of the latter in their preparation programs. If we need both, candidates should be suitably tested on both.
SOURCE: Dan Goldhaber, James Cowan, and Roddy Theobald, "Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA," CALDER (May 2016).