For those of you following the interesting and ever-changing world of educator evaluations, a few recent happenings may be worth a look.
The Delaware Department of Education recently published a report written by its internal Teacher and Leader Effectiveness Unit on the implementation of its revised educator-evaluation system, DPAS-II. As part of the state’s Race to the Top grant, the DDE incorporated “robust measures of student achievement” into its existing system. That model had been structured around four components of effective teaching measured through classroom observations, and it had produced almost no variation among the state’s educators (in the new system, those observations still make up 80 percent of the final rating).
Most components of DPAS-II get a binary rating, but a new component, based on multiple measures of student growth, is scored “Exceeds,” “Satisfactory,” or “Unsatisfactory.” The summative evaluation combines the components, netting a rating of “Highly Effective,” “Effective,” “Needs Improvement,” or “Ineffective.”
The state’s educators were divided into three groups based on the availability of student-performance measures; these measures include state tests, external and internal assessments in subjects outside of math/reading, and “growth goals” based on professional standards and position responsibilities.
Importantly, those teachers whose scores were determined (at least in part) on the basis of empirical measures of student growth had more score variation (54 percent receiving “Exceeds”) than those assessed via growth goals based on professional standards (69 percent receiving “Exceeds”). Moreover, the distribution of scores associated with student-performance measures varied widely among districts, making inter-district comparisons difficult.
Changes to the system may be afoot—and not necessarily those that would solve these problems. Teachers may be provided advance notice of classroom observations, and peer evaluations may be allowed. A smarter approach would be to address observations, which continue to rate almost all teachers as satisfactory. (Along those lines, check out TNTP’s smart new report, “Fixing Classroom Observations.”)
Just as Delaware was releasing its report, Fordham hosted a panel discussion, Traversing the Teacher-Evaluation Terrain, with four experts to discuss developments in the world of educator-evaluation reform: Sandi Jacobs of NCTQ, Alice Johnson Cain of Teach Plus, Chet Linton of the School Improvement Network, and Rob Weil of the AFT.
NCTQ recently found that only 10 states lack clear policies on including student achievement in teacher evaluations. Thirty-six states require student achievement to be a “significant” factor; 12 have a single statewide system; about half provide an optional model or guidelines for district-level tailoring; and the remaining states have a “presumptive model” (though districts can petition the state for approval to use a different version). Among these systems, there is great variety in how numerous issues are handled, including observations, student/parent/peer surveys, how results influence tenure and/or compensation decisions, and so on.
Research by the School Improvement Network found that within states, there’s a lack of understanding about the degree of flexibility afforded to districts. Possible factors include vague statutory language, dense guidance documents, and failure by LEAs to thoroughly review state policy.
The AFT’s Weil expressed strong concern that the goals of evaluation reform—improving teacher practice and student learning—have gotten lost in the technicalities of developing algorithms and rubrics and in the speed with which these systems are being implemented. Weil noted that while states and districts continue to refine their systems, in most cases consequences tied to these systems, including the dismissal of low-performing teachers, have been in place since day one. This is an issue, agreed Johnson Cain of Teach Plus, but inaction, too, has consequences, namely for students who find themselves in the classrooms of ineffective teachers.
The panel discussion and Delaware’s study surface the same high-level issue: yes, we’ve taken care of policy in lots of places, but implementation is the major challenge. We’re, of course, seeing the same thing in the world of Common Core implementation.
I wonder whether, 20 years from now, we’ll marvel at the huge statutory and regulatory changes that happened during this era or mourn that implementation went so sideways.
Or both.