I just heaved a big sigh reading Jay Mathews's headline today: "Merit Pay Could Ruin Teacher Teamwork." As a former evaluator of a Teacher Incentive Fund state program, I spent quite a bit of time researching performance-based pay programs (PBPs), including how these programs affect educators in urban school districts. One of the biggest problems, as I see it, is not that these programs "ruin teacher teamwork," but that their payout structures cannot be accurately and succinctly explained to teachers and administrators. And if you don't explain them well, confusion breeds mistrust, and the take-away message is typically Mathews's headline.
Take, for instance, the Teacher Advancement Program, or TAP, one of the more popular PBPs. It's primarily a data-driven professional development program. Research on TAP is still preliminary, but promising. However, try to explain the TAP payout plan to teachers and you'll encounter many a blank stare. That's because the TAP payout, for tested areas at least, comprises three separate elements: school-wide performance (20%), classroom-level performance (30%), and demonstration of skills, knowledge, and responsibilities (50%). Regarding the first, there are typically five levels of school-wide performance, each defined by the number of standard errors above or below the average school gain in the state (or a representative sample). Locales participating in TAP decide what percentage of the maximum school-level payout they'll distribute at each level of performance (e.g., a level 4--one standard error above the average school gain in the state--might mean a 75% payout; a level 3, 50%).
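To make the arithmetic concrete, here is a minimal sketch of how such a three-part award might be combined. The 20/30/50 weights come from the description above; the level-to-percentage table, the dollar figures, and the function itself are hypothetical illustrations, not TAP's actual method (which, as noted, varies by state and district).

```python
# Hypothetical sketch of a TAP-style payout calculation.
# The component weights (20/30/50) follow the description above;
# everything else here is an invented example.

# School-wide performance: level (1-5) -> fraction of the maximum
# school-level payout. These fractions are assumptions; each locale
# sets its own.
SCHOOL_LEVEL_PAYOUT = {1: 0.0, 2: 0.25, 3: 0.50, 4: 0.75, 5: 1.0}

def tap_payout(max_award, school_level, classroom_fraction, skr_fraction):
    """Combine the three weighted components of a TAP-style award.

    max_award          -- total bonus available to this teacher
    school_level       -- school-wide performance level, 1..5
    classroom_fraction -- fraction of the classroom component earned (0..1)
    skr_fraction       -- fraction of the skills/knowledge/responsibilities
                          component earned (0..1)
    """
    school_part = 0.20 * max_award * SCHOOL_LEVEL_PAYOUT[school_level]
    classroom_part = 0.30 * max_award * classroom_fraction
    skr_part = 0.50 * max_award * skr_fraction
    return school_part + classroom_part + skr_part

# A teacher with a $2,000 maximum award in a level-4 school, full
# classroom credit, and 80% of the rubric-based component:
# 0.20*2000*0.75 + 0.30*2000*1.0 + 0.50*2000*0.8 = 300 + 600 + 800 = 1700
print(tap_payout(2000, 4, 1.0, 0.8))
```

Even this toy version takes a table, a docstring, and a worked example to explain, which is exactly the communication burden the real, far more elaborate formulas impose.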
Have I lost you yet? I hope not, because I haven't yet gotten to how classroom-level performance is assessed. Suffice it to say that payouts are calculated differently depending on whether a teacher teaches a tested or a non-tested subject, and they can also be affected by the number of other teachers in one's particular payout "pool." The payout for the third piece, teacher skills and knowledge, is based on the summary scores teachers receive on an instructional rubric. Particular scores equate to higher or lower payouts; the scoring bars for master and mentor lead teachers, however, are set higher.
To be honest, I've barely scratched the surface of TAP's payout calculations (and the particular methods differ by state and district). I say all of this simply to point out that the more politically feasible PBPs are complicated in terms of how monetary rewards are calculated (and ultimately distributed). Done well, they reward both teacher teamwork AND individual performance. Moreover, qualitative data indicate that teachers in these programs appreciate this multi-pronged approach.
The devil's in the communication details, though. Teachers get suspicious when their superiors can't explain with complete transparency how these rewards will be calculated. But neither an administrator nor a statistician alone is best equipped to explain these methods to teachers (and the combination is hard to come by). Truth be known, the methods are imperfect (that's a whole other post). But they're also the best we've got. Given all of the debate about what should and shouldn't constitute a teacher pay package, PBPs deserve serious attention and study. Let's just hope that those studying and implementing them learn to explain to their would-be recipients how they actually work.