The NCES, the NIEER, and government-funded advocacy
Newark unfriends reform?
Managing in a fishbowl
Evaluating Teachers with Classroom Observations: Lessons Learned in Four Districts
The State of Charter School Authorizers 2013
The NCES, the NIEER, and government-funded advocacy
Something unsavory is underway at the Department of Education and in the world of preschool zealotry. They seem to be merging—and in so doing, they risk the integrity of our education-data system.
The late Daniel Patrick Moynihan, my longtime mentor, was renowned for declaring (among other things), “You’re entitled to your own opinion, but you’re not entitled to your own facts.”
Well, in the matter of preschool statistics, it appears you’re not going to be able to tell the difference.
Worse, you’re going to begin to wonder whether you can trust the National Center for Education Statistics (NCES) to obtain its data from impartial sources of facts rather than hotbeds of passionate advocacy.
This was an issue a dozen years back when economist Michael Podgursky (and others) pointed out that NCES was getting its teacher-salary data from the unions—and publishing those numbers as reliable facts, which they may or may not have been. (Podgursky noted, for example, that they certainly didn’t take account of many noncash benefits that teachers also derive from their employment, such as shorter work years.)
NCES has since gathered its own data on teacher compensation (or relied on trustworthy government agencies, such as the Bureau of Labor Statistics), as it should.
But in the preschool realm, NCES has done something worse than it did with the salary data. It has not only outsourced the number gathering to a prominent interest group in the field but also allowed that interest group to add its own spin and then issued the results in the guise of a government statistical publication. And along the way, it subsidized that group’s ongoing advocacy work.
Check out The State of Preschool 2013. Issued the other day with the logos of IES (Institute of Education Sciences) and NCES, it was compiled and written by Steve Barnett and his colleagues at the National Institute for Early Education Research (NIEER). If you’ve been living in a cave and don’t know what that entity is, read their “vision statement” and ask yourself whether this looks like a neutral source of factual data.
Dig a little deeper and you will learn that a year ago, NCES entered into a sole-source contract with NIEER to “provide the annual ‘State of Preschool’ data collection, data sets, and reports developed by NIEER.” The explanation for this questionable act was that the federal agency had no other way to obtain information on state-funded preschool programs that operate outside the public-school orbit.
Why were these data suddenly important for NCES? Because of “the President’s call to expand preschool education access.” And turning this project over to NIEER, not surprisingly, garnered the ardent approval of sundry other advocacy groups, such as the New America Foundation, that share the president’s enthusiasm for expanding publicly supported preschool programs.
Fishy, yes—especially for a statistical agency that is supposed to be neutral in policy matters. Note that the same contract pays for NIEER to issue its own “preschool yearbook,” which is practically indistinguishable from the federal report also authored by Dr. Barnett and his team, save that the “yearbook” contains lots more spin and advocacy than the NCES publication—spin and advocacy underwritten by your tax dollar and mine, thanks to NCES.
And guess who is the featured guest at next week’s briefing on the newest NIEER yearbook (a “coincidental” couple of weeks after the federal version)? None other than our friend the U.S. secretary of education, the Honorable Arne Duncan.
The 2013 NIEER yearbook is full of exhortation and Chicken Little–style warnings about the paucity of current preschool funding and so-called “quality” (mostly based on inputs and services, not school-readiness outcomes). It will, in other words, support and advance the president’s policy agenda. And it will do so with your tax dollar and mine, thanks to NCES (and, presumably, hierarchs at the Institute of Education Sciences and elsewhere at ED).
The NCES version of this yearbook isn’t quite as blatant. It’s mainly numbers—numbers chosen by NIEER, to be sure, and none of them having to do with program results, just how many three- and four-year-old kids were enrolled in state-sponsored preschool programs (as defined by NIEER) in 2012–13, how much was spent on such programs, and how those figures differed from the previous year.
But the familiar NIEER spin is there, mainly in the opening paragraph of the introduction:
Participation in preschool programs has been associated with a number of positive outcomes. Evaluating data from the 40-year follow-up to the High/Scope Perry Preschool Program Study, Belfield and his colleagues show how preschool participation by low income children relates to significant economic benefits both to the children by the time they are in their 40s and to society more generally (Belfield et al. 2006).1 Summarizing over 160 studies conducted from 1960 through 2000, Camilli et al. found that preschool had a range of shorter and longer term positive relationships to cognitive gains, progression through school, and social-emotional development (Camilli et al. 2010).
No, there’s nothing new there. It’s what the preschool “juggernaut” has been asserting for years. Doesn’t matter whether Perry Preschool data—based on high-cost “boutique” programs for exceptionally disadvantaged toddlers—have any bearing on what passes for statewide preschool programs today. Doesn’t say anything about how many of the latter are skimpy offerings for four-year-olds (including many whose families don’t need state subsidies) with little or no curriculum and scant evidence of learning outcomes. Doesn’t say anything about whether whatever short-term gains they manage to produce are sustained and amplified by the elementary schools these youngsters then enter.
But it’s spin, remember, spin intended to advance a policy agenda: the juggernaut’s agenda and the president’s. It’s well and good for advocacy groups to advance agendas. But it’s questionable when the government pays for that advocacy. And it’s downright reprehensible when a federal statistical agency enables it—and then represents the fruits of it as its own.
In this brave new world of federal data, it seems, you’re entitled to your own facts, too.
A version of this article originally appeared on the Flypaper blog.
Newark unfriends reform?
After a rancorous mayoral race, the city of Newark has elected Ras Baraka—a decision sure to have repercussions for the city’s ambitious education-reform agenda. With perfect timing, the New Yorker this week published a clear-eyed long-form article on the history of these reforms, spearheaded by former mayor Cory Booker and governor Chris Christie and funded (in part) by Mark Zuckerberg of Facebook fame. All signs indicate that these reforms will soon go the way of MySpace.
Eric Hanushek, Paul Peterson, and Ludger Woessmann published an Education Next/PEPG study this week examining whether U.S. educational challenges are concentrated among students from less-educated families. Turns out, they aren’t. The authors found that, regardless of parent education level, both advantaged and disadvantaged U.S. students earn low scores compared to similarly situated international peers. In fact, advantaged U.S. students (those with college-educated parents) might be doing comparatively worse: their math proficiency rates ranked twenty-eighth out of thirty-four OECD countries, while disadvantaged U.S. students (those with lower levels of parental education) fell in at twentieth. Stay tuned for next week’s Gadfly, where we’ll review this apologist-busting study in greater detail.
Earlier this week, the New Republic ran Professor Jeffrey Aaron Snyder’s strong criticism of KIPP’s character-education model, in which he argued that the charter group’s method of teaching attributes like “grit” does not actually accomplish the job—and is amoral, to boot. Snyder points out that while research on the “science of character” may be becoming increasingly cogent, there exists “no science of teaching character,” and he calls out KIPP’s “character growth card”—analogous to a report card—as being superficial in its measurements and, by avoiding measures of morality, missing the point almost entirely. KIPP deserves credit for paving the way in this nascent field, but their approach also warrants this kind of scrutiny.
Today, the National Assessment Governing Board—following its Grade 12 Nation’s Report Card release last week—unveiled its first-ever analysis of what NAEP data tell us about college readiness. (It had hoped to include an analysis of career readiness as well, but NAGB chairman Dave Driscoll told Curriculum Matters that the research on that front isn’t clear.) The preliminary estimates are that just 39 percent of students scored at or above the math cutoff score (163 out of 300), and 38 percent scored at or above the reading cutoff (302). White students came far closer to the math cutoff (scoring 162 on average), while black and Hispanic students struggled (scoring 132 and 141, respectively, on average).
Managing in a fishbowl
Mike and Nina Rees take on the federal charter-school bill that passed in the House last week, what traditional public schools can learn from charters, and the pros and cons of KIPP’s character-education model. Amber wades into teacher-evaluation research.
Amber's Research Minute
Evaluating Teachers with Classroom Observations: Lessons Learned in Four Districts by Grover J. Whitehurst, Matthew M. Chingos, and Katharine M. Lindquist (Washington, D.C.: Brown Center on Education Policy at Brookings, May 2014).
Expanding the Education Universe: An explanation of course choice by Michael Brickman
Michael Brickman, the Fordham Institute's national policy director, explains the benefits of course choice and the implications for students.
Evaluating Teachers with Classroom Observations: Lessons Learned in Four Districts
Everyone agrees that a good teacher makes all the difference in the world—but that’s where the agreement ends. This new report from Brookings adds to the body of research on how to identify good teachers, specifically examining teacher-evaluation systems in four moderately sized urban districts in an effort to suggest ways to improve them. Analysts linked individual student and teacher data—one to three years of them, from 2009 to 2012—and used two consecutive years of district-assigned evaluation scores alongside teachers’ value-added ratings. There were five key findings. First, only a small minority of the workforce (22 percent) can be evaluated using gains in test scores; the remaining educators, in nontested grades and subjects, are evaluated using classroom-observation scores (which account for 40–75 percent of their ratings), along with teacher-developed measures, school value added, and student feedback, among other things. Second, observation scores are fairly stable from year to year. Third, including school value added in teachers’ evaluations (not surprisingly) tends to drag down the scores of good teachers in bad schools and inflate the scores of weaker teachers in good schools. Fourth, teachers with initially high-performing kids receive higher observation scores, on average, than teachers with initially lower-performing kids; this finding holds when comparing observation scores of the same teacher at different points in time, meaning the result is probably not due to better teachers getting better kids. (What this means, again not surprisingly, is that it’s easier to teach a dynamite lesson that scores well against observation rubrics when your students are higher performing.) Fifth, observations conducted by outsiders are more predictive than those conducted by school principals (the MET study found this, too).
In the end, the researchers recommend that observation scores be adjusted for student demographics, much as many value-added scores are, and that school value added either be scrapped as a teacher-evaluation metric or have its weight reduced.
Grover J. Whitehurst, Matthew M. Chingos, and Katharine M. Lindquist, Evaluating Teachers with Classroom Observations: Lessons Learned in Four Districts (Washington, D.C.: Brown Center on Education Policy at Brookings, May 2014).
The State of Charter School Authorizers 2013
Every year, the National Association of Charter School Authorizers (NACSA) draws on survey data from half of the nation’s charter-school authorizers to assess the quality of their practices, outlining a set of twelve essential practices and scoring authorizers on their adherence to them. In this sixth edition, the results are mixed. Most practices are adopted by at least 80 percent of authorizers, but rates of adoption have decreased for seven practices since 2012. According to the report’s authors, an influx of small, new authorizing agencies dragged the numbers down: smaller authorizers (which tend to be local education agencies) scored lower on average than their larger counterparts. Some of the practices outlined by NACSA—such as having designated staff work on authorizing functions—inherently favor larger entities that can devote more resources to the job. However, the report also highlights the relative lack of explicit criteria for charter renewal, which any authorizer can adopt. Size matters, but small scale is no excuse for poor oversight.
National Association of Charter School Authorizers, The State of Charter School Authorizers 2013 (Chicago, IL: National Association of Charter School Authorizers, May 2014).
What kids are reading: The book-reading habits of students in American schools; Children, Teens, and Reading
How often are kids reading, and which books are they choosing? Two recent reports took a crack at finding the answers. Renaissance Learning released a comprehensive look at children’s reading habits, providing detailed information on over 9.8 million children and the 318 million books they read during the 2012–13 school year, split by grade and gender—and, in so doing, offering up hints about how the Common Core standards are already affecting the classroom. Aside from pop-culture winners like The Hunger Games (which topped the charts as the most-read book), a tellingly large number of semi-obscure works featured in the Common Core’s Appendix B (a list of text exemplars) are more popular now than they were prior to state adoption of the Core standards. The second report, from Common Sense Media, draws on data from large national studies to analyze variables that could influence a child’s reading habits. Unfortunately, dramatically fewer children read for pleasure today than in the past: nearly a third of seventeen-year-olds read for fun every day in 1984, but only a fifth did in 2012—and the share who never or hardly ever read for fun grew from one-tenth to over a quarter. If reading for pleasure is on the downswing among youngsters, what they read in school matters more than ever.
Renaissance Learning, What kids are reading: The book-reading habits of students in American schools (Wisconsin Rapids, WI: Renaissance Learning, May 2014).
Common Sense Media, Children, Teens, and Reading (San Francisco, CA: Common Sense Media, May 2014).