The State of Proficiency: How student proficiency rates vary across states, subjects, and grades between 2002 and 2010
The proficiency illusion remains
Four years ago, Fordham and the Northwest Evaluation Association (NWEA) teamed up to produce “The Proficiency Illusion,” a seminal analysis detailing the gaping discrepancies in proficiency cut scores across states, grades, and subjects. Last month, NWEA released a follow-up, adding nine states to its original analysis in math and eleven states in reading (bringing those totals to thirty-five and thirty-seven, respectively) and extending the analysis through 2010. The new results are just as striking as the old. In grade-eight math, for example, NWEA found a 52-percentile-point gap between the highest and lowest state cut scores. And individual states continue to make their tests much harder at some grade levels than at others—creating significant problems for AYP determinations, value-added teacher evaluations, and much else. Feel like digging in? Check out the interactive data gallery. Prepare to feel a little ill.
Sarah Durant and Michael Dahlin, “The State of Proficiency: How student proficiency rates vary across states, subjects, and grades between 2002 and 2010,” (Portland, OR: Northwest Evaluation Association, June 2011).
This report—a joint effort by RAND, Vanderbilt, and the National Center on Performance Incentives—drove the final nail into the coffin of New York City’s shaky and pricey School-Wide Performance Bonus Program. We learn from this analysis that Gotham’s foray into school-wide bonuses “did not improve student achievement at any grade level.” In fact, average math and ELA scores for participating elementary and middle schools were lower than those of the control group. (There were no effects on scores at the high school level.) To understand why, analysts queried participating teachers—ninety-two percent of whom said the program didn’t affect the way they did their jobs. That shouldn’t surprise anyone, since the bonuses amounted to only $1,500 after taxes, and were tied to higher test scores school-wide—something over which individual teachers have little control. Further, a third of teachers said they didn’t even understand the criteria for obtaining the bonus. Thorough and informative, this report should act as a warning bell for anyone looking to replicate Gotham’s poorly designed (and now defunct) program.
Julie A. Marsh, Matthew G. Springer, Daniel F. McCaffrey, Kun Yuan, Scott Epstein, Julia Koppich, Nidhi Kalra, Catherine DiMartino, and Art (Xiao) Peng, “A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses: Final Evaluation Report,” (Santa Monica, CA: RAND Corporation, 2011).
This longitudinal study out of NYU examines the connection between “home-learning environments” and school readiness by tracking a representative sample of 1,852 low-income children at ages one, two, three, and five. (The evaluation is based on things like the number of books read to the child and maternal responsiveness to the child’s requests.) There’s much to plumb here, but one takeaway emerges: Almost 70 percent of the low-income children with consistently strong home environments (ten percent of the total group) performed at or above the national averages for students from all socioeconomic backgrounds—demonstrating the home’s gap-closing potential. Unfortunately, none of the learning environments originally diagnosed as low in quality became literacy-rich by the time the children started prekindergarten, implying that some children are already falling behind (and staying behind) after their first year of life. Now if only we could figure out how to help more parents play the role of their child’s first teacher more effectively.
Eileen T. Rodriguez and Catherine S. Tamis-LeMonda, “Trajectories of the Home Learning Environment Across the First 5 Years: Associations With Children’s Vocabulary and Literacy Skills at Prekindergarten,” (New York, NY: New York University, July/August 2011).
Outsiders have envied, emulated, and damned D.C.’s famous teacher-evaluation system, IMPACT. But what is the insiders’ perspective? This report from Ed Sector delivers the answer. Author Susan Headden, a Pulitzer Prize-winning journalist, presents a thorough and balanced portrait of this revolutionary (but still emergent) system. She explains the core elements of IMPACT (the classroom observations, the instructional buckets against which teachers are measured, etc.) and weaves a narrative that effectively captures the experiences of (a sample of) observed teachers and “master educators” (the ones conducting the observations), as well as of principals, union leaders, and District staff responsible for developing the system. She notes a few red flags (IMPACT’s large performance bonuses are concentrated in already high-performing schools, for example) and details a few places where IMPACT could be improved, notably by doing more to help develop educators rather than simply reward or punish them. But progress is being made on that front. Based on our own interviews (below), we found that, overwhelmingly, teachers saw monumental improvements in professional development, and that the new system gave them specific, tangible ways to enhance instruction.
Susan Headden, “Inside IMPACT: D.C.’s Model Teacher Evaluation System,” (Washington, D.C.: Education Sector, June 2011).
Virtual schooling’s greatest power is that it creates the opportunity to reconsider what is feasible in K-12 education. Digital learning makes it possible to deliver expertise over great distances, permits instructors to specialize, allows schools to use staff in more targeted and cost-effective ways, and customizes the scope, sequence, and pacing of curriculum and instruction for individual children. Together, these capabilities ease the delivery of high-quality, high-impact instruction. But because it destandardizes and decentralizes educational delivery, digital education is far harder to bring under the yoke of the quality-control systems and metrics that have been put in place for traditional school structures.
To realize the potential gains in cost efficiency, customization, instructional quality, pupil engagement, and—ultimately—student learning that the digital age makes possible will require policymakers and practitioners to find new ways to monitor and police the quality of what is being delivered and learned. Yet absent the familiar panoply of credentials, staffing ratios, instructional hours, Carnegie units, and school days that now provide tangible assurance that a given school is “real” and legitimate, digital learning will struggle to find acceptance—and could be bent to the advantage of those who don’t place educational achievement at the top of their priorities.
Unfortunately, it is difficult today even to visualize, much less to craft, brand-new quality-control systems that adapt perfectly to the seismic shift that digital learning represents. The best that policymakers can do at present is to select among—or combine—three basic approaches, each with its own significant limitations: regulating the inputs that providers must employ, holding providers accountable for student outcomes, and trusting market forces to reward quality and winnow out failure.
These are not mutually exclusive options, but together they comprise the basic menu of choices for policing digital learning (or any other public function). The difficulty is that these approaches were devised for assessing conventional institutions, not the more fluid networks of providers and learners created by digital instruction. In the digital world—where new tools and technologies offer dramatic opportunities to rethink teaching and learning by disassembling a school, classroom, or course into its component parts, then delivering instruction in more customized ways—these quality-control approaches will no longer be a comfortable fit for providers. Any given approach to regulating inputs, basing accountability on outcomes, or trusting markets brings risks, imperfections, and unintended consequences. Though these negatives cannot be erased, the alternative—no quality control at all—is far worse. So we’re well advised to acknowledge the problems with available tools and mechanisms and then do our best to monitor, minimize, and combat them.
The first step is to create a relatively uncomplicated vendor-approval process that ensures that minimal fiduciary and academic standards are being met. Providers should have to document to a designated public entity that their books are clean and report basic metrics for the services they provide. For those providers that offer certain categories of services—especially the kind that directly impact student achievement—it’s reasonable to have a state review process that features some kind of authorization and renewal.
Second, as providers deliver their wares—and families choose among and students engage with them—it is essential that some entity collect various kinds of data on performance. That’s apt to be a state responsibility but could easily be delegated to any number of third-party monitors. But whether a state agency acts directly or relies on others, a wide array of data needs to be collected, gains measured and analyzed, and findings made public in transparent fashion. Just as important is to gather and disseminate information on consumer satisfaction and expert reviews of programs and providers.
Third, families need to acquire a vested interest in the cost-effectiveness of their new opportunities by being given control over some discrete portion of spending. This step is essential if parents are to approach schooling as more than a unitary service and to start thinking about the quality of particular services, and if education officials are to enjoy the encouragement and support they need to revisit and change deep-seated routines.
All three are needed, in various combinations. But don’t expect perfection. Each possible combination eases some concerns while posing new ones. Hence, given our scant experience with digital provision, it seems prudent to avoid sweeping national policies or requirements, at least for now.
The challenges involved in effecting these shifts are both familiar and new. In a sense, they are essentially the same challenges—to be addressed by the same tools—that educators and policymakers have wrestled with for decades. But in their current incarnation, they can be met only with a degree of granularity, agility, and precision that is new to the world of K–12 schooling.
A formidable task? Surely. But it is one that will ultimately determine whether the advent of digital learning revolutionizes American education or becomes just another layer of slate strapped to the roof of the nineteenth-century schoolhouse.
While traditional ed schools continue to defy efforts at reform and transparency, other innovative teacher-training programs are moving forward. Enter New York-based Relay Graduate School of Education as a prime specimen. There are no university campuses or lecture halls for Relay’s students, who spend most of their training in their own classrooms under the guidance of mentors. Degrees aren’t conferred based on GPA or class time. To complete Relay’s two-year program—which encompasses 60 “modules” connected to real-world issues, like pacing and discipline—candidates must demonstrate that their students have made at least one year of academic progress in their chosen subject. A fantastic evolution—but not one that is universally welcomed. Status-quo defenders have already lamented Relay’s alleged de-professionalization of teaching. It’s hard to believe, though, that novice teachers will receive less professional preparation as active participants in real K-12 classrooms than they would get in a distant university setting, half-listening to yet another lecture on Paulo Freire.
“Ed Schools’ Pedagogical Puzzle,” New York Times, July 21, 2011.
Enacted just two months ago, Tennessee’s new virtual-education measure is catching flak from Democrats and Republicans alike. At issue is the $5,387 in per-pupil funding earmarked for the virtual charters opened under the bill’s auspices. Critics assert that these charters siphon cash from district schools, leaving them bereft of resources. But a word of caution to these critics: Average per-pupil funding in Tennessee is about $7,900, according to the Census Bureau, so sending a student to a virtual charter actually saves roughly $2,500 in education dollars per kid ($7,900 minus $5,387). Virtual charters are a smart new way to leverage twenty-first-century technology (and save on building and busing costs to boot). This type of innovation may even be more important in economic downturns than it is in booms.
“‘Virtual school’ in Tennessee may drain taxpayer funds,” The Commercial Appeal, July 25, 2011.