Evaluation of the DC Opportunity Scholarship Program: Final Report
Patrick Wolf, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Matthew Carr
Institute of Education Sciences
June 2010
This is the final report of a four-year evaluation of the not-quite-dead-yet D.C. voucher program. If you recall, year three’s positive results were released last year right around the time the program lost federal funding. (Find year two results here.) Those positive results have largely dissipated—this final report finds that participation in the program had no statistically significant impact on overall reading or math scores after at least four years (some students have been in the program longer), though to be fair, the reading impact falls just short of significance. But a D.C. voucher did significantly improve a student’s chance of graduating: The mere offer of a voucher raised the probability of graduating by twelve percentage points, and actually using the voucher (i.e., taking it to a private school) raised it by twenty-one percentage points. Parents of voucher-receiving students were also significantly more satisfied with their private schools and thought them much safer than did parents of control-group students. For a rigorous gold-standard study—the type that usually finds “no effects” for anything—this is not too shabby. If only Congress had paid attention. Read the results here.
Howard S. Bloom, Saskia Levy Thompson, and Rebecca Unterman
MDRC
June 2010
The small-schools movement is a damaged brand, thanks to research showing that “smallness” is not enough when it comes to boosting achievement, especially for disadvantaged pupils. So it would seem that this study by MDRC, which finds positive effects in New York City’s “small schools of choice” (SSCs), is notable for saying otherwise. But, as the authors put it, these schools “are more than just small”—they were created through a rigorous application process, and they had to fulfill other criteria, such as serving traditionally disadvantaged communities. Even more important, however, is that they were created to replace roughly twenty large failing high schools that have been closed for chronic low performance since 2002, proving that closing failing schools and opening new ones in their place is possible on a large scale. (Indeed, these schools collectively serve about 45,000 students—roughly the size of the entire Houston high school population.) MDRC analysts tracked 21,000 NYC students who applied to a ninth-grade SSC lottery between 2005 and 2008; some got into a small school and some did not, thus creating a randomized sample (think lotteried-in, lotteried-out charter study design). The results were strong: SSCs increased the likelihood, year by year, of students being on track to graduate. For example, at the end of the second year at an SSC, students had on average accumulated 22 credits towards graduation, while non-SSC students had just 19. This translated, after four years, into a roughly 7 percentage point higher likelihood that an SSC student would graduate on time (in four years) than a non-SSC student. The Gates Foundation (which funded the study, as well as a big chunk of NYC’s small-schools initiative) and Joel Klein have both taken lots of flak for their enthusiasm for small schools. This study appears to be at least a partial vindication. Read it here.
Tom Ferrick, Jr. and Laura Horwitz
Philadelphia Research Initiative, The Pew Charitable Trusts
June 2010
Comparing parental satisfaction in district, charter, and Catholic schools in Philadelphia, this analysis examines the City of Brotherly Love’s K-12 school-choice landscape. The city has seen the same trends as many other urban areas: District and Catholic enrollment is declining, while charter enrollment is booming. How do parents feel about these trends? To find out, researchers polled about 800 parents and conducted focus groups with a subset of them; they also interviewed teachers, city administrators, and others, and visited a handful of schools in person. They found that charter and Catholic schools have higher parental satisfaction rates (95 percent, for both) than their district counterparts (77 percent). And, even more importantly, parents are not “philosophically wedded” to any one type: They think about schools individually, rather than as systems, and they don’t care what type a school is so long as it offers a safe, caring environment at little to no out-of-pocket cost. District-school frustration ran deep: Indeed, charter or Catholic school parents who had once sent their children to district schools were the most disillusioned about the latter, and even district school parents who were upbeat about their own child’s school were not enthusiastic about the system as a whole. The authors predict that enrollment trends will continue: District and Catholic schools should brace for continuing decline, while charters will expand. And unfortunately, charter schools may force Catholic schools out of the market: Though parents are largely satisfied with their Catholic schools, charters present an attractive no-cost option. Read the full report here.
To oppose “results-based accountability” in education is close to a taboo nowadays, a position so antithetical to the spirit of the age that few dare mention it. Let us, therefore, declare ourselves shocked and saddened that Harvard University, in so many ways a pacesetter in education, is embracing that very position.
Starting in September, courses in Harvard’s Faculty of Arts and Sciences (FAS) will no longer routinely require “final” exams. For most of Harvard’s existence, any professor wishing to forgo final exams needed formal approval from the entire faculty; at least since the 1940s, professors have needed only to submit a form to opt out of giving one. But come fall 2010, professors will need to file a specific request to opt in. Dean of undergraduate education Jay M. Harris is already predicting that the academic calendar could be trimmed by a day or two in response to the eased testing burden.
Moreover, general exams—requiring seniors to demonstrate mastery of the fundamental knowledge of their major—are given in fewer and fewer departments. Even Harvard’s new General Education courses will abjure finals. We are left wondering: Without exams to prove it, how can students be sure that they are “generally educated” when they graduate? How can the institution itself be sure? Or doesn’t it care?
Some will say that other student work products—term papers, especially, but increasingly multimedia projects, too—are better gauges of learning than cumulative exams. Associate Dean Stephanie H. Kenen recently stated that “The literature on learning shows that hands-on activities can help some students learn and integrate the material better.”
In reality, however, the decline of testing at Harvard has little to do with any “literature on learning.” When we attended college there, more than four decades apart, some of our most fruitful learning experiences occurred in preparing for, and actually taking, final exams. They forced us to sharpen our thoughts and solidify our knowledge, whether by connecting the dots between Andy Warhol and Joseph Stalin for Louis Menand in 2006, or making sense of a year’s worth of American social history per Oscar Handlin in 1964. Term papers were essential, too—let us make no mistake. But they were easier to fudge with obscure research, borrowed insights, and artful prose. It was finals that forced us to think, to synthesize, to study, and to learn.
What’s really happening, we sense, is that Harvard is yielding to education’s most primitive temptation: lowering standards and waiving measurements for the sake of convenience. It certainly isn’t the only university to succumb, but given Harvard’s reputation as a trendsetter, we should expect better. Just imagine: Students will be delighted to forgo finals and instructors will be thrilled not to have to create or grade them. Everybody finishes the semester earlier. (The last few weeks of class don’t really count when that material won’t be tested!) Yet Harvard’s leaders may eventually have to acknowledge that, with fewer exams and test results, they will know less and less about what students are or are not learning within their hallowed gates.
Not so long ago, Harvard was striding toward transparency and accountability. In 2006, under the leadership of interim President Derek Bok (no slouch himself as an education reformer and critic), the university participated in the Collegiate Learning Assessment (CLA). The CLA is intended to measure the kinds of skills and thinking at the core of an “arts and sciences” curriculum and, by comparing the scores of seniors and freshmen, to gauge a university’s “value-added.” One of us, a senior at the time, even volunteered to participate. It was a rare chance to put the old joke to the test: “Why is Harvard such a great repository of knowledge? Because students enter with so much and leave with so little.”
Sadly, Harvard’s CLA results were never shared with participants, as had been promised, much less with the outside world. The flickering light of results-based accountability at Harvard was thus dimmed—by whom and why, we can only guess.*
Granted, testing is complicated. How to assess a semester’s worth of learning in 180 minutes? How to probe what one has learned during three years as a history major? How, simultaneously, to measure the accumulation of knowledge and the development of analytic skills and effective expression? How to distill course themes into challenging essay questions or problem sets, and how to grade them fairly?
But avoiding tough methodological challenges isn’t in Harvard’s mission statement. In matters of education policy—including many earlier rounds of assessment, such as the SAT and Advanced Placement exams—Harvard has long been a pioneer. Other universities look to it for guidance. Why not with final-exam-style assessments, too? Harvard’s Graduate School of Education is full of testing experts and its Psychology Department is stacked with heavyweights. Its mathematicians, computer scientists, and statisticians are competent to crunch the numbers, sample the populations, etc. Couldn’t they help the university develop suitable guidelines, templates, and prototypes for measuring what its students learn? Would it be too much to ask them to actually develop better tests? Maybe even share them with the world?
Harvard might fruitfully take a cue from K-12 education. Here we’re seeing slow but steady progress towards intelligent assessment and fair accountability. The primary-secondary education community is approaching consensus on content standards for math and English language arts. Consortia of states are undertaking the development of “next-generation” assessment systems. The Obama Administration has taken stock of No Child Left Behind and offered a new blueprint for giving schools and districts more flexibility to reach higher performance standards. None of this has been easy, and countless political headaches would have been avoided by simply jettisoning results-based accountability. Plenty of teachers would have been pleased, too. But most K-12 policymakers know better: Were it not for the dreaded tests, we would not be able to learn from our educational successes, nor direct attention to our most persistent failures.
Harvard doubtless assumes that no formal measures of learning are needed to demonstrate its educational value to students. Just peek inside Lamont Library late on a weeknight and behold the heaps of books, index cards, and coffee mugs. Listen to the keyboard clatter of great term papers in the making. Well, we studied in Lamont—one of us quite recently—and we have a secret to share: There is a difference between effort and learning, between putting in the time and coming out with something worthwhile. For every undergraduate writing the next Great American Novel, another student is frustrated, confused, and stressed by ambiguous expectations from instructors. Harvard’s struggles with mediocre instruction would be greatly aided by better tests aligned with clearer expectations—not by giving up on exams altogether.
Harvard is blessed with talented students—it can pick and choose among America’s finest—and that doubtless encourages it to pay scant attention to how much they actually learn during their undergraduate years in Cambridge. University leaders also understand that public accountability can be humbling. Arrogance and pandering are more convenient. They just don’t get us any closer to veritas.
*The authors contacted the office of the President last Friday to corroborate this account of the CLA at Harvard. As of Wednesday, July 14, officials had neither confirmed nor denied it.
By Chester E. Finn, Jr. and Mickey Muldoon
Finn and Muldoon both graduated from Harvard, classes of 1965 and 2007, respectively.
In this video, Mike Petrilli interviews Finn and Muldoon about the future of higher ed testing and accountability.
Minnesota, birthplace of charter schools, may soon claim another frontier: becoming the first state to allow a teachers' union to be a charter authorizer. Antithetical, you say? One of the hallmarks of most charter schools is their lack of unionization, which allows more flexibility to hire, fire, and assign staff, and to structure the school day differently. Furthermore, one must wonder how a union will cope with shutting down one of its own schools if it’s not up to par, staffed as it will be with its own union members. But the Minneapolis Federation of Teachers thinks it can handle these situations. Indeed, it chose to apply for the sponsorship role even though the state last year raised the qualifications for charter school sponsorship, and a bunch of districts and nonprofits gave up authorizership in response. MFT president Lynn Nordgren says the union wants to “get out from under [the] bureaucracies”—the pile-up of “programs and rules and systems”—that “weigh down” schooling. Fair enough. Minnesota has a history of “teacher-owned schools”; why not union-owned schools? Looks like a teachable moment to us.
"Mpls. teachers' union wants power to authorize new charter schools," by Tom Weber, Minnesota Public Radio, July 9, 2010
If we were to list the lessons learned from charter schools, it would probably look like this: By breaking down bureaucratic and procedural barriers, these schools have opened the education market to innovators, fresh thinking, and experimentation. But simply unlocking the gates didn’t necessarily produce quality—good rules are different from no rules. That’s basically the thinking behind economist Paul (son of Roy) Romer’s “charter cities” movement, which is, in fact, inspired in part by his father’s work in education and by charter schools more generally. Traditional development theory says that freedom and prosperity go hand-in-hand. Romer believes that smart business regulation and a business-friendly culture are what really attract investment, rather than simply freeing a Third World economy from, say, authoritarian control. Thus his plan: If you give a swath of land in a developing country to an “enlightened” (i.e., First World) government, which in turn establishes smart business regulation and a culture of investment, not only will investors and ideas flock to the site, but the areas around it—and eventually, even, the whole country—will be inspired to replicate its model governance structure. You might call it “neo-colonial” development. Romer points to Hong Kong, whose British business climate inspired economic reform on the mainland, as his model. We’d take his analogy one step further: The best charter schools, as we learned from David Whitman, are neo-paternalistic, seeking to immerse poor youngsters in a middle-class, achievement-oriented milieu. Culture matters, too, whether in schooling or economic development.
“The Politically Incorrect Guide to Ending Poverty,” by Sebastian Mallaby, The Atlantic, July/August 2010