A Timely IDEA: Rethinking Federal Education Programs for Children with Disabilities
Center on Education Policy
January 2002
This 44-pager from Jack Jennings's Center on Education Policy contains three papers examining the federal special ed program and recommending that Congress make changes in it. While the three authors (Thomas Hehir, Lawrence Gloeckler, Margaret McLaughlin) don't entirely agree on how extensive these changes should be, they all point in the same direction: more focus on academic results, less burdensome paperwork, a more rational (and generous) funding system. We immodestly note that this is awfully similar to the policy territory covered, and the conclusions reached, nine months ago in the joint Fordham-Progressive Policy Institute special ed volume, which you can find at http://www.edexcellence.net/doc/special_ed_final.pdf. To get this one, surf to http://www.cep-dc.org/specialeducation/timelyidea2002.htm.
C. Emily Feistritzer and David T. Chester, National Center for Education Information
2002
Since 1983, Emily Feistritzer and the National Center for Education Information (NCEI) have tracked state efforts to create alternative certification programs for people interested in becoming teachers without having to go back to ed school. Since 1990, this tracking effort has included a periodic state-by-state guide to these programs. In all, 45 states now have some type of alternative certification program for teachers. The most striking bit of news in this year's volume is the degree to which they're converging on what an alternative teacher certification program looks like. In the past three years, 20 states have created some 34 new alternative teacher certification programs that share these features: 1) they are designed for candidates who already possess a bachelor's degree; 2) they include a rigorous screening process consisting of tests, interviews and content mastery; 3) they are field-based; 4) they include professional education training before and during teaching; 5) they provide mentors for all new teachers; and 6) they have high performance standards. Compared with the graduates of traditional programs, the "alternative" recruits are more likely to be minority group members or men. They also tend to teach high-demand subjects (like math and science) and to have higher retention rates despite being concentrated in more challenging locales like the inner cities or isolated rural areas. For prospective teachers, this guide includes contacts for and profiles of all alternative programs operating in every state. Policy wonks will be interested in the classification system NCEI developed to categorize the different types of programs states label as "alternative." Copies of the hefty 432-page report can be ordered for $99 plus $10 shipping and handling from NCEI at 4401-A Connecticut Ave, NW, PMB 212, Washington, DC 20008; phone 202-362-3444; fax 202-362-3493; http://www.ncei.com. It's worth overcoming the sticker shock to get your own copy of this valuable resource; people are always asking to borrow ours.
Simeon Slovacek, Antony Kunnan and Hae-Jin Kim, Program Evaluation and Research Collaborative, Charter College of Education, California State University at Los Angeles
March 11, 2002
In case you didn't know, California State University at Los Angeles has a "charter college of education," two of whose faculty members (Simeon Slovacek and Antony Kunnan), together with doctoral student Hae-Jin Kim, recently issued this brief but bullish appraisal of the performance of 42 California charter schools in boosting the achievement levels of disadvantaged children over a two-year period, compared with non-charter schools serving similar kids. The bottom line: charters are producing greater learning gains for these children, though their average scores remain slightly lower than those of the non-charters. Two additional points bear mentioning: (1) The poorer the students, the greater the charter-school edge in achievement gains; and (2) the smaller the charter school, the wider the edge. The statistical analyses are complex and you may want to see for yourself. You can download a PDF copy at http://www.calstatela.edu/academic/ccoe/c_perc/rpt1.pdf.
Jolley Bruce Christman, Consortium for Policy Research in Education
December 2001
In early 1995, the School Board of Philadelphia adopted a systemic reform plan called Children Achieving to improve the city's troubled public schools. "Powerful Ideas, Modest Gains" is one of several reports issued by the Consortium for Policy Research in Education (CPRE) at the University of Pennsylvania that evaluate the successes and failures of the city's reform effort. The core beliefs driving the reform program were: 1) results matter, 2) all students can achieve at high levels, and 3) low expectations of students breed persistent underachievement. Children Achieving sought change through content standards (the knowledge and skills all students were to know); an accountability system based on annual student assessments; and decentralization (smaller schools and classes). Philadelphia's school board and then-Superintendent David Hornbeck, backed by a five-year $50 million Annenberg Challenge grant (matched by $100 million in city funds), aimed to demonstrate through a comprehensive one-size-fits-all reform that every student could achieve proficiency in mathematics, reading, and science by 2008. "Powerful Ideas, Modest Gains" provides a mid-term review of the impact of those reforms on the city's middle schools. As the title suggests, powerful ideas and political forces have driven the reform effort, but the results have thus far fallen short: "Reforms produced modest gains for middle grades students in reading and science and made limited headway in addressing the abysmally low achievement of students in mathematics." As things currently stand, it seems doubtful that a majority of students will achieve proficiency in the three core subjects by 2008. There have been a number of obstacles to effective implementation of Philadelphia's reform plan, but a few deserve special mention. Despite the fact that a large percentage of the teachers, at least at the middle school level, bought into the reform agenda, there was a persistent feeling that discipline problems interfered with the effort to boost student performance. Even with smaller classes and schools, teachers still complained they spent an inordinate amount of time dealing with disruptive students. The reform efforts at the middle-school level were also stymied by a high teacher turnover rate and an almost pathological focus on teaching test-taking skills rather than subject matter. Finally, the apparent advantages of small learning communities were not used to improve student learning, but were largely seen as a way to address the social needs of students. Anyone who supports the No Child Left Behind Act, is working to help make it succeed, or hopes to create small schools should study the Philadelphia experience closely. To access a PDF version of this report, surf to http://www.cpre.org/Publications/children05.pdf. For related reports on Children Achieving, see http://www.cpre.org/Publications/Publications_Research.htm. For earlier Gadfly reviews of Children Achieving reports, see http://www.edexcellence.net/gadfly/issue.cfm?issue=89#1335.
Jie Chen and Thomas Ferguson, University of Massachusetts
February 20, 2002
Speaking of intricate analyses, University of Massachusetts political scientist Thomas Ferguson and statistician Jie Chen recently unveiled this hundred-page look at the performance of Bay State school districts on the state's much-discussed Massachusetts Comprehensive Assessment System (MCAS). It's caused something of a stir because of its assertion that some of the state's wealthier and more prominent school districts haven't lived up to expectations on MCAS, and its claim that "disadvantaged districts are progressing at rates which are not systematically different from those of richer districts." We're skeptical. Setting aside the senior author's association with such publications as The Nation and Mother Jones, this analysis seems to come from the kitchen-sink school of social science. Using various econometric techniques to try to isolate the effects of a bunch of different variables on districts' MCAS scores, it makes such odd assertions as that "athletic budgets have substantial impacts on district test scores" and "districts with competitive Senate races...have higher MCAS scores." We found ourselves wondering why they hadn't looked to see if the superintendent is left-handed or if the dogcatcher election also bears on MCAS scores. Do sunspots affect them? Ambient air quality? The incidence of tattoos on the school nurse? It looks as if the authors assembled whatever they could get their hands on by way of district-level data and played around to see what might show a relationship to test scores. Note, too, that the latest year of test scores they examined was 2000, before the impressive gains that many Massachusetts districts (and students) racked up in 2001. In fact, the score changes between 1998 and 2000 were quite small. In top-scoring Harvard (the town, not the university), the district average went from 245.47 to 246.37. In middle-scoring Plymouth, it went from 233.72 to 234.95. And in bottom-scoring Holyoke it declined fractionally from 217.49 to 217.32. It seems to us that only a headline-seeking social scientist with little else to do would expend this much effort trying to account for those wee changes. If you want to see for yourself, a PDF version of the report can be found at http://www.mccormack.umb.edu/Publications/docs/MCAS022602.pdf.
edited by MaryAnn Byrnes
2002
This compilation volume boasting 18 "debates" on special education issues is worth knowing about for those who find it easiest to get their minds around complex topics by reading opposing views on those topics. Editor MaryAnn Byrnes of UMass/Boston did a nice job of selecting issues (under three broad headings: "special education and society," "inclusion" and "issues about disabilities") and for the most part she did well at picking cogent expositors of rival views on those issues. Also worthwhile is her ten-page introduction sketching how U.S. special-ed policy came to be the way it is. But you'll find no general conclusions or recommendations. This is a pro-con "issues" reader, most likely meant to be assigned in classes preparing future special educators. Almost 400 pages long, it's published by McGraw-Hill/Dushkin. The ISBN is 0072480564. You can also find it on the web at http://www.dushkin.com/text-data/catalog/0072480564.mhtml.
A year ago, a Vanderbilt-based research team submitted to the federal Education Department a study titled "The Study of Special Education Leadership Personnel With Particular Attention to the Professoriate." (You can request a copy by emailing lead author Deborah Deutsch Smith at [email protected].) In 50 pages (plus appendices), it concluded that there are not enough people training for university posts in special education and that this is bad for disabled kids and prospective special educators. Though it listed some fairly predictable solutions, it mainly identified problems. More recently, Vanderbilt published a dozen-page glossy report, drawing on the aforementioned study and others, entitled "The Shortage of Special Education Faculty: Why It is Happening, Why It Matters, and What We Can Do About It." We are always slightly wary of reports that misspell the word "acknowledgments" but, if you are interested, this one briefly reviews the main reasons that (in the authors' view) there aren't enough future special ed faculty in the pipeline (time, ability to relocate, money, career plans) and some strategies to boost this supply. These add up to: make it faster, easier and more appealing to become a special-ed professor. The report itself doesn't appear in cyberspace but you can find a summary at http://hecse.uky.edu/articles/shortage.html.
In this space two weeks ago (see http://www.edexcellence.net/gadfly/issue.cfm?issue=66#983), I reported that Congressman Michael Castle's (R-Delaware) bill to remake the federal education research enterprise had much merit but also posed some problems, especially regarding the future of the National Assessment of Educational Progress (NAEP) and its governing board (NAGB).
In two rounds of mark-ups, it's nice to be able to note, the House Education Committee resolved most of those problems (and made other worthy improvements in the bill), thanks mainly to the direct engagement and savvy of Representatives Castle and Dale Kildee (D-Michigan) and their sleepless staffers. Castle and Kildee understand the importance of the "nation's report card" and the care that must go into any alteration of its constitutional arrangements. So does Representative Johnny Isakson (R-Georgia), who offered the key amendments to the Committee. Hurrah for them!
Their efforts reflect a bipartisan consensus that was signaled at a spring 2000 Congressional hearing where one of the (few) things Republicans and Democrats agreed upon was that the National Assessment must be kept as independent as Congress can manage. That was true then. It's even truer today, considering the many new mandates that the No Child Left Behind Act laid upon NAEP.
But the H.R. 3801 repair squad has been laboring under two constraints: some House colleagues who are mistrustful of NAEP ("the camel's nose of national testing") and some Education Department officials who don't want NAGB truly to be in charge of NAEP because that would mean they're not.
In that unhappy environment, the best that Messrs. Castle and Kildee felt they could do was to strive to preserve the "status quo ante," which means retaining a split jurisdiction wherein the "independent" governing board makes some NAEP decisions and the National Center for Education Statistics (NCES), currently part of OERI, makes other decisions. This has not worked any too smoothly over the years and ought to be fixed. But politics seem to preclude a proper reconstruction. Hence the quest to maintain the status quo, even as H.R. 3801 puts OERI through a wholesale reorganization into the new Academy of Education Sciences.
The problem with trying to hang onto the status quo is that the Academy represents a profound alteration of the status quo for NCES, which becomes far less independent than it is today and subject to the dictates of both the highly-independent Academy director and a new Education Sciences Board with wide-ranging powers of its own. How to preserve the current "balance of power" between NCES and NAGB as powerful new players enter the NCES orbit? It's impossible. It's chimerical. A million unknowns arise.
Thus the Education Committee faced a true puzzle. In marking up H.R. 3801, it assembled many of the pieces, exempting NAEP from certain parts of the new Academy and NAGB from other parts. The present bill might even yield a workable arrangement IF NAGB and the Academy director (and Education Sciences Board) forever see eye to eye. But when they disagree, as must eventually happen, even this much-improved bill is a formula for deadlock and confusion. That's not a sound basis for a constitutional arrangement. It's way too dependent on personalities and naïve about what can go wrong.
Example: Under current law, NAGB develops policy for the preparation and content of NAEP reports and formulates plans for their release. Under H.R. 3801 (as amended), however, the Academy director is charged with devising peer-review procedures for all Academy reports, presumably including NAEP's. What happens when there's a clash between NAGB's view of how to present NAEP reading results, say, and that of unknown "peer reviewers" selected by someone else according to who knows what criteria? Prediction: NAEP reports will become even slower and more complex than they are today. Picture the 2004 fourth-grade reading results finally lumbering into public view in 2007 when that cohort of students is completing seventh grade. And being unintelligible to the general reader after having been mauled by 57 "experts."
Example: Under current law, NAGB designs the NAEP testing and sampling methodologies and develops processes for their ongoing review and revision, a sort of continuous-improvement model. Under H.R. 3801, however, the Academy director is supposed to ensure that everything done by the Academy is "consistent with standards" as defined by him and his new board. What if NAGB's decisions about testing methods and the Director's notion of testing standards are not compatible? Consider, for instance, the interesting suggestion by Richard Rothstein in yesterday's New York Times that NAEP's long-term trend test should be merged with its more contemporary test into a single measure akin to the Consumer Price Index. That would be a very big change. It might or might not be a good one. But whose decision is it? Under H.R. 3801, nobody has clear authority over key policies. It's hard to believe the Congress wants it that way.
I could go on. Many uncertainties present themselves, even after Messrs. Castle, Kildee and Isakson strove to preserve the status quo. The fact is that they've embarked on an impossible task with unknowable, hence problematic consequences. It's akin to brokering a permanent Middle East settlement between two countries, one of which remains unchanged while the other acquires a new government whose leaders are unknown.
Maybe the House floor will bring more amendments. Perhaps a third cheer will then be warranted for this basically sound bill. Otherwise, those who want to assure the integrity of NAEP must look to the Senate to set this important matter right. The Senate is where NAEP's present governance structure was crafted in 1988. Perhaps that chamber will prove better able to preserve it-or even improve it.
"National Test Is Out of Tune With Times," by Richard Rothstein, The New York Times, March 27, 2002
No matter how much pre-service training they have been armed with, new teachers begin their first assignments with a range of urgent, school-specific questions about curriculum, instruction, and classroom management. Yet few schools offer induction programs that give new teachers the kind or level of support they require. This serious mismatch between what new teachers need and what they get is described vividly in an article in Educational Leadership that is based on a study of new teachers in Massachusetts by Harvard's Susan Moore Johnson and her team at the Project on the Next Generation of Teachers. Many new teachers find themselves in schools staffed by veteran teachers with well-established, independent patterns of work who do little to acquaint the neophytes with expert practice. New teachers in these schools are driven to eavesdrop on lunchroom conversations and peek through classroom doors for some clue about what should be going on in their own classroom. Other new teachers end up in schools in urban settings that are staffed primarily by other new teachers who have plenty of energy and commitment but can offer little professional guidance about how to teach effectively. A lucky few novices end up in schools where they receive real support from veteran teachers who have time to observe, offer advice, and help on short notice when things go awry. (Veterans get something out of the relationship too, Johnson notes; new teachers are often able to help older teachers with technology and interpreting data from standards-based reform.) What is clear is that what new teachers most need cannot be supplied by conventional in-service training, with its intermittent after-school sessions on a range of topics, or by periodic visits of the district's curriculum coordinators. Until schools are structured so that new teachers can get the help they need when they need it, the quality of teaching will suffer and attrition rates for new teachers will be high.
It's not only new teachers who need this curriculum-specific, classroom-based support, says James Stigler of UCLA in the same issue of Educational Leadership. Today, most professional development is generic; the people who provide it have created programs aimed at all teachers, regardless of the curriculum they are using, but these programs don't seem to do teachers or students much good. What teachers really need, Stigler says, is: a) to learn how to analyze their own teaching, b) to be exposed to alternatives that have worked for others, and c) to develop the ability to judge when to employ which method. This requires teachers to give up the notion that how they teach in their classroom is their own business, as well as the belief that a teacher is not a professional if what she does is "standard practice" rather than something creative she invented herself. As Albert Shanker used to say, what defines a profession is its standard practice. Stigler criticizes most existing efforts to boost teacher quality for focusing on improving the applicant pool rather than improving the methods that teachers use. Most students are taught by an average teacher, implementing the average method, he says, and if we can find a way to make that average method a little bit better, it would have a big effect.
"Keeping New Teachers in Mind," by Susan Moore Johnson and Susan Kardos, Educational Leadership, March 2002
"Creating a Knowledge Base: A Conversation with James Stigler," by Scott Willis, Educational Leadership, March 2002
Last August in The Gadfly (see http://www.edexcellence.net/gadfly/issue.cfm?issue=93#1209), I reviewed the results of the New York City summer school program for 2001, the second time that the giant school system had attempted to corral more than 300,000 kids to return during the hot months for remediation or enrichment or Regents test preparation.
Of that large number, some 72,000 children in grades 3-8 had been ordered to go to summer school because of their academic deficiencies. Of that group, 8,000 did not show up. The summer remedial program was part of the Board of Education's attempt to end social promotion by directing help to those children who had fallen behind in math and reading. The results, as reported by the Board of Education at summer's end, were discouraging. Most who attended summer school failed their end-of-course exams in reading and math, but were promoted anyway. Two-thirds showed little or no improvement in math, and nearly 60 percent failed to improve in reading.
Average reading scores actually dropped for eighth graders, both in 2000 and again in 2001. Further, nearly three-quarters of the eighth graders scored in the lowest level of performance in reading and math after their summer of remediation.
After reviewing these dismal statistics, I recommended that the Board of Education try to learn something from the summer program about "what works" and what doesn't. It seemed to me that the school system ought to be able to use upcoming studies to figure out which methods were most effective, which curricula were most effective, and what kinds of teachers were most effective, especially with the lowest-performing children. It also seemed to be a good chance to learn about the effectiveness of private vendors.
Well, as it happened, the Board of Education had indeed contracted out an evaluation that aimed to answer some of these questions. That's now been done. But the evaluation, conducted by Metis Associates along with the Institute for Education and Social Policy at New York University, raises more questions than it answers. It begins by telling the reader that 374,411 students from kindergarten through high school "registered" for summer school, and "269,620 (72%) actually attended at least one day." For the balance of the study, the reader is asked to think of students who attended for as little as a single day as participants in the summer program. It would have been truly useful to know whether test results were better for those who attended almost every day, as compared to those who attended for only one or two or three days, and whether the students' amount of learning is related to their "time on task," but no such comparison is made.
The report shows that 16,000 teachers and 900 administrators worked in the program, but since we don't know how many children attended regularly, we know nothing about class size and whether it had any relationship to student achievement. The study should have been designed to shed some light on that important issue.
The report tells us that, of thirty-three community school districts and seven high school superintendencies, only 12 of the 40 units actually had clear goals and objectives for the summer programs they administered.
The report finds that administrators and teachers, on the whole, were very satisfied with the program. But in light of the results released last summer by the Board of Education, with so many children failing to make discernible academic progress, this satisfaction seems unwarranted, to say the least. The best measure of the program's effectiveness must be whether children learned, not whether the providers were satisfied with their efforts.
We also learn that the summer schools were allowed to use the curriculum of their choice, and many different ones were in use. Nothing is said about whether some curricula were more effective than others in helping kids master math and/or reading.
We learn that some of the classes were taught with curricula supplied by private vendors, but an effort to compare the effectiveness of the private vendors was "inconclusive."
We learn nothing about the effectiveness of methods employed in different classes, nothing about whether teachers stressed phonics or whole language, this math program or that one, and whether some pedagogies functioned better than others or made no difference.
One issue that the report did question was whether teacher certification bore any relationship to student achievement; the answer is that it did not. The report does not mention that the state and city have just issued mandates to exclude uncertified teachers from the public school classrooms of New York City beginning in 2003, which will exacerbate current shortages and probably be impossible to implement.
The report recommends that the summer program be continued and also that more evaluations be funded.
With the current budgetary pressures on the public schools, the summer program demands a far more discerning review. To the observant reader, even this friendly evaluation shows that the program has been run with minimal planning, with few goals, with inadequate results for students, with scant evidence of "what works," with no demonstrably effective curricula, but with enormous satisfaction on the part of providers. What's wrong with this picture?
"Summer School Draws More Critics," by Abby Goodnough, The New York Times, March 27, 2002
On Monday, opponents of school choice from Arizona State University released a report attacking the state's 1997 education tax credit law, which grants taxpayers a dollar-for-dollar credit against their state tax obligation for donations they make either to public schools or to "school tuition organizations" that award scholarships for use at private schools. Author Glen Wilson calculates that 81% of donations to the school tuition organizations went to help parents pay tuition for children already attending private schools, leaving only 19% to help students switch from public to private schools. He concludes that the tax credit is draining a lot of money from public coffers without achieving its stated aim of giving low-income students the opportunity to attend private schools. The Goldwater Institute quickly responded with a press release noting that, while many scholarships are granted to children who are already enrolled in private schools, an estimated 80 percent of the scholarships are awarded on the basis of financial need, so they are supporting low-income families who may be struggling to make tuition payments at their child's private school. In addition, Goldwater points out that Wilson's estimation of the fiscal impact of tax credits does not take into account the savings accrued by the state when a pupil leaves a public school for a private school. The next day, the Goldwater Institute released its own report by executive director Darcy Olsen, arguing for an expansion of Arizona's tax credit to include businesses that make donations to school tuition organizations that assist only low-income families. A paper by Lisa Snell of the Reason Foundation weighs whether school choice supporters are better off creating a broad tax credit that benefits both lower- and middle-income families or a narrow tax credit that benefits only poor families.
"The Equity Impact of Arizona's Education Tax Credit Program: A Review of the First Three Years," by Glen Wilson, Education Policy Studies Laboratory at Arizona State University, March 2002
"Education Scholarships: Expanding Opportunities for Students, Saving Taxpayers Money," by Darcy Olsen, Goldwater Institute, March 26, 2002
"The Arizona Tax Credit Paradox," by Lisa Snell, Reason Foundation. For a copy of the paper, e-mail [email protected].