Weighted Student Formula Yearbook
Lisa Snell
Reason Foundation
2009
This hefty publication from the Reason Foundation makes an important contribution to our understanding of weighted student formulas (or WSF, a.k.a. weighted student funding or student-based budgeting). Under WSF, generally speaking, education dollars follow students via formulae that take into account such factors as socioeconomic status, learning disabilities, and English language skills, and spending decisions are decentralized down to the school level (usually to the principal). Fordham's 2006 Fund the Child manifesto describes these principles in some depth, but the Yearbook is the first comprehensive examination of the 14 school districts currently using versions of WSF. Most useful is a checklist of ten "School Empowerment Benchmarks" describing how WSF is being used in each locale, including fund allocation, principal autonomy, and school choice. (New York City and Hartford, CT have all ten elements; none of the 14 has fewer than six.) If there's one lesson to be learned, it's that WSF varies widely from place to place. For example, only NYC, Hartford, and Oakland have managed the difficult feat of using actual (instead of average) teacher salaries in their budgets; this is an important way of addressing school-level resource inequities. (And NYC is using a complicated method to phase this in, since existing teachers are grandfathered in.) Other interesting differences: Baltimore and NYC weight funding based in part on individual students' achievement at the point when they enter a school; Houston includes a "mobility" factor to support schools with transient populations; and Oakland eschews weights by student type in favor of weights by grade level, relying on categorical funds like Title I to provide additional support for poor students (of which Oakland has many). Perhaps a future version of the Yearbook benchmarks could identify the types of weights and note the extent of each program (some cities are full-fledged, others are just piloting WSF).
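To make the mechanics concrete, here is a minimal sketch of how a weighted student formula allocates dollars. The base amount and the weights are purely hypothetical, invented for illustration; actual districts set their own base amounts, weight categories, and weight values.

```python
# Hypothetical sketch of a weighted student formula (WSF).
# The base amount and weights below are illustrative only --
# they are not taken from any district described in the Yearbook.

BASE = 5000.0  # base dollars that follow every pupil

# Extra weights layered on top of the base allocation,
# keyed by student characteristic (categories are assumptions)
WEIGHTS = {
    "low_income": 0.35,   # socioeconomic status
    "ell": 0.25,          # English language learner
    "special_ed": 0.90,   # learning disability
}

def allocation(characteristics):
    """Return the dollars one student carries to his or her school."""
    weight_sum = sum(WEIGHTS[c] for c in characteristics)
    return BASE * (1 + weight_sum)

# A low-income English language learner draws 1 + 0.35 + 0.25 = 1.60
# times the base amount:
print(allocation(["low_income", "ell"]))  # 8000.0
```

Under this scheme, a school's budget is simply the sum of its students' allocations, which is what ties spending decisions to enrollment at the school level.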
But the current version of the benchmarks offers useful lessons on how WSF can be applied in other districts and on the state level (something we've encouraged in Ohio) and provides a detailed explanation of the history and status of this important reform in all 14 districts. It's available online, here.
Note: This Short Review has been changed to reflect a small correction since publication.
Jessica S. Howell, Michal Kurlaender, and Eric Grodsky
California State University, Sacramento
April 2009
What if you could find out that you aren't academically ready for college while there's still time left in high school to do something about it? That was the goal of the California Early Assessment Program (EAP), administered (voluntarily) to that state's high school students, with an assessment component offered at the end of the junior year. Did it help? Does providing early-warning information really get students better prepared for college? This study examines data from freshmen at California State University's Sacramento campus (CSUS) who were juniors in high school between 2001 and 2005. Since EAP did not become available until the 2003-04 school year, the researchers could examine not only EAP's effect on the need for remediation but also its effect on the likelihood of enrolling in college. On the first count, those who had participated in EAP were 6.2 percentage points less likely to need remediation in English and 4.3 points less likely to need it in math when they attended a typical CSU campus (this in a university system where 60 percent of freshmen require remediation!). Particularly troubling is that all of these remediated CSU freshmen completed the college-preparation high school curriculum (i.e., they had presumably taken all the classes they needed to be college-ready) and had an average 3.1 GPA in the remediated subject. On the second point, participation in EAP did not make students more or less likely to apply to CSUS. The authors advise that more research is needed to determine whether EAP changes student behavior senior year by encouraging students who are marginally unprepared ("conditionally exempt" from remediation) to step up their academic game. You can read the report here.
Anna Nicotera
National Alliance for Public Charter Schools
April 2009
This fifth iteration of NAPCS's annual review of charter-achievement literature doubles the number of studies evaluated in the previous edition (to 140 from 70), taking into account a much wider scope of charter school studies. The Alliance adopted improved search methods this time around that revealed a host of previously undiscovered research sources from which to choose. Of particular note, several high-quality charter studies (like this one and this one) were released in the last year--and found, for the most part, that students attending charter schools were achieving at significantly higher levels, and were more likely to attend college, than students attending traditional public schools. Unfortunately, charter school studies in general lack methodological rigor, most notably the use of longitudinal data. Only 33 of the 140 studies met the Alliance's criteria for high quality ("panel studies")--and those were conducted in just 15 states, though 25 other states have charter schools, too. Fifty-three of the 140 examined student achievement over time but did not use student-level data ("cohort change studies"); 70 used snapshot data, i.e., data from only one point in time ("snapshot studies"). Charter research, notes the report, is simply not keeping up with the sector's growth. And with all the innovative and informative practices happening in these educational incubators, it is more important than ever to have more complete information on their performance. Read the entire report here.
Remember how many analysts now say that improving teacher quality is loads more important than reducing class size? Well, famed Columbia sociologist Herbert J. Gans must not have gotten that memo. This week, he urges President Obama to think long term with his stimulus dollars--specifically, you guessed it, to shrink class sizes. Why? According to Gans, it would improve teacher performance, create more teaching (and construction) jobs, encourage "more equality," advance "democracy," and remove the "political glamour" of charter schools and vouchers. What happened to student achievement? Seems not to matter much to Gans: "Whatever reduced class sizes would do for students, the teaching profession might reap the greatest benefits." Let's put aside for a second the largely agreed-upon fact that reduced class sizes are super expensive and will do little for students, save perhaps for the youngest tots. What's more troubling is Gans's apparent belief that federal policies should serve the adults working in the public school system and not the children served by it. Pooh on him.
"President Obama: Time for a Federal Small-Class Program," by Herbert J. Gans, Education Week, April 27, 2009 (subscription required)
Within the education establishment, it's taken as an article of faith that schools should face budget cuts only after all other options have been exhausted. How about public safety? That's the debate playing out in Prince George's County, Maryland, a big Washington suburb now facing a massive budget shortfall. County leaders would like to spend less money on schools this year than is required by state law. "In a crisis like this, all of us must share the burden," said the County Executive. "This is absolutely critical for the totality of the well-being of our county." Ironically, he had to plead his case before the State Board of Education, which has jurisdiction over the matter. Not surprisingly, the district superintendent disagrees, fretting about an "adverse impact on the school system's ability to progress." It's true that PG County's schools have made big gains in recent years, and it would be a shame to see the momentum stall. But then again, unfought fires and unpoliced neighborhoods aren't so great, either.
"County Pits Schools Against Safety," by Nelson Hernandez, Washington Post, April 28, 2009
The Advanced Placement (AP) Program is enjoying a growth spurt in the United States. Over the past five years, the number of high-school students taking at least one AP exam increased more than 50 percent. There's probably no education program in America that's been expanding faster.
This is indisputably a good thing, right? After all, even our notoriously tough reviewers of state standards and curricula found AP generally worthy of gold star status. Furthermore, studies have shown that even when students score poorly on the exam (earning a 2) and don't receive college credit, they still achieve higher average GPAs in college than their non-AP peers (when matched on SAT scores and family income).
But isn't it possible that the opening up and rapid democratization of AP might jeopardize its quality, perhaps adversely affecting the education of the top students who are most capable of tackling rigorous academic work? Are their AP courses being subtly "dumbed down" as more and possibly less-prepared students flock into them?
We set out to investigate this question, with the help of the FDR Group and the Templeton Foundation, by asking AP teachers themselves what they see happening to the program. The result is our latest study, Growing Pains in the Advanced Placement Program: Do Tough Trade-offs Lie Ahead?
In a nutshell, we find their views about AP growth to be highly conflicted, mostly positive toward the program's expansion but tinged with concern that the quality of the AP student body is diminishing. "A little more gatekeeping, please," is one message we heard, albeit faintly.
AP teachers are mostly satisfied with the overall quality of the program's curriculum and courses. Most say these bedrocks have stayed fairly consistent, even during a time of rapid expansion. Granted, the survey's respondents (and focus-group participants) have "skin in the game," since they are themselves guardians of this respected, even iconic, program. Yet the findings also reveal a schism in how AP teachers view the program's democratization.
We asked them to choose whether it's better to open up the program to all students to widen opportunity or limit it to those high school pupils most capable of meeting its demands. The majority (52 percent) prefer to allow into AP classes only those students deemed able to handle the material. But a large minority (38 percent) would admit more students who want to take the classes, even if they do poorly. (The rest didn't have a clear answer to that one.) It's not an overwhelming margin but does indicate that more teachers are concerned about an open-doors policy than are eager to embrace it.
Some influential folks, however, would forge ahead with AP democratization regardless of teacher concerns. Veteran Washington Post education reporter Jay Mathews is one of them. Years ago, he devised the annual "Challenge Index," a ranking of the country's best high schools published in Newsweek. He calculates it mostly by dividing the number of AP tests taken by the number of graduating seniors.
This might seem insignificant (it's only one metric, after all) but roughly 40 percent of the teachers who are familiar with this ranking say it's had some impact on their school's approach to AP. Mathews insists that AP courses should be made available to all students on the grounds that it is a good program and its rising tide will lift all boats. We're not so sure. Boats that aren't properly moored can capsize or sink when the tide rises.
One thing is for sure, though: The College Board agrees with Mathews. For years, it has beaten the equitable-access drum, routinely tracking what it calls an "equity and excellence gap." "True equity," the Board explains, "is not achieved until the demographics of AP participation and performance reflect the demographics of the nation." That's surely an admirable goal. But what happens when schools don't prepare students to handle the AP challenge? To its credit, the Board maintains that "All willing and academically prepared students deserve the opportunity to succeed in rigorous, college level experiences." But therein lies the rub: Are all willing students also academically prepared?
On the one hand, the percentage of the 2008 high school graduating class earning at least one score of 3 on an AP exam rose to 15 percent, up from 12 percent in 2003. But on the other, the percentage of all AP exams receiving grades of 3 or higher declined from 62 percent to 58 percent, and the mean score slipped from 2.96 to 2.85.
That's neither a ringing endorsement of, nor a fatal flaw in, the more-open-doors policy. Still, it'll be worth watching to see if mean scores continue to inch down over the next few years in the aftermath of widening AP access.
Unfortunately, national (and limited state-level) AP data cannot answer the myriad impact and efficacy questions that deserve attention. We know dreadfully little about the impact of the AP Program on important student outcomes, much less that of a more-open-doors policy on the program and its student outcomes. Mostly that's because the College Board has been distressingly tight-fisted with its data. It will not grant requests for school-level and student-level data; one must get permission from individual states and schools. And that's nearly impossible as a research protocol, particularly when one is interested in examining a critical mass of participating schools. (We understand well the need to protect a school's--and especially a pupil's--identity, but there's no reason both can't be assigned unique identifiers to shield privacy.) Surely, the larger research and policy community would benefit from its own "open doors" policy when it comes to AP data--and the country would benefit from more nuanced slicing and analysis of those data by non-College Board researchers.
America has enjoyed much success in ensuring that the AP program is available to more students, including the disadvantaged, but we'd be wise now to make sure that further growth is judicious, not foolhardy. If tough choices have to be made, who will (or should) benefit more in the long run--pupils deemed best able to handle the rigors of AP or those less able but willing to take the plunge? Will the warning signs identified by teachers (e.g., students in over their heads) lead to eventual watering down or beefing up of the program? These are the questions that will be essential to answer as we move forward.
What's the best way to improve a negative perception? Change the reality feeding it. That's the constructive tack being taken by the new leaders of the Arizona Charter Schools Association (ACSA) as they crack down on their state's surfeit of low-performing charters. Arizona has long been known as the "Wild West" of the charter movement, what with its laissez-faire "let anyone try it" approach to launching these schools. Some have worked brilliantly, others not. Now, thanks to the leadership of ACSA, a new data model will track student scores on the state test (AIMS) to determine growth, a vast improvement over the current snapshot evaluations of performance. This will arm the state charter board, which to date has only shut down 18 schools in the movement's 15-year history--peanuts compared to the 475 still open. But now lots of schools are coming up for renewal, providing a rare opportunity to weed out the worst performers. Rebecca Gau, the association's chief researcher (and author of several Fordham reports), explains that "the market wasn't going to take care of this." Indeed. It's refreshing to see a charter association that will. (Especially since we know some other states where this should be happening today but isn't.)
"New tests sought for Arizona's charter schools," by Pat Kossan, The Arizona Republic, April 26, 2009