Student Selection, Attrition, and Replacement in KIPP Schools
KIPP schools shine even under rigorous evaluation
Skeptics of charter schools have argued that the impact of charters on student performance can be attributed to their ability to “skim motivated students.” That is to say, in simplified terms: high-flying charters are high-flying, not because of their ability to educate kids more effectively than their traditional public school peers, but because of their ability to attract and retain cream-of-the-crop students.
Mathematica Policy Research, a top-notch policy and program evaluator, has recently looked into this precise question, using KIPP charter schools as its guinea pig. KIPP, with its 125 schools in 20 states and the District of Columbia, is one of the largest charter school networks in the U.S. In Ohio, it operates one middle school in Columbus (sponsored by Fordham).
The researchers ask (1) whether KIPP schools have higher rates of attrition among their low performers, compared with their district school peers; and (2) whether KIPP schools have higher rates of “late-arriving” high performers, compared with their district counterparts. If the researchers found higher rates of attrition or late arrival, one could infer that the positive impact that KIPP charters have on student performance (documented in several prior studies) is a function of the selection (and de-selection) of students rather than of KIPP’s educational approach.
When the researchers compared attrition rates, they found no significant difference. KIPP schools lost students of the same quality, at the same rate, as their district counterparts; in fact, both tended to lose mostly low achievers. So with respect to attrition, check: one cannot attribute KIPP’s positive impact to its ability to “lose” or “de-select” lousy students. When the researchers compared late arrivals, however, KIPP schools did attract a higher proportion of high-achieving students than did district schools. Here the evidence is less clear: some of KIPP’s positive impact might be attributable to the late arrival of smart youngsters.
The researchers conclude that, despite KIPP’s higher rate of late-arriving high performers, one cannot attribute KIPP’s “large cumulative impacts on student achievement” to this factor alone. (Any such effect would be mediated through “peer effects.” Think about it like this: studying alongside more “smart” people should make one smarter.) The bottom line, then: in light of rigorous evaluation, KIPP continues to shine. The research here indicates to even the severest skeptic that KIPP’s educational approach—no excuses, more time, etc.—has a measurable, positive impact on student learning.
SOURCE: Ira Nichols-Barrer, Brian P. Gill, Philip Gleason, and Christina Clark Tuttle, Student Selection, Attrition, and Replacement in KIPP Middle Schools (Washington, D.C.: Mathematica Policy Research, September 2012).
This report analyzes the shift from compliance to performance management in eight state education agencies (SEAs). The researchers purposely selected these SEAs because each is taking greater responsibility for the educational outcomes of students in chronically low-performing schools. The SEAs under analysis were those of Florida, Indiana, Louisiana, Michigan, Minnesota, New Jersey, Rhode Island, and Tennessee.
These states shared some common elements, including the use of data, the restructuring of their SEAs, clear and transparent communication, an established sense of urgency, the leveraging of federal funding threats, and reliance on strong leadership. The states differ in their approaches to implementing the change and, most importantly, in how they view the role of local education agencies (LEAs).
Other characteristics that shape a state’s strategy for making dramatic changes include how the SEA leader and state board are selected, whether the state receives federal money from school improvement or innovation funds, and whether it has the legislative authority to take over low-performing schools.
The SEAs studied have shifted their emphasis from monitoring federal regulations and compliance toward achieving goals and strengthening accountability in troubled LEAs. The researchers group the SEAs’ reform strategies into three descriptive categories.
Which of these SEA transformations will succeed remains to be seen; success will depend on each agency’s ability to retain strong leaders and to recruit the human capital needed to make significant improvements in the most troubled schools.
SOURCE: Patrick Murphy and Lydia Rainey, Modernizing the State Education Agency: Different Paths Toward Performance Management (Seattle: Center for Reinventing Public Education, September 2012).
While most people agree that teachers need to be evaluated, not all have considered the various uses of teacher evaluations. Eric S. Taylor of Stanford University and John H. Tyler of Brown University do just that in their research study, which directly assesses how mid-career teacher evaluations can support professional development.
The authors explore whether teacher evaluations can be used as more than a sorting mechanism that separates “good” teachers from “bad” ones. They do this by measuring fourth- through eighth-grade teachers’ impact on students’ math scores. An individual teacher’s students are examined during the year of the evaluation; those same students’ scores are also examined before and after their time in that teacher’s classroom, to give a fuller picture of the instructor’s impact.
Results indicate that schools can use well-designed evaluations to improve teacher performance. Teachers who went through evaluations developed skills and changed behaviors that still benefitted students a year after they left that teacher’s classroom. To see these gains, however, the evaluation must be a constructive “practice-based assessment that relies on multiple, highly structured classroom observations.”
Cincinnati Public Schools uses this kind of constructive evaluation in its Teacher Evaluation System, which is expected to cost between $1.8 and $2.1 million per year. While financially costly, teacher evaluation systems may be able to help our lower-performing professionals improve in ways that last.
SOURCE: Eric S. Taylor and John H. Tyler, “Can Teacher Evaluation Improve Teaching?” Education Next 12, no. 4 (2012).
As local school districts prepare to implement the state’s new third-grade reading guarantee, many are bemoaning the increased costs associated with providing more reading assessments and interventions to struggling K-3 readers (as required by law) and retaining more kids. The Ohio School Boards Association called the new law, and specifically its reporting requirements, “an unfunded mandate.”
The legislature did dedicate $13 million in competitive funding to support the new mandate, and last week the State Board of Education mulled recommending $105 million to support the law in the Ohio Department of Education’s FY2014-15 budget request. But would more money make a difference? Let’s take a look at the relationship between funding and reading achievement in the past.
Ohio had a reading guarantee on the books more than a decade ago (it was watered down before taking effect). At that time, with a governor (Taft) who had taken on improving early literacy skills as a primary policy objective and with the state coffers flush, Ohio poured millions into literacy improvement programs and professional development for teachers (via programs like OhioReads, the State Institutes for Reading Instruction, adolescent literacy grants, and summer intervention programs – to say nothing of federally funded efforts like Reading First). Chart 1 shows state funding for literacy improvement initiatives and reading professional development, from FY2000-01 (Governor Taft’s first budget) to FY2012.
Chart 1: Dedicated state spending on literacy improvement initiatives and professional development (FY2000 to FY2012)
Source: Legislative Services Commission, Budget in Detail and Budget Final Fiscal Analysis, FY2000-01 to FY2012-13, accessed 9/19/12; includes line items GRF 200-433, 445, 450, 513, 551, and 566 (minus unrelated set-asides).
At the peak in 2003, Ohio was spending more than $90 million to support young readers in the schools. Governor Taft’s pet initiative, OhioReads, was sending more than $30 million directly to schools to support early reading efforts and had more than 50,000 community members statewide volunteering as reading tutors in some 1,600 schools. And student achievement was on the rise: the statewide passage rate on the fourth-grade reading test had increased 10 percentage points in four years, to 66.3 percent. Chart 2 shows the statewide proficiency rate on the fourth-grade reading test from the 1999-2000 school year to 2010-11.
Chart 2. Statewide fourth-grade reading proficiency rate, 1999-2000 to 2010-11
Source: Ohio Department of Education interactive Local Report Card, accessed 9/19/12
The gains were impressive in the early years after the state focused on helping students learn to read: after five years, the state’s pass rate had increased 20 percentage points. Then progress stalled, and the pass rate inched up just seven more points over the next six years. Funding dropped big-time after 2003 (at the hands of both Republican and Democratic governors) and eventually zeroed out in 2011. Did that drop stall students’ reading achievement gains?
Perhaps. However, much of what the state funded in the early 2000s didn’t “disappear” with the dollars. Teachers could still use what they learned during professional development. Books, computer programs, and other student-reading supports were still in the schools. And community volunteer tutoring programs continued without state funding in many schools until recent years.
I’m not sure I believe that the state should invest much more money toward the guarantee. Teaching kids to read is one of the most fundamental jobs of our public schools, a primary reason why they exist. And though this law is new, the state made the importance of teaching reading clear more than twelve years ago.
Yet the data here seem to indicate that an infusion of well-targeted money early on could spur a boost in achievement. If state leaders do opt to fund the reading guarantee, they should do so in the short term: give districts money to ramp up their K-3 reading improvement efforts, but make it known that the money is going away in X number of years.
And, as an editorial in the Columbus Dispatch points out, that doesn’t necessarily mean infusing “new” money into the system:
… a major infusion of new cash might not be needed. Florida, which has seen academic performance rise steadily since introducing its third-grade guarantee in 2003, paid for much of the new intervention services with funds that were diverted from other education programs.
Ohio lawmakers should consider such re-prioritizing when a new budget is hashed out next spring. They also should learn from this first year of the program, in which schools will get a better idea of what they need to carry out the guarantee. The legislature should tweak the law as schools learn more about implementing it.
Schools have no lack of high-priority needs, but few are as important as ensuring that children can read proficiently. Research shows that students who begin fourth grade with sub-par reading skills are more likely to fail in later grades and eventually drop out. That’s a lot more destructive to a student’s future than repeating the third grade.
A college political science professor of mine once used this analogy to explain politicians: “There are two types of politicians: the ‘show ponies’ and the ‘workhorses.’” The show ponies, he would say, are politicians who love—and seek—the limelight. They’re the Fox News politicians. The workhorses, in contrast, are the politicians who memorize an assembly’s rules and grind away at legislative writing.
The Windy City is the moment’s education show pony. The drama of Chicago’s teachers’ strike, chock-full of a furious teachers’ union, the tough-talking mayor Rahm Emanuel, and the veil of presidential politics, has shone the spotlight on Chicago. For four days during the week of September 11 to 17, the strike made the front page of The New York Times. As theatrical show—yes, with some substance to boot—one cannot get much better than Chicago, September 2012. (Since the original publication of this article, the strike has ended.)
While the show’s been going on in Chicago, the workhorses of Ohio continue to plow ahead. In Dayton, education leaders are working toward higher-quality charter schools, implementing blended learning models in their classrooms, and worrying about a fair and efficient school funding plan. In a Sunday news article, the Dayton Daily News highlighted the DECA charter schools, which include a newly opened elementary school (sponsored by Fordham) and a high school. DECA serves mostly economically disadvantaged students from inner-city Dayton; yet, despite this challenge, the school received the state’s highest rating, “Excellent with Distinction” (A+), on its 2010-11 report card—the last year ratings were given to Ohio’s schools.
Also in Sunday’s paper was a recap of a recent roundtable moderated by the Dayton Daily News, during which local educators and education stakeholders discussed the hottest schooling topics. The conversation, which included Fordham’s Terry Ryan, revolved around issues in blended learning, school choice, and school funding. Springfield Local School District superintendent David Estrop, for example, spoke about the opening of a district-sponsored virtual school. The school offers instructional choices that include traditional teacher-led instruction, blended learning, and online courses. For districts like Springfield to survive, Estrop asserts, “it’s innovate or perish.”
Meanwhile, in Columbus, the state auditor continues his investigation into schools’ tampering with student attendance records. The auditor’s office has found that as many as 50,000 student test scores—equivalent to the size of Columbus City Schools—were excluded from schools’ performance report cards. The office is presently investigating which scores were legitimately excluded and which were not. In a September 16 article, the Columbus Dispatch further revealed the extent of the alleged fraud, indicating possible principal involvement in the fraudulent removal of student records in a few Columbus City Schools buildings. The auditor’s tedious and costly investigation of serious attendance-record fraud should spur major reform in how schools enter, control, and report accountability data.
Though the education world’s eyes have been fixated on Chicago, let’s not forget that significant and important education policy changes are happening in our own backyard. Ohio’s workhorses—whether they’re charter school leaders like Judy Hennessey of DECA, public school superintendents like David Estrop, or the host of investigators from the auditor’s office—are doing the yeoman’s work of making better quality education possible for more of the Buckeye State’s students.
Exam Schools, by Fordham president Chester E. Finn, Jr. and Jessica A. Hockett, explores the realm of America’s most selective and highest performing public high schools. The authors identify 165 “exam schools,” so-called because their admissions process is largely based on students’ exam scores.
Four of these schools are located in Ohio: John Hay Early College High School, John Hay School of Architecture, John Hay School of Science and Medicine, and Walnut Hills High School. The John Hay schools (combined enrollment: 852) are all part of the Cleveland Metropolitan School District, and Walnut Hills High School (enrollment: 2,149) is part of Cincinnati Public Schools. The Ohio Department of Education rated each of these schools “excellent” (A) for the 2010-11 school year.
For a synopsis of the book’s findings, check out The New York Times editorial “Young, Gifted, Neglected.” And don’t forget to take a look at the video below or get your copy of the book to learn more about this exciting slice of American (and Buckeye State) education!