Eva et al. flunk the fairness test
Not meeting high standards ≠ “failing.” Michael J. Petrilli
In the pre-Common Core era, we had a big problem. Most state tests measured minimal competency in reading and math. But we failed to communicate that to parents, so they reasonably thought a passing grade meant their child was pretty much where they needed to be. Little did they know that their kid could earn a mark of “proficiency” and be reading or doing math at the twentieth or thirtieth percentile nationally. Frankly, we lied to the parents of too many children who were well below average and not at all on a trajectory for success in college or a well-paying career.
Playing games with proficiency cut scores provided much of the impetus behind Common Core. States raised standards and started building tests pitched at a much higher level. Most states are giving those tests for the first time right now, though New York and Kentucky made the transition two years ago. As of 2013, New York’s tests were the toughest in the country, according to a new analysis by Paul Peterson and Matthew Ackerman in Education Next, matching—if not exceeding—the performance standards of the National Assessment of Educational Progress.
That may solve the “proficiency illusion” issue. But now we have a new problem. Some education reformers and media outlets are already using the results of the new, tougher tests to brand schools as “failing” if most of their students don’t meet the higher standards. Note, for instance, the Daily News’s special report, “Fight for their Future,” which leads with the provocative headline “New York City is rife with underperforming schools, including nearly two-thirds of students missing state standards.” This line of attack closely resembles the talking points of Eva Moskowitz and Jeremiah Kittredge of Families for Excellent Schools, who both promote the notion that in New York, “800,000 kids can’t read or do math at grade level” and “143,000 kids are trapped in persistently failing schools.”
These statements are out of bounds, and reformers should say so. They validate the concerns some educators voiced all along: that we would use the results of the tougher tests to unfairly label more schools as failures.
***
Let me be clear: I don’t mind calling schools out as “persistently failing.” Such schools exist, and they should be subject to aggressive interventions, including closure. And I’d be thrilled if they were replaced by high-performing charters like Eva’s Success Academies. But as I’ve argued ad nauseam (and the Shanker Institute’s Matthew Di Carlo has patiently and persuasively explained for years), evaluating schools based on proficiency rates alone is bad math. (Moskowitz and Kittredge define a “persistently failing school” as one in which 10 percent or fewer of the students are proficient in reading and math—or, in the case of high schools, where the same percentage or lower is testing at college-ready levels.)
That’s because passing rates have as much to do with the performance level of students when they enter the school as the amount of learning that happens once they are there. Particularly in a high-standards scenario, it’s quite possible for schools to help students make lots of growth and still not attain “proficiency.” Looking only at annual passing rates, and not where students start, is terrible practice. This is especially true for middle schools and high schools, where students can enter four years or more behind. There’s a non-trivial risk of branding as failures some schools that, despite low scores, are actually doing right by kids.
All Families for Excellent Schools needs to do is look for low-proficiency, high-growth schools. There probably aren’t that many of them. But subtracting these from the “failing schools” equation would go a long way toward demonstrating to educators that we aren’t playing a game of gotcha.
***
And what about the Daily News’s claim that “nearly two-thirds of students” are “missing state standards”? That’s true, but it’s also to be expected. New York set its cut score to align with its definition of “college and career readiness.” The result of this process was a cut score that defined proficiency at about the seventieth percentile. That might sound high, but it makes sense because we know from NAEP, ACT, SAT, and college remediation rates that only about 30 percent of high school graduates are truly college-ready—defined as being able to arrive on campus and succeed in credit-bearing courses from day one. (We have less information about how many are career-ready, though if we’re talking about careers that require any kind of technical skills or advanced training, it’s probably in the same range.) Ergo, most kids below the seventieth percentile in math or reading are probably not on track for college or a sustainable, well-paying career.
The hope is that, over time, more students will reach the standards as schools raise expectations and improve teaching and learning. (In other words, we’ll shift the bell curve to the right.) And as a result, more students will be college- and career-ready upon graduation from high school. But that’s not going to happen overnight.
***
The move to higher standards means that we need to recalibrate our rhetoric and, more importantly, our approach to school accountability. In the low-standards days, it was perfectly legitimate to call out schools that couldn’t get all or most of their students to minimal levels of literacy and numeracy. It simply doesn’t work to similarly defame schools that don’t get all of their students “on track for college and career.” It’s a much higher bar and a much longer road.
It’s utopian to think that most American children will master the Common Core standards immediately. It’s defeatist to think that schools can’t do anything to help their students make progress toward that lofty objective. And it’s disingenuous to take either of these extreme positions in this debate.
Editor's note: This article originally appeared in a slightly different form at Tim Shanahan's blog, Shanahan on Literacy.
Ladies and gentlemen, we're quickly sinking into the quicksand of yet another presidential campaign. I'm writing to help with the Common Core State Standards (CCSS) issue. I don't want any of you tripped up by a feeble or foolish argument, and there are lots of ways of doing that. I'm sure you all know not to rely on your thirteen-year-old kids for policy advice—and not to sigh audibly and roll your eyes, since it will look like you sent your thirteen-year-old to debate in your place. If you can't stare down a callow opponent successfully, how will you ever convince voters that you can handle Putin or ISIS?
I won't be so bold as to suggest what your position should be on Common Core, but I do have advice as to which arguments to avoid.
1. Previous educational standards were better.
Don't make this claim. It can only embarrass you (it's as bad as not being able to spell "potato"). Past standards were so low, they were the educational equivalent of everyone getting a tee-ball trophy. Many U.S. students met those standards and still needed basic reading, writing, and math instruction in the workplace or university—expensive places to obtain an elementary or secondary education. Anyone who argues against the CCSS should be able to explain why they want lower educational standards or else embrace a viable alternative. (Note to campaign managers: Parents who are paying for remedial college classes or employers who are struggling to hire high school graduates with basic skills may become particularly testy over this argument).
2. Teachers didn’t write them.
Yeah, and I’ve long been opposed to the Declaration of Independence because it was written by a slaveholder and the Gettysburg Address because its author was in the pocket of big business before assuming the presidency. This argument elevates the ad hominem over the ad verbum. All that should matter is whether the standards are sound; if they are, a House committee could have written them and they’d be a good idea. And if they are not sound, how many years of teaching experience would the authors require for you to campaign on them? Many teachers worked on these standards, but who cares? The standards could still be useful even if that weren’t the case.
3. They promote the theories of evolution and global warming.
Yikes. This is an interesting argument because everyone hates being tricked into supporting what they morally oppose. Unfortunately, it doesn't hold any water: The Common Core only deals with reading, writing, and math—and not with science, history, or any other school content or social issue. You may get away with this one, but there is always the risk that someone in the audience has actually read the standards.
4. The Common Core isn’t research-based.
That sounds like a good argument too—pin the standards on the science deniers. But what if someone wonders what a research-based goal would look like? I know I want my marriage to be happy, my kids to be productive, and my country to be secure. I don’t know why I’d need a study to tell me that I wanted those things. In medicine, they use research to figure out the best treatments—not whether we want everyone to be healthy. Standards aren't teaching methods; they aren’t approaches to instruction. When the critics say that some states should have tried these out first to find out if they're any good, it's like saying that some states should aim for 4 percent unemployment and others for 8 percent—so that we can know whether we want people to find jobs.
5. They require too much testing.
Common Core requires no more (or less) testing than any other educational standards. Since the early 1990s, federal law has required states to adopt their own educational goals and evaluate student progress against them. However, there’s nothing special about Common Core in that regard. If CCSS disappeared, states would still have standards, and they’d still have to monitor student progress—just as they have for the past twenty years. If you do choose to make this argument despite the facts, be careful in Alaska, Indiana, Oklahoma, South Carolina, Texas, and Virginia. None of them have Common Core, but they all have educational standards and are testing their students against those standards.
6. They are the reason for all of the test prep.
This is a great argument, and yet I doubt whether many of you have the thespian skills to pull it off. Test prep, though unsavory, has nothing to do with Common Core. Educators have long devoted unconscionable amounts of time and resources to test prep, with barely a peep from any of you. Now, getting all worked up about kids being engaged in test prep instead of education will require all the faux sincerity of Captain Renault (“Gambling in Casablanca? I’m shocked!”). What would happen to test prep if there were no Common Core? Look to Texas or Virginia for your answer, rather than to the airy pronouncements of your supposedly shocked and offended advisors.
7. Publishers are making money from them.
Publishers do make money from these standards. And if history is a guide, when we move on to the next big thing in education, they’ll make money off that, too. Government policies do help companies make money. But if that's an issue, then we ought to shut down the Defense Department, Medicare, Social Security, the oil depletion allowance, and pretty much everything else that government does—since all those nasty programs encourage the buying of goods and services from American companies. (Note to Jeb Bush: Perhaps your opponents' arguments against Common Core are really just a ruse to get schools to change their curricula more quickly and make even more money for the publishers.)
8. The U.S. Constitution bans national curricula.
This one is a particularly tempting argument, especially if you are a lawyer. The Constitution does relegate authority for education to the states, after all. The problem is that the federal government has always incented states in the area of education. Even a conservative Supreme Court has recently indicated that it will not even hear cases aimed at determining whether states must comply with federal law when they accept federal funding; they see it as settled law. Going before this Supreme Court to argue that Hamilton, Madison, and Jay knew nothing about the Constitution would likely be a tough slog (Justices Roberts and Alito can be sticklers about that kind of thing). The federal government has the right to require funded states to have standards—whatever standards they may choose to adopt—and there is nothing in Common Core that curtails that right in any way. You'll end up in the weeds. Avoid this one.
9. Common Core violates states’ rights.
This would be kind of a funny argument coming from people who are running not for governor, but for president. "If elected, I’ll not allow states to adopt Common Core." That makes it sound like under your presidency, educational goals would be under your authority. That won't be palatable even from such staunch conservatives as a President Cruz or a President Paul. The states, being sovereign entities, have the authority to coordinate with each other as much as they choose. This is true in transportation, criminal justice, economics, natural resources, etc. This argument snatches that authority from the states, and doing so in the name of states’ rights would be too tricky a game by half. Where is George Orwell when we need him?
10. These are President Obama’s standards.
Let's face it: It's always a good idea to run against an incumbent whose popularity is on the decline. And getting voters to believe that these standards represent “Obamacore” should be easy. When they were being written, Secretary of Education Arne Duncan promised funding to develop new tests for the new standards (a “shovel-ready project,” in the parlance of the times), and when running for president, Senator Obama campaigned on the idea that we needed higher standards and a lot more testing. Making voters believe that the Common Core belongs to the administration should be easy; if you can create enough of a haze of suspicion, voters might never figure out that these standards were written with no federal funding and no federal involvement. Of course, this will be an easier argument for some than for others. (Note to Bobby Jindal: You seem sincere in making this argument, but you'll probably need to explain why President Obama was able to operate you like a sock puppet on this issue for three years without you ever being aware of where his hand was. I would avoid using the term “brainwashing”—see George Romney, 1968. Perhaps you could get away with claiming that President Obama just gave yours a light rinse.)
Ladies and gentlemen, I wish you all luck and hope this advice is useful to each of you.
Tim Shanahan is a distinguished professor emeritus of urban education at the University of Illinois at Chicago.
“Failing” schools, data privacy, teacher evaluation in Virginia, and a flawed look at school funding disparities. Featuring a guest appearance by the Data Quality Campaign's Paige Kowalski.
Amber's Research Minute
SOURCE: Robert Hanna, Max Marchitello, and Catherine Brown, “Comparable but Unequal: School Funding Disparities,” Center for American Progress (March 2015).
Mike: Hello, this is your host, Mike Petrilli of the Thomas B. Fordham Institute here at The Education Gadfly Show and online at edexcellence.net. Now please join me in welcoming my co-host, the Kentucky Wildcat of education policy, Paige Kowalski!
Paige: Kentucky Wildcat?
Mike: That's their name, right? Kentucky Wildcats?
Paige: Sure. Who's they?
Mike: Okay, yeah.
Paige: To clarify, I'm from California.
Mike: All right. See, we always do a pop culture thing, a sports thing. Kentucky is not only a number one seed ... Now I'm talking about the men's basketball team here. Not only the number one seed for the March Madness Tournament, but considered so dominant that it's basically Kentucky against the entire field in terms of the odds. They are undefeated, which almost never happens. I guess they're not only the tallest team in men's college basketball; they're actually taller than all but one NBA team.
Paige: Wow!
Mike: Yeah, so a bunch of tall guys.
Paige: I haven't seen the bracket yet.
Mike: You are dominant, okay. You have so little competition out there in education reform-land.
Paige: Yeah.
Mike: You're tall.
Paige: Oh, I am tall.
Mike: So, see? There you go.
Paige: I am tall, just like a Kentucky Wildcat apparently.
Mike: Exactly. Let me introduce you properly. Paige Kowalski is the Vice President for Policy and Advocacy at the Data Quality Campaign, known to those of us inside the Beltway as DQC and doing amazing work all over the country, working at the State level especially. It seems like states all over the place are passing bills, passing laws to protect kids' privacy.
Paige: Yes, states all over the place are passing those laws. Nothing has passed yet this year. Last year we had 21 states pass something so it remains to be seen where we end up after this session.
Mike: A lot of activity, clearly something that is a hot issue. Hey, we got a lot to talk about. We'll get into some data issues. We have some data questions today just for you, Paige, but they are timely as well. Let's get started. Ellen, let's play Pardon the Gadfly.
Ellen: Some education reformers and media outlets are now calling schools failing if they don't get most of their students to the new, higher Common Core Standards. Is that fair?
Mike: So Paige, I've got a new piece in this week's Education Gadfly that criticizes the New York Daily News as well as Families for Excellent Schools up in New York, Eva Moskowitz, because they are using the results from the new Common Core tests in New York, which we now know from Education Next were, as of 2013, the hardest tests in the country, basically set at the same level as NAEP. They're using those tests to declare schools failing because they don't get very many kids to the proficient level in reading and math.
This gets us back into this never-ending debate about why proficiency rates are such bad measures of school performance especially when the proficiency bar is set really high. Do you agree with me or do you agree with those other people?
Paige: That's a loaded question, Mike. First, I'm shocked to find out you are releasing something that is critical of anything. I think first of all, what DQC would advocate is it should be growth data and not proficiency data.
Mike: Boom!
Paige: We want to see these longitudinal data systems that we've all invested so heavily in, both financially and politically, really used for kids and to get better information out about how kids are doing. I think the failing label rubs everybody wrong. I think it gets advocates fired up, but I think that was part of the confusion around No Child Left Behind and why parents never embraced that label; it never made sense to them based on a single test score.
I enrolled my kids in a school that never met and never had a hope of meeting AYP and since my school has switched to growth and a different model under a waiver, they're now at the top of the food chain in the accountability ratings.
I think at the end of the day, my two words are multiple measures. I think reading and math scores are going to be critical because if you can't read, then I don't know what the business of school is. We can learn that on Sesame Street so if schools can't get as good as Sesame Street can do then we have a problem.
Mike: Look, let me be clear, I don't personally have a problem with calling schools failing schools. I think sometimes that is legitimate. I just want to make sure that we identify them appropriately and to me, a failing school is one that both has low proficiency rates, low passing rates and isn't making any progress over time.
When I've looked at the data, at least say in Ohio where we do a lot of on-the-ground work, I'd say about 25% of the schools that look like they're low performing based on proficiency actually look like they're doing a heck of a job on growth ... 25%, that still means 75% of the schools really do seem to be brain-dead, these low performing schools, most of which are high poverty and probably should be put out of their misery, in my opinion.
It's not the failing that bothers me. It's that they call these schools failing because they've only got a handful of kids reaching these new higher standards. Look, you're a middle school. You're a high school. Kids are coming in four grade levels behind. You could be working miracles and still have zero percent of your kids passing the test. Sometimes, Paige, I just think people in education reform are bad at math.
Paige: I think that's fair. I don't know that you're going to make great strides out there with this case. I think it's a fair statement to make and I think, again, you're just making the case for multiple measures particularly at middle and high school. I think people need more information. I think people understand in an intuitive common sense way that these kids are coming in and they may be behind so what else can we look at to see if this school is doing these kids a service.
Mike: I'm down with the multiple measures. Look, the thing is this: a lot of educators were afraid that we were going to raise the standards, make the tests harder, just to make them look bad. I feel like Eva and all, they're following in that playbook. It's particularly because they're using these scores from the harder test in New York and not providing any context.
Paige: Right.
Mike: When the standards were minimal, I think it was fair to say, "Hey, any school, you should at least be able to get your kids to minimal levels of literacy and numeracy." If you can't do that ... Come on. It doesn't work anymore. You can't use the tests in the same way anymore because they are not measuring minimum levels. They're set at a much higher level, arguably being set at the 70th percentile in a place like New York, and so by definition, at least when we start this process, lots of kids aren't going to pass those tests.
Paige: Okay, you say it's a minimal level, but it's the level we've identified that kids need to graduate at to go on to college or a job.
Mike: Right. I'm saying it's not a minimal level. It's a very high standard.
Paige: It is a high standard. It's higher, but it's the standard that we've all agreed on and set and said this is what an 18 year old needs to walk out of K-12 with. What I would argue for is what DQC has long advocated: richer data sets around kids. If we can actually link our K-12 records and our higher ed records and our workforce records and actually understand, hey, for these high schools where maybe only 40% of the kids scored what they need to score, it's not a failing school because 90% of the kids actually went on to a successful career or the military or post secondary and then went on to do this.
What is it about those kids who are not graduating college- and career-ready? I may have been one, and I still went to college and graduated, and here I am with you today, which is a measure of success in and of itself.
Mike: That ... So absolutely.
Paige: If we have this data, can we make better decisions about what to do and what's working in these high schools?
Mike: I love it, okay. Question number two ...
Ellen: The administration is advocating for new legislation around data privacy. Is it needed and does it go far enough?
Mike: What is it? What is it about? Tell us, Paige.
Paige: What's it about? That's a big question. First I'll start with saying that, in response to and in recognition of these legitimate concerns around student privacy, last week at South by Southwest, DQC and CoSN, the Consortium on School Networking, released a set of ten student data principles that lay out what the education community believes about data use and the importance of safeguarding it.
Fordham is a signatory on that along with 32 other organizations to really represent what the education community believes about this. We're hearing the Federal government and States and locals talk about what should we do? How do we do this? We thought it was important to start all of these conversations with a consensus framework for those policies because there's a lot of fears that we may overreach in policy particularly at the federal level and that will stifle innovation, stifle ...
Mike: But the federal piece is mostly about making sure that for companies that collect private information about kids, it clarifies what they're allowed to collect and how they have to make sure that that does not get breached. Is that fair to say?
Paige: That's part of it. That's the Student Digital Privacy Act that the administration is talking about, and we could see a draft coming out sometime soon, but there is also talk of what should we do with FERPA and does COPPA need to be updated. COPPA only covers kids up to 13. FERPA ...
Mike: You just lost all of us on those but that's all right.
Paige: FERPA, that's our federal privacy law that covers how education agencies like schools and districts can and can't share personally identifiable student information.
Mike: Yeah. Are any of these efforts going to hem in the Federal government itself? The administration has been very vocal about how private companies might be abusing the data. There are many people on the right who are worried that the Federal government could be abusing this data, that they, for example, are asking sensitive questions through the Office for Civil Rights, and that there needs to be more protection from the government.
Paige: Right, the Federal government doesn't actually have any individual student personal information.
Mike: Right.
Paige: They can't collect it due to several laws. Absolutely, there's a role for the Federal government to play to make sure that they're holding themselves accountable and that they're transparent for what they are and aren't collecting and what they can and can't do with it.
Mike: All right. Very good. Question number three ...
Ellen: Virginia is the latest State to debate whether to make teacher evaluation data public. Should that information be out there?
Mike: So, this is such a hard question, Paige. We have been tying ourselves in pretzels over this for the last few years. What's your take on this? I guess this is what the principal does, an evaluation of the teachers. It may include some student achievement data as a part of it. The question is whether the media, the press, has a right to know the ratings of individual teachers. We've gone through this in other states. In some cases, usually because of lawsuits, the media have ended up getting the data and getting the records. You can then look in the newspaper and find out if your kid's teacher is effective or not. Good idea? Bad idea?
Paige: The specific case in Virginia really centers around a parent advocating that parents should have that information about their child's teacher. It doesn't really get into the media aspect of it.
Mike: Okay.
Paige: In another sense, it brings up an interesting question because in the past, it has been about the media and what they should have the right to. In this case, we think it's important. We need to have a better conversation about how to balance the rights of parents to have the information they need to ensure that they can be good advocates for their children with the rights that teachers have around privacy of their personnel information.
Mike: So the answer is ...
Paige: The answer is we need to have that conversation because we haven't had it yet ... There is no right answer.
Mike: Paige, now come on!
Paige: Yeah, right now, right now we live in world where parents have no information about teachers.
Mike: In Virginia, do you think the parent has a case that they should tell her the evaluation score of her child's teacher?
Paige: I don't think that it will be helpful to get a value-added measure, which I believe is what this case is about, because I don't believe they've actually evaluated teachers, but they do have growth data and VAM scores. It's a single data point, much in the same way that when I'm buying a car, I look up what the safest rating out there is. We don't all own Volvos.
Mike: Yup.
Paige: So why would you give a parent a VAM score? What are they going to do with it? What are they going to ... They can't make sense of that. It's a single data point and we should never be making any decision off a single data point.
Mike: That seems right to me. I certainly don't think these things should be printed in the paper, certainly not the VAM scores, and not even the final evaluations.
Paige: Absolutely.
Mike: Look, in most professions, in most lines of work, this is something that is between the employee and his or her manager. I think that is the appropriate place here. I understand as a parent wanting to know and of course, as parents, we try to figure this out from our friends and colleagues and you get whatever information that you can.
It feels like it gets to be more inside the management of the school than is healthy. You could imagine ... You just can imagine this could be really bad for morale and also lead to some inequitable results if you get the pushy, connected parents then using this to get the best teachers and leaving the bad ones for the other kids.
Paige: Absolutely, can absolutely exacerbate current inequities and how teachers are distributed as the public starts to find out, "Oh, look, all the best teachers are over there. Let's go get them and put pressure on those principals." Again, I think it goes back to what's actionable and what can a parent really do with that information. The system will always have teachers that are less effective.
I think one solution, while having this balanced conversation, is to at least, at the bare minimum, get aggregate data out by school on teacher effectiveness. Again, most places don't even have this data to put out yet. Also, getting parents better information about their own kids so that they can push and say, "Look, my child has been excelling four years straight and suddenly now they're dropping. What is the system going to do for my kid? Their teacher doesn't appear to be very good."
Mike: I like the aggregate data. I think we should make it very understandable for parents like 20% of teachers in this school totally kick ass, 40% are okay, and then 20% totally suck.
Paige: I like the kick ass and the totally suck rating.
Mike: Yeah, I think that's important. Hey, by the way, I have an idea for April Fool's Day, guys. I'm curious, how do home-schoolers evaluate their teachers? It think this is something we should look into that.
Okay, that's all the time we have for Pardon the Gadfly. Now it's time for everyone's favorite and I know Paige's favorite, Amber's Research Minute.
Amber, welcome back to the show.
Amber: Thank you, Mike.
Mike: Have you filled out your bracket yet?
Amber: I have not but it's on my to-do list. I'm definitely doing it.
Mike: Kentucky? You going to go for Kentucky?
Amber: I didn't come in last place last year which was great.
Mike: Hey, nice. Did I? I might have?
Amber: I did mostly a guessing game.
Mike: Yeah.
Amber: Yeah.
Mike: I go back and forth. I was like, "should I just fill out the ones I want to win?"
Amber: Yeah. I have no rhyme or reason. If you go with what the experts say, then you feel like you're kind of cheating, right?
Mike: You had that as cheat ...
Amber: You've got to go out on a limb.
Mike: Well, you can't just go out with all the top seeds.
Amber: I know, I know.
Mike: That's no fun.
Amber: That's no fun.
Mike: All right. What you got for us? Speaking of fun, this is going to be fun.
Amber: Fun ...
Mike: Because we are going to get to debunk a study put out by some friends of ours.
Amber: We are. I know, I know. They're still our friends.
Mike: Really?
Amber: Study out from the Center for American Progress called Comparable but Unequal: School Funding Disparities. All right ... The study examines the comparability requirement in the ESEA, which requires that school districts provide comparable education services in high-poverty and low-poverty (non-Title I) schools as a condition of receiving Title I dollars. Okay, just to get some of the definitions out of the way, right?
CAP's concern is that, though this requirement is intended to level the resource playing field between advantaged and disadvantaged schools, it actually allows districts to use teacher-to-student ratios or average teacher salaries as a proxy for comparable services, instead of using actual expenditures on teachers' salaries.
You're going to have to unravel some of this stuff.
Mike: Basically, the idea is the federal government wants its funds to be extra, on top. Ideally, in a district, all the schools get about comparable resources, and then the federal money comes in on top of it so that the poor kids end up getting something extra.
Amber: Right.
Mike: Right. That's what this comparability requirement is about.
Amber: CAP reasons that since poor schools typically have newer teachers, and those teachers tend to struggle in their first few years, they're not only getting less qualified teachers but also less money than more advantaged schools, since new teachers cost less to employ in the first place.
They use Office for Civil Rights data for 2011-2012 on district spending at roughly 95,000 public schools. They compare how districts fund the schools that are eligible to receive Title I with other schools in their grade span. The bottom line: they find vast disparities in how districts spend these dollars, okay? They adjust school spending for cost-of-living differences.
Three key findings: 1) Due to the loophole in the law, more than 4.5 million low-income students attend inequitably funded Title I schools. 2) Those schools receive around $1,200 less per student than comparison schools in their districts. 3) If the loophole were closed, high-poverty schools would receive around $8.5 billion in new funds every year.
Bottom line problem, I don't think the data can be trusted which is kind of a big deal. I think that the spending picture is actually more complicated than meets the eye when you really kind of dig into this.
On the first point, OCR data are self-reported by districts, and they're not verified. There are other federal databases, like the CCD data, that actually are verified. CAP says in a footnote that it's probable that districts may have filled out these forms using slightly different analytic approaches, which I think is a vast understatement.
Mike: Right. Like including a wild-ass guess ...
Amber: Yes.
Mike: Was that the methodology they might have used?
Amber: They can't cross-reference the Civil Rights Data Collection with other school finance data sets, so in this footnote they actually admit, "Okay, this is problematic." Districts can be all over the map in how they report these data, which is a problem. We know, just anecdotally and through other studies, that districts tend to budget not using real dollars but by allocating people and services. You have this many people, you have this many programs ... Okay? That's a problem.
I ended up saying, "Okay, the better method, which is the one we used here at Fordham on a little D.C. study we did, is to use audited actual expenditures at the school level." Ensure that they use actual, not average, salaries; you kind of have to make sure about that. You can go through FOIA. It's a big pain in the rumpus. We all know that, but you can get each district school employee's title and salary. It's just harder to do.
On the second point, and this is where we get more into the weeds with a couple more points: we can't assume that disadvantaged districts have no role in how teachers are compensated and that all of them are forced to hire these cheap, new teachers just because they have hard working conditions. In other words, many personnel costs, and we go into this a little bit in the weeds in our own report, are the result of choices made by district leaders: they set the salary schedules in many cases, and the class sizes, the maximum class sizes.
Some of this is district decision, and they also sometimes require that schools have a certain number of support staff. In other words, all of this stuff isn't as easy as we think it is. Salaries in some cities are actually higher because you've got these strong labor unions, which can then better negotiate for higher pay. Anyway, we go into all the different reasons that I think these things are happening, not just that, oh well, districts are powerless over them.
I think the bottom line is we agree with the Center for American Progress that we need better data at the school level but I'm just not sure that that's what they got with OCR.
Mike: This is the main concern I have, the data concern. Does the typical district out there have any idea how to figure out how much money they're actually giving to each of their schools? Our experience is that they don't have any idea how much they're giving, because they don't budget that way. The question is, because of this OCR survey, did they diligently go through and figure this out in a smart way to get to the right answer? I'm not sure that that's what happened.
People look at this and probably believe it because we all say, "Well, it's believable." We all believe it to be true that poor schools are getting less money than rich schools within districts. Marguerite Roza has found that to be the case in some places. We found in Washington, D.C. and most of the districts around here that that was not the case.
Amber: Right.
Mike: That both the D.C. public schools in Montgomery County, Fairfax, that they were all quite progressive in trying to push extra resources into the needy schools. I just want the better data, right? I understand the demand for saying we want the comparable resources. I think the right thing to do in the next re-authorization is to require the better data collection and to spell out what that means and provide some capacity to be able to do it and it's hard, so that we have data that we can actually trust and find out. Find out which districts really are spending inequitably and which ones are not. I think that the findings would look different than what we've got her under OCR.
Amber: The fact that it's coming out of OCR, and we've said this for a long time, you know, it just feels like you don't know what the ulterior motive is here.
Mike: We know exactly what it is. They put out guidance in the fall saying they're going to go after districts for spending inequitably.
Amber: All right.
Mike: They're trying to create new federal guarantees of equal spending that don't actually exist. Look, Paige, you guys have tracked how collecting this kind of school-level financial information seems to be a growing interest on the data front. We've come a long way on student achievement and some other student outcome indicators. Now people want to know more about school spending. It's hard, right?
Paige: It is hard, and it's not just the financials. It's resources in general. It's unclear how resources are allocated, even separate from money. One of the things that DQC called for in our ESEA recommendations was better public reporting of school spending and financials by district, because basically, as you found, it's just hard to know; the data are just not out there. How do you draw any conclusions, and what conclusions are we leaving on the table because we just don't have good data?
Mike: When you say resources, you mean even things like professional development, right?
Paige: Right.
Mike: Some district superintendent spends a lot of time in a particular school or ...
Paige: Right, even if it's an outside resource coming in or a service that's being offered, if you're sending teachers to professional development, that's time.
Mike: Yeah.
Amber: It's just really unclear and it's never tied to performance and so it's unclear what the impact is and how we should alter those resources and financials to get to the desired impact.
Mike: Okay, very good. Hey, in the end, that was pretty gentle. I mean, we're basically just giving CAP mostly a hard time for using the OCR data which granted are the only data out there for this sort of thing. We just don't actually believe them.
Amber: Yes, don't believe them, right.
Mike: Yes, so besides that ...
Amber: Better not to report at all if you can't trust the data, right? That's tough.
Paige: I would say they've started an important conversation, because if the finding at the end of the day gins up this concern that we don't have the right data, and you said, at the end of the day, let's collect better data, then I can get on board with that.
Mike: That's music to Paige's ears.
Paige: Exactly.
Mike: All right. Well, thank you Amber. Thank you Paige. That is all the time we've got for this week. Until next week ...
Paige: I'm Paige Kowalski.
Mike: I'm Mike Petrilli at the Thomas B. Fordham Institute signing off.
This new study by the Center for American Progress (CAP) examines the ESEA comparability requirement, which mandates that school districts provide “comparable” educational services in both high- and low-poverty schools as a condition of receiving Title I dollars. CAP’s concern is that, although this requirement is intended to level the playing field for schools, it actually allows districts to use teacher-to-student ratios or average teacher salaries as a proxy for comparable services, instead of using actual teacher salary expenditures. And because poor schools typically have newer teachers who tend to struggle their first few years and cost less to employ, these schools are getting both less qualified teachers and less money than more advantaged ones.
The analysts examine Office for Civil Rights district spending data for the 2011–12 school year from roughly ninety-five thousand public schools. Adjusting for cost-of-living differences across districts, they compare how districts fund schools that are eligible to receive federal Title I dollars with other schools in their grade span and find “vast disparities” in the allocation of state and local dollars.
Here are the three key findings: one, due to the “loophole” in federal law, more than 4.5 million low-income students attend inequitably funded Title I schools; two, those schools receive around $1,200 less per student than comparison schools in their districts; and three, if the federal loophole were closed, high-poverty schools would receive around $8.5 billion in additional funds each year.
There is, however, a large and insurmountable problem: The data are likely untrustworthy. OCR data are self-reported by districts and aren’t systematically verified. CAP itself admits in a footnote that “it is probable that districts may have filled out these forms using slightly different analytic approaches,” and adds that it was not able to “cross-reference the Civil Rights Data Collection with other school finance datasets.” This means that districts could be all over the map in how they report these data, especially because we know that they tend to budget not using real dollars, but by allocating people and services. A better method would be to use audited expenditures at the school level, use actual rather than average salaries, and file FOIA requests for each employee's title and salary if necessary. This is what we at Fordham did for our Metro D.C. School Spending Explorer, which found most districts (at least in the Washington, D.C. area) to be surprisingly equitable—and even progressive—in their spending.
All of that to say that, yes, in order to sort this out, we need reliable accounting of all expenditures by school level for sundry reasons—not only for comparability purposes. But it is highly doubtful that that’s what we’re getting from OCR.
SOURCE: Robert Hanna, Max Marchitello, and Catherine Brown, “Comparable but Unequal: School Funding Disparities,” Center for American Progress (March 2015).
Teach For America, its coffers fattened with $50 million in federal i3 scale-up grant money, embarked upon a major expansion effort in 2010. It aimed to place 13,500 first- and second-year teachers in fifty-two regions across the country by the 2014–2015 school year—an ambitious 80 percent expansion of its teaching corps in just four years. As part of the deal, TFA contracted with Mathematica Policy Research to evaluate the expansion.
A handful of previous studies have found that TFA teachers have been more effective than conventionally trained and hired teachers in math and about the same in reading. The big question was whether putting its growth on steroids would compromise TFA’s recruitment and selection standards or overall effectiveness.
Mathematica found little reason to be concerned about TFA losing a step. The elementary school teachers recruited in the first and second years of the i3 scale-up were “as effective as other teachers in the same high-poverty schools in teaching both reading and math.” Corps members in lower elementary grades “had a positive, statistically significant effect on student reading achievement,” but no measurable impacts for other subgroups of TFA teachers were found. Of interest (mostly to TFA itself), the study found “some evidence that corps members’ satisfaction with the program declined”—perhaps a hint of growing pains.
Love ’em or hate ’em, Teach For America remains the closest thing education reform has to a household brand. Matt DiCarlo of the Shanker Institute summarized reaction to the findings well, calling it “one of those half disturbing, half amusing instances in which the results seemed to confirm the pre-existing beliefs of the beholder.” Just so. If you’re one of TFA’s legion of detractors—who already accuse the organization of being full of smartypantses with fancy degrees from prestigious colleges and a few weeks of training—you can say that the TFAers offered no improvement over comparison teachers with an average of fourteen years’ experience. If you’re a fan of TFA, your response is to ask where the value lies in education school, traditional certification, and all that experience if it can be matched by smartypants TFAers with fancy degrees from prestigious colleges and a few weeks of training. Another possibility, beyond the scope of Mathematica’s work and too depressing to consider, is that the outcomes in high-poverty schools are so poor that it doesn’t take that much to get up to speed and produce the same sad, desultory results.
Mathematica’s researchers came neither to bury nor praise TFA. The study “provides a snapshot of TFA’s effectiveness at the elementary school level in the second year of the i3 scale-up,” they conclude. TFA’s effectiveness “could either increase or decrease as the program expands further and adapts to its new, larger scale.” Opinions about the program, meanwhile, are unlikely to change.
SOURCE: Melissa A. Clark et al., “Impacts of the Teach For America Investing in Innovation Scale-Up,” Mathematica Policy Research (March 2015).
This clever volume offers a collection of essays about how to improve teacher education. Each of the authors writes about the findings of the Choosing to Teach longitudinal study, which involved thirty randomly chosen teachers, ten each from three non-traditional teacher-prep programs: the Urban Teacher Education Program (UTEP) at the University of Chicago, the Alliance for Catholic Education (ACE) at the University of Notre Dame, and the (Jewish) Day School Leadership through Teaching (DeLeT) program at Brandeis University. Following enrollees from the time they entered their respective programs through four years in the classroom, it focuses on how teacher prep programs tailored to participants’ backgrounds and aspirations can improve classroom practice and keep teachers in the classroom longer. For example, teaching in an inner-city Catholic school presents a different set of challenges than teaching at an affluent suburban school, and a given teacher’s interests and effectiveness will differ in each.
With this in mind, all three programs provide immersive and contextualized learning opportunities through what the authors call “nested contexts of teaching.” Training looks beyond the classroom to provide a more comprehensive view of the role future teachers will play. ACE, for example, focuses not just on the classroom, but on the larger school, its community, and the Catholic Church. This multi-layered approach ostensibly provides teachers with tools unique to their local environments. That, in turn, “increased their sense of agency as teachers and reinforced the rightness of their career choice.” By the end of their first year of teaching, twenty-two of the thirty participants (nine from UTEP, nine from DeLeT, and four from ACE) reported that they intended to teach for five or more years, despite unsupportive school conditions in some cases.
Too many preparation programs follow a one-size-fits-all model. Inspiring Teaching offers an alternative paradigm—teacher education that prepares educators to serve particular groups of students or specific kinds of schools. Without such programs, “we risk losing the gifts of bright, socially committed teachers from teaching in areas of great need or from teaching completely.”
SOURCE: Inspiring Teaching: Preparing Teachers to Succeed in Mission-Driven Schools, ed. Sharon Feiman-Nemser, Eran Tamir, and Karen Hammerness (Cambridge: Harvard Education Press, 2014).