Policy Brief: Pathways to Teaching in Ohio
A thorough overview of how teachers are trained and licensed
Have you ever wondered what it takes to become a teacher in the Buckeye State? Wonder no more! In this policy brief, we outline the entire process—from acceptance into a preparation program all the way to advanced licensure. We even take a look at alternative pathways and out-of-state educators.
As the year winds down, the Fordham Ohio team reflects on a landmark year in the Buckeye State. At times, this year has felt long and arduous, an uphill climb that could prompt even the driest among us to want to spike the holiday eggnog. However, the state’s struggles haven’t been for naught; this year boasts some successes that would put a smile on the face of the Grinchiest of the Grinches. Here’s our take on the three worst and best events in Ohio’s education space in 2015.
Worst #1: Authorizer evaluation fiasco and its aftermath
Legislation in 2012 installed meaningful authorizer performance reviews. After three years of piloting and developing the evaluations, the Ohio Department of Education (ODE) finally launched them and announced the first round of ratings (including the Thomas B. Fordham Foundation’s exemplary marks). The ratings lasted all of four months. It was discovered that ODE’s school choice director had tossed out scores from online schools as part of authorizers’ academic ratings. That move was illegal, cost him his job, and resulted in all ratings being rescinded. The evaluation was sent back to the drawing board; Fordham weighed in on its hope for the evaluation to be rigorous but realistic.
The fallout has been immense. Among other things, the scandal is partly to blame for the delay in federal Charter School Program dollars—a devastating blow for top-performing charter schools that hoped to expand and replicate in 2016 (many of which have used CSP funds successfully in the past). This is unfortunate, given that the department had previously stepped up its charter school oversight considerably. Even State Auditor Dave Yost told the Dispatch that under HB 2 (see below), Ohio’s charter regulations were sufficient to ensure that the federal grant “can be spent properly and wisely.” The paper’s editorial board echoed his sentiments late last week: “Here’s hoping recent positive signs and leadership appeals can sway the feds to release the money, and that it will indeed be spent wisely.”
Best #1: Passage of landmark charter reform legislation (HB 2)
On October 7, 2015, Ohio lawmakers passed a sweeping overhaul of charter school law, and the state’s charter sector finally embarked on the “road to redemption.” It’s hard to oversell this one as the best event of the year.
Fordham has a long history of advocating for many of the reforms HB 2 ushered in, but if you asked us last year at this time, we would have been hard-pressed to predict just how bold and far-reaching the long-awaited reforms would be. Throughout the year, the road twisted and turned; there were setbacks, questions, and delays. But in the end, the Ohio General Assembly approved and Governor Kasich signed a strong bill that focuses on good governance, accountability, and quality. The reforms hold the promise of dramatically improving educational outcomes for the 120,000 students who attend some four hundred Ohio charter schools. Everyone involved deserves kudos for sticking it out during negotiations, rewrites, amendments, and committees. This particular success indeed has many parents.
As the provisions of HB 2 come to life, the devil will be in the details. Faithful implementation by all players is of paramount importance if the promise of charter reform is to be realized. We look forward to that implementation beginning in 2016.
Worst #2: Watering down of accountability
In the accountability realm, the legislature was active this year. So were we. We typed until our fingers nearly fell off, urging Ohio to stay the course on accountability, remain committed to the Common Core, leave rigorous assessments alone, resist the urge to exempt at-risk kids so that schools might look better on report cards, and not give in to the increasing level of vitriol over standardized testing. We argued that parents and taxpayers deserve to know the truth about whether students are ready for college and beyond. The level of misinformation about Common Core and PARCC assessments was staggering at times. The opt-out movement gathered steam in spite of the well-known detrimental effects on schools and students. Whew!
The consequence of this general drumbeat against accountability? For one, the death of the PARCC exams in Ohio. Worse, the state set itself up to seriously stretch the truth about student achievement by using too-low cut scores for student proficiency that overstate the proportion of students who are on track. Ohio needs to battle against the proficiency illusion, not hide behind it. We should be ashamed that we stand out among peer states as one of the few not rising to that challenge.
And even basic student growth measures are under attack as the year ends, with Buckeye charter school groups pursuing a “similar-students” academic measure to complement or possibly even supplant Ohio’s current value-added framework. Ohio has nothing to gain by changing horses in midstream.
Best #2: Fewer federal mandates and a uniquely Ohio path
The reauthorization of ESEA (a.k.a. Every Student Succeeds Act) presents Ohio with an opportunity to go back to the drawing board, rethink its accountability metrics, and design a system that tosses out federal requirements (think: Annual Measurable Objectives) and replaces them with more meaningful measures. It could also result in schools and districts returning to more well-rounded curricula and better serving high-achieving students.
After dumping PARCC—which was, in our opinion, a rigorous test that held tremendous promise (despite its logistical problems)—Ohio moved to develop its own tests with the American Institutes for Research (AIR). The new assessments will be shorter, administered less often, and contain items approved by Ohio educators. While less time spent on testing is a good thing, the quality of Ohio’s new assessments will determine whether this is a “best” or “worst” move. At minimum, however, the Buckeye State remains committed to Common Core standards and will create exams designed to align with those standards. We hope this will restore faith among educators that annual testing is worthwhile.
In addition, teacher evaluations are no longer mandated by the feds. Ohio has an opportunity to make them more meaningful while reducing the overall testing load (though we don’t recommend tossing them altogether).
Worst #3: Defense of the status quo
This is a broad category with a number of dubious examples: folks defending the Survivor-like Cincinnati magnet school campouts, school districts “billing” the state for funds that legally follow students to charter schools, and complaints by districts about inter-district open enrollment based mainly on money rather than student success.
But one story rises above all others to claim the ignominious title of “worst defense of the status quo,” and that is the sad saga of Youngstown City Schools. For months, editors of the Vindicator (the city’s venerable newspaper) begged the state to intervene and save the children in the district from a school system that had failed them for decades. The editors went so far as to extract a promise of help from Governor Kasich when he sat down with them in September 2014. All this despite the fact that the district had been under the control of an Academic Distress Commission (ADC)—the state’s highest level of intervention for failing districts—for nearly five years with nary a blip of improvement. What happened when the governor and the legislature put this promise into action (in the form of amendments to HB 70 that significantly sharpened the ADC rules in Ohio)? We refer the gentle reader to the title of this section for the answer.
Community groups sprang up to oppose the imposition of the new Academic Distress Commission; state lawmakers initiated a flurry of activities and legislation designed to override the strongest impacts of the so-called “Youngstown Plan.” Legal action was threatened and initiated on topics concerning the legislative process, the timing of various aspects of the plan, the impaneling of a committee appointee, and even the holding of meetings.
As of this publication, everything is on hold again because the school board’s appointee did not meet the court-mandated definition of “teacher” and the board president plans an appeal. This is proof positive that adult interests are still at the forefront of business “the Youngstown way.”
The defense of the status quo must end in our state in 2016. The clear losers are thousands of school-age children and their families; the “winners” are calcified adult interests (personal reputation, money, and fiefdoms). We at Fordham will continue to urge schools and districts to investigate new ways of bringing high-quality education to all students.
Best #3: Cleveland: The beginnings of a turnaround success story
Cleveland—like many urban districts—has struggled academically. But the city has stood out from the pack this year in ways worth noting. It posted gains on the 2015 NAEP exams, bucking the national downward trend for cities and serving as one of Ohio’s only bright spots on the most recent round of testing. It was named among the best cities for charter-district collaboration by Fordham in a national report, and its designation as one of seven “compact” cities by the Gates Foundation late last year signals broader faith in local education reform efforts.
Though not without its warts, Cleveland has laid out a strong reform plan and has many of the right variables in place for long-term success: buy-in from and engagement with parents and the community, a strong public-private partnership organization (the Cleveland Transformation Alliance), and bold district leadership. Taking a cue from Cleveland, Cincinnati and Columbus have created local public-private partnerships of their own.
Outside of education, Cleveland has earned top spots as a city to build wealth, find good eats, be a baseball fan, drink microbrews, and visit as a tourist. Some would say that Cleveland’s showing in the listicle craze is pretty meaningless; but given that human capital is arguably the most important component of schools’ success, being perceived as a cool place to work and live can go a long way toward recruiting and retaining talented teachers and principals.
***
On balance, education policy in Ohio crept forward in 2015. Important course corrections were undertaken in the charter school realm, and recent reforms like Common Core and the Cleveland Plan continue to be implemented at scale. But all was not rosy—a scandal led to loss of public goodwill toward charter schools, and the testing tumult led to another switch in the state assessments. Early indications are that 2016 will be no less challenging. It’s critical that rigorous accountability systems be maintained, even in the face of pressure to weaken them. School funding must be fixed. Teacher policies and practices need a reboot. When the punch bowl is empty and we’re looking clear-eyed at 2016, it’s our sincere hope that we’ll see more of the best and fewer of the worst types of events that defined 2015.
The Ohio Coalition for Quality Education (OCQE) has hit the airwaves in an effort to change the state’s accountability policies. The group claims that Ohio doesn’t take into account differences in student demographics across schools—and is thus unfair to schools educating at-risk pupils. Along with the Electronic Classroom of Tomorrow (ECOT), they are promoting the adoption of a new accountability measure that they believe will solve the problem.
The trouble with their argument is that Ohio policymakers have already implemented a robust measure—value added—that takes into account student demographics. Given what these groups are lobbying for, it is important to review the basics of student achievement, demographics, and school accountability, including value-added measures.
Let’s first keep in mind that the concerns about student demographics and educational outcomes are hardly new. For decades, analysts have recognized the link between demographics and achievement. The famous “Coleman report” from 1966 was among the first studies to show empirically the massive achievement gap between minority and white students. Gaps by race or income status remain clearly evident in today’s NAEP and state-level test data.
These stark results, of course, call into question the use of pure achievement measures (e.g., proficiency rates) for school accountability purposes. As we pointed out in a recent article that we co-authored with the California Charter Schools Association, using achievement measures alone would constitute poor policy making. That’s because it essentially penalizes schools for having higher percentages of disadvantaged students, who almost invariably perform worse. (Breaking that connection between demography and destiny is, of course, what much of the education reform movement is about. Alas, relatively few schools have managed to do it thus far.)
Recognizing that schools deserve credit for the growth students make over time (regardless of their starting points), many states have expanded their accountability systems by adopting a “student growth” or “value-added” measure. (The brand-new federal law, which replaces No Child Left Behind, strongly encourages such an approach.) Fordham has long been a proponent of growth measures such as these: In a joint publication in 2006, we recommended that Ohio fully implement a value-added accountability measure for all public schools, including charters. We published a “primer” on the state’s value-added system in 2008 and have used schools’ value-added ratings in our annual analyses of report cards.
So what is the purpose of value added? How does it work, and how does it address concerns about demographics?
Value added gets at the central issue of school effectiveness. The question at the heart of any school evaluation ought to be: Is the school having a positive impact on student learning? Aside from randomized experiments (which aren’t feasible at scale), value-added methodologies remain the most rigorous empirical method for gauging the effectiveness of a school. Their statistical methods aim to isolate the unique contribution of a school: As the Council of Chief State School Officers states,
The main purpose of value-added measures is to separate the effects of nonschool-related factors (such as family, peer, and individual influence) from a school’s performance at any point in time so that student performance can be attributed appropriately.
Value added is premised upon individual student learning gains tracked over time. Statistical analyses leverage student-level data to measure each student’s achievement gains against his or her own prior test results.[1] By using students’ prior achievement—as Ohio’s value-added method does—the effect of student demographics is almost entirely accounted for, even without explicitly using demographic variables as controls (e.g., race or ethnicity, income status, and so forth). In other words, the consistent effect of demographics throughout a child’s learning experience is baked in, via his or her own past test results.
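To make that logic concrete, here is a minimal sketch with synthetic data. It only illustrates the core idea—predicting each student’s score from his or her prior score and crediting the school for growth beyond that prediction—and is not Ohio’s actual model, which is a far more elaborate analysis run by SAS (see the footnote):

```python
# Minimal sketch of a value-added-style estimate using synthetic data.
# Ohio's actual model (run by SAS) is far more elaborate; this only
# illustrates the core idea: condition on each student's prior score.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_schools = 1000, 10
school = rng.integers(0, n_schools, size=n_students)   # school assignment
prior = rng.normal(50, 10, size=n_students)            # last year's score
school_effect = rng.normal(0, 2, size=n_schools)       # unobserved school impact
current = 0.9 * prior + 5 + school_effect[school] + rng.normal(0, 5, size=n_students)

# Predict this year's score from last year's score (simple OLS fit).
slope, intercept = np.polyfit(prior, current, 1)
residual = current - (slope * prior + intercept)

# A school's "value added" is its students' average growth beyond prediction.
for s in range(n_schools):
    print(f"school {s}: estimated value added = {residual[school == s].mean():+.2f}")
```

Note that demographics never appear as variables in the sketch; because each student’s own prior score carries their history, the estimate credits growth rather than starting points.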
Value added sets a standard expectation of growth for all schools. Because value added is premised on student growth, state policymakers can set a common growth standard for all schools with value-added data. As defined in Ohio law, the expectation for schools is that their students make a year’s worth of growth in a year’s time. If a school has mainly high-achieving students, it is expected to contribute a year’s worth of learning; the same goes for a school with primarily disadvantaged students. Critically, this ensures a consistent standard across schools—and as such, value added isn’t lowering the growth expectations for our most disadvantaged children. The technical documentation on Ohio’s value-added measure makes the following point: “Through this approach, Ohio avoids the problem of building a system that creates differential expectations for groups of students based on their backgrounds.”
Value added doesn’t correlate with demographics. Almost no correlation exists between economic disadvantage and value-added results. Chart 1 shows the link between value-added index scores in Ohio and economic disadvantage at the school level. The correlation is very weak (-0.19), indicating that schools with high percentages of low-income students are not systematically receiving lower value-added ratings. In other words, unlike under pure achievement measures, low-income schools are not systematically penalized by the value-added measure. Similar correlations exist in the 2012–13 data; see here and here.
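For readers who want to see how such a statistic is computed, a brief sketch follows. The data below are synthetic placeholders; the real inputs would be school-level figures from ODE’s published report card files:

```python
# Sketch of the school-level check behind Chart 1, using synthetic data.
# Real inputs: one row per school, with its percent economically
# disadvantaged and its value-added index score (from ODE report cards).
import numpy as np

rng = np.random.default_rng(1)
pct_disadvantaged = rng.uniform(0, 100, size=300)   # % low-income students
va_index = rng.normal(0, 2, size=300)               # value-added index scores

# Pearson correlation; the text reports -0.19 for Ohio's 2013-14 data.
r = np.corrcoef(pct_disadvantaged, va_index)[0, 1]
print(f"correlation = {r:.2f}")   # near zero here by construction
```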
Chart 1: Relationship between value-added results and percent economically disadvantaged, Ohio schools, 2013–14
[[{"fid":"115258","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"height: 367px; width: 600px;","class":"media-element file-default"}}]]
Source: Ohio Department of Education
Notes: The value-added index score is a school’s average estimated gain (in NCE units) divided by the standard error. The index score is used to determine a school’s A–F value-added rating. An index score of at least 2.0 is the threshold for an A letter grade. “Economically disadvantaged” generally refers to students eligible for free and reduced-price lunch (185 percent of the federal poverty level). The correlation between school-level mobility rates and value-added index scores is also virtually nonexistent (-0.15).
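Because the notes above fully define the index score, the calculation is easy to sketch. The gain and standard-error inputs below are hypothetical; real values come out of the state’s statistical model:

```python
# Value-added index score, per the notes above:
#   index = average estimated gain (in NCE units) / standard error.
# Inputs are hypothetical placeholders, not actual school results.
def value_added_index(mean_gain_nce: float, standard_error: float) -> float:
    return mean_gain_nce / standard_error

idx = value_added_index(mean_gain_nce=1.5, standard_error=0.6)
print(f"index score = {idx:.2f}")                             # 2.50
print("A grade" if idx >= 2.0 else "below the A threshold")   # 2.0 is the A cutoff
```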
In fact, it’s worth emphasizing that quite a few high-poverty schools are making an exceptional impact on student achievement. Table 1 lists the top ten high-poverty schools in Ohio on the value-added metric; several of these high-flying schools are charters. These examples are proof that all schools, regardless of their demographics, can earn stellar marks on the value-added dimension of state report cards.
Table 1: Top value-added scores among high-poverty schools (90 percent economically disadvantaged or more), Ohio schools, 2013–14
[[{"fid":"115259","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"height: 201px; width: 400px;","class":"media-element file-default"}}]]
There’s no reason Ohio’s e-schools couldn’t be on this list. Yes, they tend to serve disadvantaged populations, but as long as they help their students make a lot of progress from one year to the next, they can earn an honors grade for value added. But they don’t.
It’s important to keep in mind that value-added methods have limitations. They are constrained by the amount and precision of the data collected (a problem for any rigorous accountability measure). And in Ohio, value added doesn’t cover as many grades or subjects as we’d like: Presently, value added covers only grades 4–8—there are plans to extend it into high school once end-of-course exams are phased in—and to date, there haven’t been publicly available value-added data in science and social studies. Meanwhile, the broader research community continues to discuss how value added should be properly used and understood, particularly when applied to teacher evaluations. For examples of thoughtful dialogue on value added from leading scholars, see here, here, or here.
The point is this: If demographics are truly the driving concern for an advocacy group like OCQE, they’re barking up the wrong tree. Value added carefully controls for the influence of demographics by premising gains on a student’s prior test scores—the student serves as her own control—and as a result, schools are placed on a more even playing field for accountability. Ohio policymakers should further incorporate value added into the school accountability system, refine it where necessary, and de-emphasize (though not abandon) achievement-based metrics.
All this raises some tough questions for OCQE and ECOT: Given Ohio’s value-added measure, why are they lobbying for a new accountability measure under the guise of fairness? Is their true concern the welfare of disadvantaged students, or are they just searching for a measure that produces results more to their liking?
[1] Ohio, like Pennsylvania, North Carolina, and Tennessee, employs statisticians at SAS to conduct value-added analyses. A number of other states use “student growth percentiles,” a somewhat similar way of measuring growth.
Ohio has exemplary charter schools – beacons of quality that are helping students reach their full potential. Who are these high flyers and what can we learn from them? How can Ohio replicate, expand, and support great charters in every part of the state?
Fordham partnered with Steve Farkas and Ann Duffet of the FDR Group to survey the leaders of these exemplary schools to capture their thoughts on charter policy, hear what makes their schools tick, and learn what we can do to make sure that good schools flourish and expand.
Quality in Adversity: Lessons from Ohio’s best charter schools will be released on Wednesday, January 27, 2016. You are invited to join us as we discuss the findings and recommendations arising from this survey. A fitting way to celebrate National School Choice Week!
Wednesday, January 27, 2016
Coffee and pastries will be available
Program begins at 8:30 am
Program concludes at 9:45 am
LOCATION:
Chase Tower
100 East Broad Street - Sixth Floor Conference Room B
Columbus, OH 43215
Space is limited. Register today by clicking here.
NOTE: Chad Aldis addressed the Ohio Board of Education in Columbus this afternoon. These are his written remarks in full.
Thank you, President Gunlock and state board members, for allowing me to offer public comment today.
My name is Chad Aldis. I am the vice president for Ohio policy and advocacy at the Thomas B. Fordham Institute, an education-oriented nonprofit focused on research, analysis, and policy advocacy with offices in Columbus, Dayton, and Washington, D.C. I testified to you in September urging the state to quickly and thoroughly implement the charter school provisions contained in HB 2. I also emphasized during my testimony the importance of moving quickly to get the sponsor performance review (SPR)—which was required by legislation passed in 2012 but took three years to develop and pilot—back on track. The success of Ohio’s recent reforms rests heavily on the SPR, so the department deserves tremendous credit for installing an independent panel to review the SPR and draft recommendations quickly. It is a strong sign that the department is serious about implementation and sponsor quality.
We are pleased to say that we agree with many of the recommendations and commend the panel for its thorough review. However, there are a few items in the panel’s preliminary recommendations that caught our attention and merit additional consideration. I’ll briefly highlight the key issues that we noticed. For more detail, please refer to the comments formally submitted by Fordham’s vice president for sponsorship and Dayton initiatives, Kathryn Mullen Upton.
First, per the panel’s ninth recommendation, all applicable report card measures would be considered in a sponsor’s academic rating (one-third of the overall evaluation). The proposed use and weighting of all report card components concerns us greatly. We strongly recommend that the department limit itself to using Performance Index and Value-Added measures—both of which are used in Ohio law to identify “high-quality” and “low-quality” charter schools already—and not rush into using an overall rating formula that hasn’t yet appeared on school report cards. We also urge ODE to weight value added more heavily when analyzing the performance of a sponsor’s portfolio, as it more accurately reflects the academic contributions of schools regardless of their student population—a critical feature, given that charters typically serve students from our most disadvantaged communities. Without this change, it’s highly unlikely that any sponsor would achieve more than two out of four points on the academic component.
Second, we recommend that you reconsider the scoring requirement for the compliance section. While all sponsors should be monitoring all laws and regulations, 100 percent compliance may be impossible to reach and should not be the requirement to gain maximum points in that category. Consider the 100 percent proficiency goal in No Child Left Behind: As it became clear that virtually no school would achieve that target, the benchmark soon became irrelevant and was treated with contempt. By setting the compliance goal at 100 percent, a similar threat exists. A high rating on the compliance component should be rigorous but still attainable.
Third, it’s important to understand the policy ramifications of moving forward with the panel’s proposed scoring rubric. The legislature put in place incentives for exemplary sponsors in HB 2, but as drafted, we believe no sponsor could achieve exemplary status. We are not suggesting that the evaluation system be weakened—it must be a rigorous and transparent process; however, its design and potential consequences must be carefully considered. If the highest rating is not achievable, then the incentive structure set up for sponsors simply won’t work. Just as it doesn’t make good policy sense to design a teacher evaluation system where no teacher in the entire state earns a highly effective rating, or an accountability framework where not a single school gets an A, the same is true for our sponsor evaluation system. Of note, we do support the panel’s recommendation to increase the performance threshold in future years to attain an exemplary rating.
Sponsors are the linchpin for overall charter sector quality in Ohio. The department must hold them accountable for the performance of the schools in their portfolio, their compliance with laws and regulations, and their adherence to high-quality practices. It is absolutely essential to get the sponsor evaluation system right; the long-term success of the sector depends on it. Thank you for the opportunity to provide our feedback. I welcome your questions.
In its 2015 state policy analysis, the National Association of Charter School Authorizers (NACSA) found that fourteen states (including Ohio) saw positive charter policy changes since its inaugural report last year. These wide-ranging improvements demonstrate the value of sizing up a state’s legal framework, diagnosing its structural problems, comparing it to peers, and using that information to press policymakers for change. In other words, rankings like this—and other seemingly wonky law and policy reviews—may actually pave the way for real improvements.
NACSA analyzed and ranked every state with a charter law (forty-three, plus the District of Columbia) against eight policy recommendations meant to ensure a baseline of authorizer quality and charter school accountability:
1) Can schools select from at least two authorizers?
2) Does the state require authorizers to meet endorsed standards (like NACSA’s)?
3) Does the state evaluate its authorizers?
4) Do poor authorizers face sanctions?
5) Do authorizers publish annual performance reports on schools?
6) Is every charter bound by a contract that outlines performance expectations?
7) Are there strong non-renewal standards, and can authorizers effectively close poor performers?
8) Does the state have an automatic closure law on the books?
Additionally, the report offers four state case studies, outlining the challenge to quality in each (e.g., Indiana’s multiple authorizers and authorizer “shopping” problem) and the policy fixes (e.g., consequences for poor authorizing practices).
Ohio’s rise in the rankings to third (and a near-perfect score) is notable, resulting in large part from the state’s charter school reform bill (HB 2). Indiana and Nevada tie for number one, while Kansas, Virginia, and Maryland are at the bottom of the barrel. Washington earns an unfortunate non-rating in the wake of its state supreme court ruling that its charter school law is unconstitutional.
The report rightly points out that the policy recommendations forming the backbone of the report represent “cornerstones of excellence” but cannot determine quality on their own. State policy is just one part of the quality equation, and NACSA notes that “authorizers often develop practices that work around weaknesses or vagaries in state law.” Implementation is paramount; nowhere is that more true than in Fordham’s home state. Ohio earns full points for requiring authorizers to submit annual performance reports, but authorizers themselves determine the rubric for scoring their schools, resulting in zero comparability and wildly different definitions of quality. Ohio’s closure law is among the oldest in the nation, yet it hasn’t accomplished much and is currently on pause. The state put a meaningful authorizer evaluation into law, but the first round of ratings has been rescinded. Current recommendations on the updated evaluation, if put in place, are so onerous that likely no authorizer would earn high marks. Still, to the extent that states can pull certain policy levers and set a minimum framework for quality, they should. The 2015 analysis gives reason to hope that many already are.
Source: “On the Road to Better Accessibility, Autonomy, and Accountability: State Policy Analysis 2015,” National Association of Charter School Authorizers (November 2015)