Youngstown’s academic improvement plan doesn’t promise much improvement
Since 2005, Ohio has intervened in chronically underperforming school districts by establishing new leadership in the form of an Academic Distress Commission (ADC). In 2015, state lawmakers significantly strengthened the ADC law by lessening the power of the local school board and empowering a CEO with managerial and operational authority. These changes caused an uproar, and the education establishment has been trying to abolish ADCs ever since.
The recent state budget took steps in that direction by creating an off-ramp for the three districts currently under ADC oversight: Youngstown, Lorain, and East Cleveland. The law removes the CEO, returns power to the local school board, and charges the board with developing an academic improvement plan containing annual and overall improvement benchmarks. Districts will implement these plans for three years, beginning with 2022–23, and submit annual reports on their progress. If they meet a majority of their academic benchmarks by June 2025, their ADC will be dissolved. If they don’t, they can apply for up to two one-year extensions. Districts that haven’t improved after a total of five years will revert to full ADC control.
All three ADC-managed districts have already submitted their academic improvement plans. And while the state superintendent gets the final say in approving these plans, advocates and stakeholders should pay close attention to the process. That’s especially true in Youngstown, the first district to be placed under ADC oversight.
Unfortunately, the plan put forth by the Youngstown City School District (YCSD) leaves a lot to be desired. The district has proposed a total of twenty-four academic benchmarks: six for English language arts, seven for math, two for science, two for history, four tied to graduation, and a benchmark each for overall performance index, chronic absenteeism, and the expansion of career and technical education (CTE). That’s a lot of benchmarks. And while there are some key data points missing—preparedness measures like ACT and SAT scores, AP or College Credit Plus enrollment, college acceptance, and industry-recognized credential attainment are all notably absent—the district seems to have covered most of the bases.
But it’s how they covered those bases that’s the problem. Only seven of the twenty-four benchmarks put forth in the plan are measured by student performance on state exams. That means less than a third of the district’s benchmarks are tied to criterion-referenced assessments that are aligned to state standards, easily comparable to other districts, and disaggregated by subgroups like race and ethnicity, family income, and special education status. In math, for example, YCSD has identified just three benchmarks measured by state tests. They read as follows:
The first thing that stands out is the extremely low percentage of students that YCSD expects to reach proficiency. After three years of implementing an academic improvement plan, the district still expects that less than a fifth of its students will be proficient in math. Even taking into account the substantial learning loss caused by the pandemic, these are troublingly low numbers. So low, in fact, that even if the district meets each of the goals listed above, it still won’t have reached its own proficiency rates from 2018–19. To make matters worse, YCSD—by its own explicit admission in the plan—has pre-pandemic proficiency rates that are far below the state average and the averages of similar districts.
It’s also worth noting that these benchmarks will be measured by an average that covers multiple grade levels. The district isn’t promising that 16 percent of third graders will be able to do math proficiently. It’s promising that an average of kids in third, fourth, and fifth grade will be proficient. Averaging progress this way makes it possible to mask poor performance in particular grade levels. A banner year in third grade math, for instance, could cover up stagnant or falling scores in fourth and fifth grade math. Given that the entire point of these plans is to track progress, writing benchmarks in a way that could mask it is deeply troubling.
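To see how easily this can happen, here’s a minimal sketch with made-up proficiency rates: the multi-grade average rises even as two of the three grades decline. The numbers are purely illustrative, not drawn from YCSD’s plan.

```python
# Illustrative numbers only: how a multi-grade average can mask declines.
# A jump in grade 3 offsets drops in grades 4 and 5, so the averaged
# benchmark still looks like progress.

year_1 = {"grade_3": 0.12, "grade_4": 0.15, "grade_5": 0.14}
year_2 = {"grade_3": 0.24, "grade_4": 0.12, "grade_5": 0.11}  # grades 4-5 fell

def averaged_benchmark(rates):
    # Assumes equal enrollment per grade; a real calculation would weight
    # by the number of tested students in each grade.
    return sum(rates.values()) / len(rates)

print(f"{averaged_benchmark(year_1):.1%}")  # 13.7%
print(f"{averaged_benchmark(year_2):.1%}")  # 15.7% -- looks like "improvement"
```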
Meanwhile, in English, the only state assessment that’s included in a benchmark is third grade reading, where the district has set a goal for 46 percent of third graders to score proficient or above on the state test by 2025. There are no benchmarks for state tests in ELA in grades four through eight despite these exams being a huge part of state report cards and federal accountability requirements. The English II end of course (EOC) exam—which is one of two tests that make up Ohio’s primary path to a high school diploma—is also missing, even though proficiency goals for EOCs in math and history are included elsewhere. The upshot? The district has no proficiency benchmarks in ELA for any grade with the exception of third, and it offers no explanation for why math proficiency was important enough to include but not reading.
It’s important to remember that for an ADC to be dissolved, a district must meet “at least a majority” of its academic improvement benchmarks. State exams only account for seven benchmarks, which means YCSD could fail to meet every single one of these low goals and still be labeled as improving if it meets most of its remaining benchmarks. A good chunk of the district’s remaining goals—eight, to be exact—are measured by NWEA MAP, a computer adaptive assessment used in schools across the nation. MAP has the state’s stamp of approval, and for good reason—it’s a test that measures growth over time, and helping students grow is exactly what YCSD needs to do. Unfortunately, the district once again sets the bar way too low. Consider the following MAP reading benchmark:
By June 2025, 64 percent of scholars in grades 6–10 will show command of vocabulary, background knowledge, and reading comprehension skills by meeting their individual expected growth goal as measured by the NWEA MAP Reading assessment (informational text, literary text, and vocabulary).
At first glance, this seems like a decent goal. Sixty-four percent is a majority, and it’s also slightly higher than the 63 percent of YCSD students who met their expected growth target between fall and spring in 2018–19. But a closer look reveals a significant problem. Although MAP assessments and data allow teachers and leaders to track individual student growth over time, they don’t allow for comparisons between specific schools, districts, or state averages. YCSD could show incremental improvement over time using MAP data, and that would be worth celebrating. Improvement is improvement. But context matters too. If YCSD’s incremental improvements are minuscule compared to growth in other schools, especially those that serve similar populations, that’s important too. By zeroing in on individual student growth on a third-party assessment, YCSD has eliminated the opportunity to compare its progress—or lack thereof—to other schools. What’s particularly suspicious about this move is that YCSD doesn’t need MAP in order to track growth. All Ohio schools have access to comparable student growth measures thanks to value-added and state report cards. Why, then, is the district opting to use something totally different?
To be fair, there are some aspects of Youngstown’s plan that are worthy of praise. For instance, the district plans to begin tracking the number of students in each high school grade level who are on track to graduate as defined by the number of credits they’ve earned. By 2024–25, YCSD intends to have 82 percent of students exiting each grade on track to graduate. That’s not a high bar, and YCSD shouldn’t get a ton of credit for monitoring how many students are on track since state law requires it. But it’s a good sign that Youngstown opted to include this as a benchmark that will be used to gauge the district’s progress.
The same is true for the final benchmark in the plan, which aims to expand CTE offerings into middle school. It sets a goal that 100 percent of seventh and eighth graders will earn fifteen microcredentials. It’s unclear which credentials the plan is referring to—they are identified as “the fifteen professional microcredentials defined by Ohio Means Jobs” but there are no links provided and a simple search didn’t turn up much—or how rigorous the process is for earning them. Expanding CTE into middle school is gaining steam nationally, though, and a goal of 100 percent is certainly rigorous enough.
But these are just two of twenty-four benchmarks proposed in YCSD’s plan. The vast majority of the rest contain goals that are appallingly low. Given how adamant Youngstown’s education establishment has been about wanting its ADC abolished, it’s in the district’s best interest to propose benchmarks that are easy to achieve. As they say, it’s better to under-promise and over-deliver than the opposite. But Youngstown’s plan seems to be to under-promise and under-deliver. The children of Youngstown deserve much better.
Regardless of whether you believe that too much is being asked of our schools and educators these days, it is always worth asking whether they are capable of doing what is asked of them. It can seem simple for policymakers to entrust schools with any number of schemes for the benefit of the young people compelled to attend them, but success in new initiatives is often spotty and depends on quality implementation. A recent report on computer science education provides an excellent example: schools might seem like the right vehicle by which to reach young people, but even a cursory look shows that this is not the case.
Amazon’s Future Engineer offshoot surveyed thousands of public and private school students across the country earlier this year. The findings revealed that young people had high interest in the field—for both general learning and possible career exploration—but often lacked access to computer science classes in their schools. Recommendations, such as they were, focused solely on traditional schools and classroom teachers as the means to fix this disconnect. That raised an error code for this reader. Here’s why.
The Future Engineer report notes very starkly how low-income, rural, and non-White youth are less likely to have access to computer science courses in their schools. It is a common finding outside technology, too. Underserved students are so termed for a reason: They get less of everything, and much of what they do get is of lower quality than what their peers receive. Unfortunately, traditional school and district infrastructure reinforces these inequities. The College Board has made some strides in widening access to Advanced Placement computer science courses for traditionally underserved students, but AP courses are still less available to those students generally. We have little evidence to suggest that Amazon’s effort to “inspire and educate hundreds of thousands of students from underrepresented and underserved communities each year to try computer science and coding” will fare any differently, despite the millions of dollars being poured into it. Even if real-world improvements can be made for some students, the same general divide between haves and have-nots will likely persist.
There is also great value in giving students knowledge of role models in various fields of education and work, a sentiment reiterated by the Future Engineer authors. But are traditional classroom teachers really the right type of influencers in this case? Surely lionization of the brilliant women of science—actual physicists and mathematicians—finally given their public due via “Hidden Figures” would resonate more with young Black students than would the exhortations of their computer science teachers. Not to mention Neil deGrasse Tyson, Victoria Chávez, and Sunita Williams, who are practicing in their fields more visibly than ever. And what about social media influencers and white hat hackers? Actual science teachers have reported difficulty in properly integrating hard STEM work into their lessons, and there has been a dearth of dedicated computer science teachers for several years. Local colleges, worldwide MOOCs, NASA programs, and non-profit organizations are all doing great work to connect their experts with kids for insight and inspiration. More, please!
One positive of the Covid-19 pandemic is the long-delayed realization that the internet-access gap is real and devastating, especially for the future prospects of rural and low-income students. How many students, one wonders ironically, had to respond to the Future Engineer survey via their phones or a limited-use tablet rather than a fully-functional Wi-Fi-enabled device? Surely such students would have played down their interest in computer science out of sheer necessity. Billions of dollars are being spent to close the gap, which will hopefully provide young people everywhere with equitable access to a gamut of educational resources. These efforts have a special resonance for computer science, in particular, as free and hugely popular online resources, DIY courses, non-profit organizations, and gaming platforms should soon be accessible to many more youth. Role models and teachers are great, but getting there yourself is just as valid.
And speaking of the pandemic, although we have heard that working in a fully-virtual milieu was a negative experience for many teachers, a small but mighty subset of young people have been reporting satisfaction, experiencing growth, and expressing continued interest in virtual learning. Attending online school is not the same as Python coding or doing cybersecurity work, of course. But spending every day working on a computer when you had never done so before could plausibly change a student’s attitude toward computer science jobs, just as plumbing apprenticeships or aviation training for high school students are intended to do.
The upshot? Schools are squares, man. They can provide great opportunities for their students, but quality varies widely, and the underserved students who need those opportunities the most often don’t get them. Even with some excellent innovation in career and technical education, our quaint education structure is not known for setting trends in academic areas or employment futures. Focusing on schools to the exclusion of other, more modern avenues seems far too passive, chancy, and status quo–centric, especially for a tech giant looking to build up the next generation of computer engineers. It simply does not compute.
In mid-October, the Ohio Department of Education (ODE) released report card data for the 2020–21 school year. Due to pandemic-era provisions passed earlier this year, no school ratings were available—only raw data. Ohio will return to standard protocol and issue ratings next fall, but even without them, there are important things to highlight. Here’s what stood out to me.
Overall, test participation statewide was surprisingly strong. One of the concerns going into the spring 2021 assessments was that large numbers of students wouldn’t come in for testing, thus reducing the value of the data. But as it turns out, test participation rates were remarkably high given the circumstances. Statewide, 94 percent of students took their state tests, a number that is only modestly below the usual participation rate (99 percent in spring 2019). Hats off to Ohio schools and parents that helped ensure assessment continued in a relatively normal fashion.
Nonetheless, some schools had large numbers of untested students, which directly affected their performance index (PI) scores, a composite measure of achievement across tested grades and subjects. A handful of districts and charter schools—likely those that persisted in remote learning longest—had low test participation rates. Under longstanding Ohio policy, schools receive zeros in their PI calculations when students do not participate in state testing, a provision that encourages test participation. In a year when test-taking was more difficult than usual, that rule significantly depressed the PI scores of schools with large numbers of untested students. The table below illustrates the impact using Cleveland school district’s data. Since tested students score at least at the “limited” level, the district’s score would have been higher had all students taken the assessments. In sum, we must read schools’ 2020–21 PI scores with greater care because, in some cases, they’re not strictly measures of achievement but also reflections of test participation. That said, some schools could have tried harder to test students, even those who spent most of the year learning at home.
Table 1: Performance index calculations for Cleveland school district
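To make the arithmetic concrete, here’s a minimal sketch of how zeros for untested students drag down a PI-style composite. The weights below roughly approximate Ohio’s PI scale, and the enrollment shares are hypothetical, not Cleveland’s actual figures.

```python
# Minimal sketch: how zeros for untested students depress a performance index.
# Weights roughly approximate Ohio's PI scale; all shares are hypothetical.

WEIGHTS = {
    "untested": 0.0,    # non-participants count as zeros
    "limited": 0.3,
    "basic": 0.6,
    "proficient": 1.0,
    "accelerated": 1.1,
    "advanced": 1.2,
}

def performance_index(shares):
    """Composite score: share of students at each level times that level's weight."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return 100 * sum(share * WEIGHTS[level] for level, share in shares.items())

# Hypothetical district where 20 percent of students went untested.
actual = {"untested": 0.20, "limited": 0.30, "basic": 0.20,
          "proficient": 0.20, "accelerated": 0.05, "advanced": 0.05}

# Same district if every untested student had at least scored "limited,"
# the floor for any student who actually takes the test.
if_all_tested = dict(actual, untested=0.0, limited=0.50)

print(performance_index(actual))         # 52.5
print(performance_index(if_all_tested))  # 58.5
```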
Thousands of students are falling well short of grade-level standards. In September, Ohio State University professors Vlad Kogan and Stéphane Lavertu published an in-depth analysis of the spring 2021 test data. They found that the pandemic and shift to remote learning took a tremendous toll on achievement, with the average student losing a third to a full year of learning depending on grade and subject. Those results are alarming enough, but putting it that way almost surely understates the academic emergency. Another way of parsing the data is to look at the massive numbers of students scoring at the limited achievement level—the state’s lowest mark—which indicates that they have “minimal command” of grade-level standards. As figure 1 shows, roughly one in four Ohio students statewide are struggling with basic math and reading skills. In the state’s largest urban districts, the percentages soar to between one-half and two-thirds. As we now know, 2020–21 was a disaster for most students—urban children, in particular—and the state and localities have much work ahead to get kids back on track.
Figure 1: Percentage of students scoring “limited” on state tests, 2018–19 and 2020–21
Note: The 2020–21 data represent the fraction of tested students scoring limited both statewide and in each district.
Where’s the value-added data? Growth data, a.k.a. value-added, offers an important picture of student progress over time. Relying on a “gap year” approach that looks at growth between spring 2019 and spring 2021, value-added results were calculated largely in a normal fashion. Oddly, however, ODE released schools’ value-added results only for individual grades and subjects—district- and school-wide composite results were M.I.A. That’s a disappointing omission, as the composite scores could have helped analysts and policymakers identify schools that most effectively met the Covid-19 challenge. What practices did they employ? Were there any standouts among those that relied more heavily on remote learning? It’s all the more surprising to see the composite scores omitted given a legislative directive that, while waiving ratings, also declared that ODE “shall report any data that it has regarding the performance of school districts and buildings for the 2020–2021 school year.” While a good lawyer could probably defend the omission, it’s unfortunate that this critical information was withheld from the public.
Graduation rates continue to tick upward. Not every report card measure declined: graduation rates continued their upward march. Statewide, the four-year graduation rate for the class of 2020 rose to 87 percent, an uptick from the 86 percent rate for the class of 2019 and well above the 78 percent rate posted by the class of 2010 (see figure 2). The class of 2020 was the first graduating cohort affected by Covid-19, as the last quarter of their senior year was disrupted when schools closed that March. In an emergency measure, lawmakers gave local schools wide latitude to award diplomas in cases where seniors had not yet met graduation requirements. It’s hard to say how many students in the class of 2020 this applied to, and hopefully the state will release more information on that count. But there are reasons to worry that at least some didn’t meet graduation standards—and waving them through without the requisite knowledge and skills for adult success is a disservice.
Figure 2: Ohio’s four-year graduation rates, classes of 2010 to 2020
Progress on post-secondary readiness metrics has stalled. With graduation rates zooming towards 90 percent—becoming almost a participation trophy—it’s all the more important to examine other indicators of post-secondary readiness. Two shed light on that front. One is the percentage of students who achieve a college remediation-free score on the ACT or SAT. Ever since Ohio started reporting this data point for the class of 2014, only about one in four students have met this target. As figure 3 shows, remediation-free rates slid in the two most recent years, with just 24 percent of the class of 2020 meeting the benchmark. The class of 2020 would have taken college exams in their junior year, so the data shouldn’t have been significantly affected by the pandemic.
Figure 3: Percentage of students meeting college remediation-free standards on the ACT or SAT
Note: The percentages include all students in a graduating class in the denominator, regardless of whether they took the ACT or SAT.
Another indicator of readiness is the percentage of students who earn industry-recognized credentials. Those rates have always been fairly low—less than one in ten Ohio students achieve this goal—but it’s worth noting that, after several years of progress on this measure, the industry credentials rate fell for the class of 2020. This measure, however, may have been affected by Covid-19 if students weren’t able to complete a credential in the last quarter of their senior year.
Figure 4: Percentage of students earning industry credentials
* * *
The state’s report cards always give us reason to pause and reflect on where students stand and what kind of progress Ohio is making educationally. Though less comprehensive than usual, the 2020–21 iteration is no different. As many expected, the news is mostly grim. The pandemic has indeed knocked many students off track. The good news, however, is that we now have a baseline for setting recovery goals and tracking our progress moving forward.
The most commonly expressed motivator for school districts to adopt a four-day school week is monetary: lowering expenditures on hourly staff, transportation, and utilities. It is not incidental that the most recent uptick in districts opting for the schedule came in the aftermath of the Great Recession. The second motivator is its use as a low-cost perk for hiring and retaining teachers. Kids? Well, they are somewhere further down the list. A new report from the RAND Corporation is one of the largest and most detailed examinations to date of the implementation and outcomes of four-day school weeks, focusing strongly on how those motivators play out in practice.
This study employed both qualitative and quantitative analyses, examining existing data from six states (Colorado, Idaho, Missouri, New Mexico, Oklahoma, and South Dakota) and original data collected explicitly for this study in three of those states—Idaho, New Mexico, and Oklahoma—chosen for their high numbers of districts utilizing four-day weeks. The analysis drew on a sample of eighteen four-day districts (six from each of the three focus states) and eighteen districts on a standard five-day week (six comparable districts from each state). The original data included in-person interviews with more than 460 students, parents, teachers, principals, and other stakeholders in twelve four-day school week districts, as well as online survey data from over 6,000 middle and high school students and more than 1,200 parents of elementary school students in both types of districts.
The descriptive detail matches that of previous research. Four-day week districts average about fifty minutes more daily instructional time than five-day week districts, largely because they must still meet the minimum hours of instruction per year set by their states. Even so, four-day week districts ultimately average fifty-eight fewer instructional hours per year than their five-day peers. The most common schedule runs Monday through Thursday with Fridays off; next most common is Tuesday through Friday with Mondays off, followed by schedules that move the off-day around at various times of the year.
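Some quick, back-of-the-envelope arithmetic shows why the extra minutes don’t close the gap. The figures below are illustrative assumptions, not RAND’s data:

```python
# Illustrative arithmetic: why ~50 extra minutes on four days cannot
# offset losing a full fifth day. All figures here are assumptions.

DAILY_MINUTES = 390                       # assume a 6.5-hour instructional day
five_day_week = 5 * DAILY_MINUTES         # 1,950 minutes per week
four_day_week = 4 * (DAILY_MINUTES + 50)  # 1,760 minutes per week

weekly_gap_hours = (five_day_week - four_day_week) / 60
print(f"{weekly_gap_hours:.1f} hours lost per week")       # ~3.2

# Over a ~36-week year that compounds to roughly 114 hours; RAND's smaller
# average gap (58 hours) suggests real four-day districts add more daily
# minutes or calendar days than assumed here.
print(f"{weekly_gap_hours * 36:.0f} hours lost per year")  # ~114
```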
Most four-day week districts reported holding sports practices or competitions on the off-day; far fewer reported clubs or student activities that met then, and academic enrichment activities were offered only rarely. Even where sports and clubs were available, 80 percent of high school students and 90 percent of elementary students surveyed reported that home was the primary place they spent the off-day. Typical non-school activities for students on the off-day include recreational sports, employment, and spending time with family and friends.
A majority of teachers reported doing some amount of school-related work on the off-day, although focus groups indicated that this was largely intended to open the weekend fully for non-work activities. A non-trivial number of teachers reported working a second job on the off-day, including substituting in a nearby five-day week district, with the highest concentration of such reports occurring in Oklahoma. Parental survey responses indicated that the popularity and continued existence of four-day school weeks could be influenced by how easy it was to find an adult to watch children as needed on the off-day. Thus, communities with multiple generations living in proximity or communities with a number of flex-time employers would likely be more supportive of a shortened formal school schedule.
As for the outcomes, empirical data suggest that cost savings to districts from a shortened school week amount to less than 5 percent per year compared to similar districts on a traditional schedule. A number of school leaders interviewed claimed far more extensive savings than would seem possible, but a majority said that any savings were meaningful, especially if the four-day week helped them retain and recruit staff. Board members, administrators, school staff, and principals generally agreed that four-day weeks were an incentive for teachers. However, teacher responses were decidedly mixed, and context seemed to matter. For instance, a number of districts switched back and forth between four-day and five-day weeks several times, yet teachers reported staying put throughout the switches; the researchers therefore concluded that those teachers were not influenced by the “perk” of a shorter week. Teachers in areas where many districts operated on a shortened week were similarly immune. It seems an extra day off isn’t considered a perk if everyone gets it. Meanwhile, in areas where the shortened week was a rarity, more teachers reported seeking jobs and remaining employed in their four-day week districts because of the perceived benefit.
Teacher attendance appears to have greatly improved after the adoption of a four-day week. Teachers across the board reported scheduling medical appointments and family errands on the off-day rather than taking a work day to attend to them. Additionally, having “cleared the decks” on the off-day, weekends were reportedly more restful for teachers, leading to less reported fatigue or burnout. Their survey responses indicated high satisfaction with the four-day week. Unfortunately, no quantitative data on teacher absenteeism were available to corroborate these survey responses.
Students and parents also reported strong satisfaction with the four-day school week based on perceptions of increased sleep time, positivity toward school, and general well-being for children. Quantitatively, however, there were no statistical differences between four-day week districts and their five-day peers in terms of student attendance. Survey responses between the two groups of districts also showed no statistical differences on perceptions of student emotional well-being, parent stress levels, or school climate.
There are many more dimensions explored in the RAND report, including a reminder that previous research has found both negative and positive academic effects of reduced in-class learning time in four-day week districts. Without new quantitative data included, the perceptions of adults are the main contribution of this report to the literature. The happiness of adults who believe they are saving precious time and money, even when they are probably not, still seems to be the driving force for embracing four-day school weeks. When we shift our focus away from perception and toward hard evidence, that shiny picture of the four-day week model begins to lose definition. And when we shift our priorities toward actual student learning and achievement—if we can still manage to do so—the image is more noise than signal.
SOURCE: M. Rebecca Kilburn et al., “Does Four Equal Five? Implementation and Outcomes of the Four-Day School Week,” RAND Corporation (October 2021).
Results of a recent survey published by Amazon’s Future Engineer offshoot show several disconnects among the interests, experiences, and aspirations of U.S. students in regard to computer science. While many of the jobs that await today’s middle and high school students will likely be even more technology-focused than they are now, more specifics are required—beyond those illuminated here—to make sure that high-quality education and training are in place to leverage legitimate interest from young people and to properly connect them with the work they will ultimately undertake.
The survey was conducted electronically by Gallup in June 2021, with a total nationwide sample of 4,116 public and private school students. Just over 1,800 were high schoolers; the rest were middle schoolers. Weighting adjustments were made to more closely match the sample to national demographics of gender, grade, race/ethnicity, and school type per the U.S. Census Bureau’s American Community Survey for 2019. No breakout of responses between public and private school students was included.
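For readers unfamiliar with such adjustments, here’s a minimal sketch of post-stratification weighting, the general kind of correction Gallup describes. The demographic cells and shares below are invented for illustration, not taken from the survey.

```python
# Minimal sketch of post-stratification weighting. The cells and shares
# are invented for illustration, not Gallup's actual figures.

sample_shares = {"female_hs": 0.20, "male_hs": 0.24,      # share of respondents
                 "female_ms": 0.27, "male_ms": 0.29}
population_shares = {"female_hs": 0.25, "male_hs": 0.25,  # e.g., from the ACS
                     "female_ms": 0.25, "male_ms": 0.25}

# Every respondent in a cell gets the same weight: population share over
# sample share, so underrepresented cells count for more in the results.
weights = {cell: population_shares[cell] / sample_shares[cell]
           for cell in sample_shares}

print(weights)  # e.g., female_hs -> 1.25, male_ms -> ~0.86
```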
Interest in computer science was high among survey respondents, with 62 percent saying they would like to learn about the topic. That included 53 percent of female students and 72 percent of male students. Among Black students, however, females were slightly more likely than males to say they were interested in learning about computer science; and Black female students were more likely than White or Hispanic females to report interest (61, 51, and 52 percent, respectively). Overall, female students were significantly less likely than males to say they planned to study computer science in college and would someday like to have a job in a related field.
In terms of access, 70 percent of respondents reported that computer science courses were offered at their schools; however, just 49 percent had actually taken one. Low-income students living in rural areas were least likely to report course availability in their schools. In large cities, where computer science classes are presumably more common, only 67 percent of Black students said their schools offered them, versus 81 percent of Hispanic students and 88 percent of White students. Access to school-based courses also predicted interest in the topic: Among students who said computer science classes were offered, 68 percent said they were interested in learning about the topic, versus 49 percent of those whose schools did not offer such courses. The same goes for sustaining interest: Among students who reported no access to computer science classes, interest in the topic falls from 63 percent in fifth grade to 23 percent in twelfth grade; where classes are available, reported interest still falls, but from 85 percent to 59 percent.
Finally, having adult role models was strongly linked to students’ computer science career plans. Just over half of students reported having a role model in the field, although surveyors provided no definition of the term, nor did students identify their role models. Rates were lower for female students (49 percent) and Black students (45 percent). Students in urban areas were twice as likely as their rural peers to report having a computer science role model, a gap the analysts connect to the similar gap in access to classes; this suggests that students are generally thinking of classroom teachers when referring to a “role model in the field.” More than 60 percent of students who reported taking a computer science class at school said they have role models in the field, versus just 45 percent of those who reported not taking a class.
The report makes no recommendations other than to boost the number of computer science classes in schools and the number of adult role models for students. That’s fine as far as it goes, but focusing on teachers as high-tech role models seems less than ideal, and not thinking beyond the classroom walls is troublingly status quo–centric. Surely any such boosts within schools would favor the same areas and kids the current setup favors, and the rural and low-income schools now lacking computer science would continue to lag behind without specific efforts to widen access to technology education for the traditionally underserved. The recommendations also entirely ignore the many and varied free DIY online options available to anyone with sufficient bandwidth and some rudimentary knowledge, as well as the non-profit organizations whose missions are dedicated to growing a diverse pipeline of future coders and computer engineers.
Policymakers and employers like Amazon can definitely make use of the statistics in this report, but here’s hoping that they look well beyond our nineteenth-century school structure to build their high-tech future.
SOURCE: “Developing Careers of the Future: A Study of Student Access to, and Interest in, Computer Science,” Gallup and Amazon (October 2021).