Principles That Should Govern School Accountability: A Joint Statement from the California Charter Schools Association and the Ohio Office of the Thomas B. Fordham Institute
There has been much recent debate over whether Ohio would benefit from a school accountability model similar to the one employed in California. During public policy debates like this one, the big picture can sometimes be obscured by the details. In an effort to raise the level of discussion, the California Charter Schools Association (CCSA) and the Thomas B. Fordham Institute (Fordham) have joined forces to co-write this commentary sharing our perspectives on the key principles that should govern school accountability policy.
Before digging in, it’s critical that we address some of the misperceptions that have emerged around the issue. First, Fordham does not necessarily endorse the views expressed by the guest commentators who submit articles to its blogs. CCSA has deep concerns about the accuracy of the analysis by Dr. Vladimir Kogan that was published by Fordham on November 16. This commentary is not intended to address these statistical matters; rather, CCSA addresses those issues on its own website.
Second, Fordham believes that the Similar Students Measure developed by CCSA is a robust measure that makes extremely good use of school-level performance data. Furthermore, Fordham does not question the use of the model in California. In fact, Fordham thinks that CCSA deserves a tremendous amount of credit for its work over the past seven years to model, pressure-test, and compare the Similar Student Measure’s results to the individual student data they see when they conduct deep-dive reviews of schools’ performance. They have engaged with hundreds of schools over several years in thorough qualitative reviews to assess the validity of their quantitative findings. CCSA engaged with education researchers in designing its measure and has issued numerous reports on the resulting performance of schools. When making important accountability decisions, authorizers and policy makers throughout California have relied upon information about how schools perform on this measure, given the characteristics of their student populations.
Third, CCSA has consistently stated that a growth model based on individual student assessment data is a critical component of any accountability system. CCSA has advocated for such a growth model to be developed in California (one that measures the growth of individual students over time, accounting for their prior-year test scores) and believes strongly that the resulting information would provide a far more precise measure of schools’ influence on the trajectory of student growth. That said, CCSA also believes that taking students’ demographic characteristics into account ultimately provides an important and useful lens through which to view school performance in California.
With those matters cleared up, we now turn to agreed-upon principles that should underpin school accountability in California, Ohio, and states across the nation. As organizations that have long played active roles in the development of their respective states’ accountability systems, CCSA and Fordham offer five key principles that are critical to assessing school performance and implementing a meaningful accountability system.
First, state context matters. Each state should develop its own school accountability system based on the data that are collected and made available; the capacity of state agencies to successfully implement accountability measures; the unique policy environments in which schools, including charters, operate; and the needs of policy makers, authorizers, school leaders, and parents. Because of state-to-state differences, school accountability systems must be designed with great care and sensitivity to the realities of each state; a one-size-fits-all approach across states is not desirable. From the Fordham perspective, we find no reason to engage in the debates on California’s accountability system. We claim no expertise on the intricacies of California’s policies and practices, and we fully respect the on-the-ground leadership of organizations like CCSA when it comes to accountability policies in their state. Likewise, CCSA respects the leadership role of Ohio-based organizations that have helped to shape Ohio’s school accountability policies.
Second, using raw achievement alone is not appropriate when making high-stakes decisions. Achievement data can be highly influenced by students’ background characteristics, preventing sound judgments about the actual impact of a school on student learning. That is why many states, including Ohio, have adopted student growth measures that track individual student gains over time to assess the contribution of a school to student learning. In the absence of an individual student growth measure in California, CCSA developed the Similar Students Measure to create a fairer and more holistic picture of school performance there.
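To make that distinction concrete, here is a minimal sketch (in Python, with synthetic data) of the two approaches described above. It illustrates the general technique only; it is not Ohio’s actual value-added model or CCSA’s actual specification, and every variable name and coefficient is invented.

```python
# Toy illustration of two school accountability adjustments; the data and
# coefficients are synthetic, not drawn from Ohio or California.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_schools = 1000, 200

# --- Individual growth model (simplified) ---
# Regress this year's score on last year's; a student's residual is the
# gain above or below what the prior score predicted.
prior = rng.normal(300, 40, n_students)
current = 0.8 * prior + 60 + rng.normal(0, 15, n_students)
X = np.column_stack([np.ones(n_students), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
student_growth = current - X @ beta

# --- Demographic adjustment at the school level (simplified) ---
# Regress school mean achievement on demographic composition; a school's
# residual is its performance relative to demographically similar schools.
pct_disadvantaged = rng.uniform(0, 1, n_schools)
school_mean = 320 - 50 * pct_disadvantaged + rng.normal(0, 10, n_schools)
Xs = np.column_stack([np.ones(n_schools), pct_disadvantaged])
bs, *_ = np.linalg.lstsq(Xs, school_mean, rcond=None)
similar_students_residual = school_mean - Xs @ bs
```

The first approach credits a school for its own students’ gains over time; the second compares a school’s status to that of demographically similar schools. That difference is the heart of the distinction drawn above.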
Third, school accountability must be fair and not create disincentives to educate disadvantaged students. Yet another danger of using only achievement-based accountability measures is that they could inadvertently punish schools that take on the mission-driven challenge of serving the most historically disadvantaged students. A state’s accountability framework should not incentivize schools to “cherry pick” the easiest-to-educate students to the exclusion of our underserved pupils. Accountability systems should instead balance incentives, clearly signaling the urgency of higher student achievement and also fairly evaluating schools by measuring their contributions to the growth of students who may come from disadvantaged backgrounds.
Fourth, closing persistently low-performing schools can improve quality, but it must be done carefully. Analyses by CREDO at Stanford University and by CCSA have highlighted the role of charter school closures in improving the performance of charter schools in California: more than 175 charter schools closed between 2008–09 and 2013–14, and 60 percent of those with available data were in the bottom quartile of performance, according to CCSA’s Similar Students Measure.[1] A recent Fordham report found that the closure of 198 district and charter schools in Ohio improved the academic outcomes of students who were displaced by those closures.
At the same time, closing schools must be done with great care. CCSA, for example, takes a multiple-measure approach to identify schools that appear to be underperforming according to publicly available data: it combines a status (achievement) measure, a growth (improvement) measure, and a demographic control (the Similar Students Measure), and is now adding a post-secondary readiness measure. They then engage with each of these schools in a multiple-measure review that examines dozens of state-collected and locally collected data points to ensure that any recommendation they make on closure advocacy is based on a careful assessment of school performance across a wide range of student outcome indicators and all grade levels. This is the context in which CCSA’s Similar Students Measure was developed—to be used as a tool to help identify the lowest-performing schools for which authorizers should take a deeper look before finalizing a renewal or closure decision. This measure is just one part of an interconnected system of performance analysis that renders a more holistic look into whether a school should be closed. In Ohio, state policy makers have established a default closure law, though it has only rarely come into effect (since its enactment in 2006, just twenty-four charters have been shut under default closure). In the overwhelming majority of cases, the hard and delicate work of closing a troubled school has been undertaken by Ohio’s charter authorizers and/or governing boards—the entities that are best positioned to make a decision on closure.
Fifth, the bar for charter school performance must be set high. We must expect great things and provide multiple lenses through which to view performance if we want to ensure that no good school is closed as collateral damage in the quest for better outcomes. Both CCSA and Fordham believe firmly in the promise of charter freedom and autonomy in exchange for rigorous accountability standards. If charter schools are not delivering strong results on behalf of their students, they should be closed based on an accurate and complete picture of their performance.
Policy makers, authorizers, advocates, and school leaders must expect great things from all students—regardless of family background, race or ethnicity, or zip code. That is why we must be strong on accountability while also being careful and thoughtful about its design. States, including Ohio, continue to wrestle with difficult questions about school accountability policies. In these debates, we urge policy makers to look toward the principles set forth here. If heeded, they can help set the terms of the debate and lead to policies that create the conditions for a better charter school sector in California, Ohio, and across the nation.
[1] Center for Research on Education Outcomes (CREDO), 2014, Charter School Performance in California, https://credo.stanford.edu/pdfs/ca_report_FINAL.pdf; CREDO, 2013, National Charter School Study, https://credo.stanford.edu/documents/NCSS%202013%20Final%20Draft.pdf; CCSA, 2014, Portrait of the Movement, http://www.calcharters.org/advocacy/accountability/portraitofthemovement/.
Ohio teacher evaluations: Fix them, don’t ditch them
As 2015 comes to a close, the long-awaited reauthorization of the Elementary and Secondary Education Act will likely soon become a reality. Among many proposed changes is the jettisoning of the federal waiver requirement mandating teacher evaluations. Before critics rejoice and demand an immediate end to the Ohio Teacher Evaluation System (OTES), it would be wise to remember why evaluations were instituted in the first place: Several research studies indicate that while teacher quality isn't the only factor affecting student achievement, it is a significant one. Ensuring that all students have a good teacher is a worthy and important goal; without a system to evaluate and differentiate effective teachers from ineffective ones, though, it is impossible to achieve. It’s also worth noting that many of the evaluation systems that existed prior to federal waivers—those that were solely observation-based—failed to get the job done. Teacher evaluations have come a long way.
That being said, Ohio’s system needs some serious work. Fortunately, fixing evaluation policies isn’t without precedent: In 2012, only 30 percent of Tennessee teachers felt that teacher evaluations were conducted fairly. In 2015, after the Tennessee Department of Education worked to refine the system, that number rose to 68 percent. Sixty-three percent believe the evaluation system has improved student learning, while 77 percent say they now understand how to use assessment data to improve their teaching. The Tennessee Department of Education reports that evaluation has made a significant positive impact on education outcomes: Since the launch of teacher evaluations, proficiency levels have grown at the elementary level in every subject area, and end-of-course exam performance has grown steadily since 2009–10 (except in English III). Additional data out of the state found that one of the working conditions associated with the retention of highly effective teachers is a functional evaluation system. There’s no reason Ohio can’t accomplish the same thing. New flexibility from the feds should inspire Buckeye policy makers not to ditch the evaluation system, but instead to refine it into something that’s fairer for teachers, less burdensome for principals, and better at differentiating effectiveness. We’ve written before about how these systems must change and models that Ohio could adopt, but let’s take another look at two ideas that policy makers should consider when improving Ohio’s evaluation system.
Abandon SLOs and shared attribution
Ohio requires that all teacher evaluations include a student growth component, which consists of test results. For teachers with a valid grade- and subject-specific assessment, that means value-added measures. Unfortunately, only 34 percent of Ohio teachers actually fall into this category.[1] The remaining 66 percent are evaluated based on locally developed measures like Student Learning Objectives (SLOs) and shared attribution—both of which are poor ways to measure teacher effectiveness. Research shows that implementing SLOs in a consistent and rigorous manner is extremely difficult. A recent report from NCTQ found that they fail to effectively differentiate teacher performance. Apart from their questionable effectiveness, SLOs are also time-intensive; they require teachers to set long-term academic growth targets, measure progress toward those targets, and submit data, all on top of their many other responsibilities. Even worse, SLOs aren’t just a time-suck for teachers; a January report on testing in Ohio indicated that SLOs contribute as much as 26 percent of total student test-taking time in a single year.
Shared attribution, meanwhile, is the practice of evaluating teachers based on test scores from subjects other than those they teach. Despite what the name implies, it doesn’t actually ensure that teachers share accountability—just that core teachers with value-added data are responsible for the evaluation scores of non-core teachers (like gym, art, and music) in addition to their own. Talk about unfair.
So if SLOs are a time-intensive burden, shared attribution is unfair, and both fail to effectively differentiate teachers, what can policy makers replace them with in order to ensure that multiple measures are still used? The answer is to move this group of teachers to a fully observational evaluation. But in so doing, policy makers must also insist that observational practices are rigorous and not just the pro forma reviews that have too often occurred in the past. (This would apply not only to the teachers currently in the SLO/shared attribution system, but also to the observational component for the teachers in the value-added and vendor assessment systems.) Here are a few ideas to strengthen teacher observations:
Peer observations: Some of the best feedback I received as a teacher was from colleagues. Peers who work in the classroom, are familiar with the student population, and share a content/grade-level background are an untapped resource for effective evaluations. In fact, the Measures of Effective Teaching (MET) project found that administrators’ rankings of their own teachers were similar to those produced by peer observers. To ensure honest appraisals and protect teachers from feeling like they have to give positive reviews, districts could utilize another untapped resource: video-recorded lessons. Teachers from across a district (or even across the state) could be given extra planning time on a pre-selected date to watch videos of their peers teaching, study lesson plans and student work, and submit anonymous feedback.
Student surveys: Some will argue that asking students to evaluate their teachers is a recipe for disaster; but it would be a serious oversight not to include feedback from those most affected by teachers, especially when research shows its benefits. An MET project brief found that student surveys are more likely than achievement gain measures or observations to demonstrate consistent results for teachers. In addition, the brief shows that student survey results are predictive of student achievement gains. It’s also worth noting that student surveys are already part of the alternative OTES framework.
Multiple observers: One of the biggest complaints about OTES is the burden it imposes on principals, who are typically responsible for conducting observations. Employing multiple observers should lessen that burden—in addition to adding reliability. A policy brief from the MET project found that whenever a given number of observations was split between multiple observers, reliability was greater than what could be achieved by a single observer. In the case of teachers who received two observations, reliability increased more than twice as much when the second observation was conducted by a different administrator than the first.
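The intuition behind the MET finding can be shown with a toy simulation (ours, not MET’s): each observer brings an idiosyncratic bias to a classroom visit, and scores averaged across different observers cancel more of that bias than repeat visits by the same observer. All variance figures below are invented for illustration.

```python
# Toy simulation: why splitting observations across observers raises
# reliability. All variance figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
teachers = 5000
true_quality = rng.normal(0, 1, teachers)

def observe(observer_bias):
    # One classroom visit = true quality + observer's bias + occasion noise.
    return true_quality + observer_bias + rng.normal(0, 0.7, teachers)

# Two visits by the SAME observer: the bias repeats, so it never averages out.
bias = rng.normal(0, 0.5, teachers)
same_observer_avg = (observe(bias) + observe(bias)) / 2

# Two visits by DIFFERENT observers: independent biases partially cancel.
diff_observer_avg = (observe(rng.normal(0, 0.5, teachers)) +
                     observe(rng.normal(0, 0.5, teachers))) / 2

# Squared correlation with true quality is a stand-in for reliability.
print("same observer:", np.corrcoef(true_quality, same_observer_avg)[0, 1] ** 2)
print("two observers:", np.corrcoef(true_quality, diff_observer_avg)[0, 1] ** 2)
```

Running this shows a higher reliability figure for the two-observer average, because the second observer’s bias is independent of the first’s rather than repeated.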
Require outside observers
The second change Ohio policymakers should pursue is to require the use of outside observers. (Ohio law currently allows for, but does not require, evaluators other than the principal.) A report from Brookings indicates that observations conducted by outside observers are more valid than observations conducted by school administrators. Using multiple observers is a commonsense approach: It decreases the chance for subjectivity and bias that is present with only one evaluator, and it gives schools a chance to capitalize on the skills of content experts, instructional coaches, curriculum coordinators, hybrid teachers, and assistant principals. Making use of outside observers (and peer observations) means that decreasing the number of principal observations won’t decrease the amount of feedback teachers receive or the reliability of their evaluation scores. That being said, it’s still vitally important for principals, as the instructional leaders of their schools, to take part in the observation and evaluation process.
***
If the ESEA reauthorization is a success, Ohio should be ready to reboot its teacher evaluation system. Waivers that required all teachers—core and non-core alike—to have an objective measure of growth caused a lot of headaches, and ESEA changes give Ohio a chance to make its system better. Ditching SLOs and shared attribution and replacing them with measures like peer observations and student surveys is a good place to start. So is the requirement of outside observers, which should lighten the load on principals and make the observational system more rigorous and more reliable. Of course, there are other reforms that policymakers could also consider. Selecting different measures for experienced versus inexperienced teachers; making sure that announced observations are balanced with unannounced ones; allowing principals to have more authority in determining both the benefits of excellent evaluations and the consequences of poor ones; and weighting measures equally are all ideas worth pursuing. The bottom line is that complaints about teacher evaluations in Ohio can be addressed without throwing away the entire system.
[1] The 34 percent is made up of teachers whose scores are fully made up of value-added measures (6 percent); teachers whose scores are partially made up of value-added measures (14 percent); and teachers whose scores can be calculated using a vendor assessment (14 percent).
Key Trends in Special Education in Charter Schools: A Secondary Analysis of the Civil Rights Data Collection 2011-2012
In light of Hillary Clinton’s charge that charter schools “don’t take the hardest-to-teach kids,” as well as the lambasting of one of the nation’s highest-performing charter networks for its discipline practices, this report from the National Center for Special Education in Charter Schools is especially timely. As it reveals, the worst of the recent allegations fall flat (at least when it comes to students with disabilities). Charter schools do have slightly lower percentages of students with disabilities compared to traditional public schools (though the discrepancy is nothing like the gap that some charter opponents allege), but they also tend to provide more inclusive educational settings for those students. Suspension rates in the two sectors are roughly the same.
The study’s authors investigate whether anecdotes about charter schools failing to serve students with disabilities align with the actual data. They examine enrollment, service provision, and discipline statistics, made possible through a secondary analysis of data from the Department of Education’s biennial Civil Rights Data Collection for the 2011–12 school year (the most recent one for which data is available). Nationwide, students who receive special education support and services made up 10.4 percent of total enrollment in charter schools, compared to 12.6 percent in district schools. The authors note that “charter schools have room to improve,” especially in states with wide discrepancies (e.g., New Jersey and Oklahoma). (Ohio is not among them, with charter schools enrolling at least 3 percent more students with disabilities than traditional schools). But they caution that closing the gap shouldn’t necessarily be a “universal goal,” as some state funding systems provide incentives that result in districts over-identifying students with disabilities. Encouragingly, the enrollment gap has shrunk: A 2008–09 report from the Government Accountability Office found that students with disabilities constituted just 7.7 percent of charter enrollment (versus 11.3 percent in district schools).
Perhaps more importantly, charter schools tend to place students with disabilities in “high-inclusion settings” (defined by whether a student spent 80 percent or more of the day in regular education). Charter schools placed 84 percent of their students with disabilities in such settings, compared to traditional schools’ placement rate of 67 percent. Finally, there is no evidence that charter schools suspend students with disabilities more frequently. Neither charter schools nor traditional public schools expel students with disabilities at a high rate (0.55 percent for charters versus 0.46 percent for district schools), but charter rates are slightly higher—perhaps driven by the fact that they have slightly higher expulsion rates overall, including for students without disabilities.
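For readers who want to reproduce these sector comparisons from a CRDC-style extract, the arithmetic is straightforward. Here is a minimal pandas sketch; the column names and numbers are illustrative, not the actual CRDC field names or figures.

```python
# Sketch of sector-level comparisons like the report's; data are made up.
import pandas as pd

df = pd.DataFrame({
    "sector": ["charter", "charter", "district", "district"],
    "total_enrollment": [400, 300, 900, 1100],
    "swd_enrollment": [38, 33, 110, 145],     # students with disabilities
    "swd_high_inclusion": [33, 27, 70, 100],  # >=80% of day in regular ed
})

by_sector = df.groupby("sector").sum(numeric_only=True)
by_sector["pct_swd"] = (
    100 * by_sector["swd_enrollment"] / by_sector["total_enrollment"]
)
by_sector["pct_high_inclusion"] = (
    100 * by_sector["swd_high_inclusion"] / by_sector["swd_enrollment"]
)
print(by_sector[["pct_swd", "pct_high_inclusion"]])
```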
The report concludes with a handful of policy recommendations, focused mainly on ensuring that data collection efforts continue, and that state education agencies and authorizers rigorously monitor enrollment practices and service provision among all schools. The CRDC dataset has “methodological limitations”—some schools had incomplete information as a result of the DOE concealing enrollment numbers to protect student privacy (which was more common among smaller schools, possibly skewing the data set). And the suspension and expulsion data was non-standardized and self-reported by schools. Still, over 80 percent of traditional schools and 60 percent of charter schools were captured overall, and the comparisons are useful. While the study lives up to its goal to provide “practitioners and researchers with a solid foundation” of data to inform discussions typically fueled by rhetoric, further study is warranted. Advocates should examine how well all schools are serving students with disabilities (beyond enrolling them and placing them in high-inclusion settings), explore cost-saving mechanisms for charter schools, and offer further case studies of specialized charter schools innovating to uniquely meet the needs of this vulnerable student group.
SOURCE: Lauren Morando Rhim, Jesse Gumz, and Kelly Henderson, “Key Trends in Special Education in Charter Schools: A Secondary Analysis of the Civil Rights Data Collection 2011-2012,” National Center for Special Education in Charter Schools (October 2015).
Gadfly Bites: Ohio education news and opinion delivered to you, with a twist
We are inundated with news every day, and parsing what’s worth a look and what’s plain worthless takes time and energy. Quite honestly, you probably have better things to do. Fortunately for you, Fordham offers a thrice-weekly news service that is personally researched, curated, and annotated with Ohio’s education reform interests in mind. You might not think you want—let alone need—another news clip email appearing in your inbox, but Gadfly Bites is different, providing two parts news and one part snark.
For example: A story in the Akron Beacon Journal may discuss local transportation issues with a busload of unacknowledged slant. At the same time, a story in the Cleveland Plain Dealer may discuss an unexpected but welcome rise in an urban school district’s student population without noting an even more important positive outcome embedded in it. Gadfly Bites not only highlighted those two stories as part of the day’s news but also told readers what they were about and drew a vital connection that might not occur to someone reading the pieces individually. And those were just two of the stories featured in a recent Gadfly Bites edition, which highlighted other stories from Cincinnati and Columbus as well.
Sure, you can always get Gadfly Bites editions on the Ohio Gadfly Daily blog. And via Twitter too. But why not skip those extra steps and have it delivered to your inbox every Monday, Wednesday, and Friday? You can sign up for email delivery today.
You’ll be glad you did. Or maybe you won’t; you can’t know unless you subscribe.
Ohio’s charter school closure law is becoming irrelevant, and that’s a good thing
Ohio is one of fifteen states with an automatic closure law for low-performing charter schools. The law was meant to set a performance floor and to clean up the sector during an era when bad schools proliferated and authorizers failed to close them.[1]
Ohio’s academic death penalty for charter schools has been described as the “toughest in the nation.” In reality, it’s had minimal impact on either the number of schools closed or the number of students affected. A current three-year safe harbor on closure (among other sanctions) makes it all the more anemic. In its early days, it may have motivated some charter school authorizers to intervene and prevent their schools from facing a similar fate, but it hasn’t curbed poor oversight decisions among some authorizers in the nine years since the law was enacted.
Even so, accountability advocates needn’t be concerned or press for a stronger closure law. All in all, Ohio is a case study for how a minimum performance threshold for charter schools by itself doesn’t lead to wide-scale sector improvement. Our experience shows that direct state intervention cannot accomplish much and that strong accountability controls on charter school overseers, which took Ohio nearly eighteen years to put in place, are of central importance.
Impact of Ohio’s closure law
So far, twenty-four schools in Ohio have closed as a result of the state’s closure law. But this is a small fraction of the overall number of charter closures (approximately 210 since 2000). The vast majority have occurred through the actions of authorizers and/or governing boards, the entities that should be closing charter schools (when necessary).
The closure law had an immediate impact in its early days. More than two-thirds of the closures brought on by the law occurred within four years of its passage—a testament to the lenient climate in the late 2000s and the original need for the legislation. It is further worth noting that these closures occurred when the law was at its weakest, requiring three consecutive years of low performance before mandating closure. A school could escape unscathed merely by achieving “academic watch” (a D grade) once in its preceding three years.
Table 1: Charter schools closed under Ohio’s automatic closure law
[[{"fid":"115206","view_mode":"default","fields":{"format":"default"},"type":"media","link_text":null,"attributes":{"style":"font-size: 13.008px; line-height: 1.538em; height: 569px; width: 500px;","class":"media-element file-default"}}]]
Data come from the Ohio Department of Education’s annual community schools report (2014) and the closed community school directory found on ODE’s website.
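To see how forgiving the original trigger was, consider a toy encoding of the escape clause described above. This is a simplification for illustration, not statutory language; Ohio’s actual criteria involved specific rating designations and school types.

```python
# Simplified model of the original automatic closure trigger: three straight
# failing years were required, so one "academic watch" (D) year reset the
# clock. Not a faithful rendering of the statute.
def triggers_automatic_closure(last_three_ratings):
    """last_three_ratings: letter grades for the three most recent years."""
    return all(rating == "F" for rating in last_three_ratings)

print(triggers_automatic_closure(["F", "F", "F"]))  # True: closure mandated
print(triggers_automatic_closure(["F", "D", "F"]))  # False: one D escapes
```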
Very few charter schools have fallen prey to the automatic closure law in recent years, and only four currently sit on the state’s “watch list.” Yet it has attracted much attention from friends and foes alike. On one end of the spectrum are free-market supporters who believe that parental choice is sacrosanct, that Ohio has the most draconian closure law in the land, and that it wreaks havoc on families and students exercising their right to choose a safe environment (even if it happens to receive an F academic rating year after year). On the flip side, charter opponents have urged the legislature to “accelerate the process” of closing failing charter schools, despite the fact that current law enables relatively swift closure: Poor performers can be shut based on just two years of performance data.
Automatic closure is not nearly as important as the vital work of ensuring that charter school overseers are exhibiting responsible oversight during all phases of a charter school’s life cycle—pre-birth, beginning, middle, and (occasionally) end. Fortunately, recent changes to Ohio’s charter school law finally place stronger controls on authorizers and empower them to intervene so the state doesn’t have to. New provisions will revoke authorizing rights from poorly rated groups, restrict ineffective authorizers from opening new schools, incentivize high-performing authorizers, and clean up conflicts of interest (like allowing authorizers to profit from selling services to schools). This will result in more quality control at the front end of the charter life cycle and stem mass openings of poorly vetted schools, as have occurred in recent years. Rigorous new evaluations of authorizers will motivate them to close failing schools before they sink low enough to hit the watch list or activate the state’s closure criteria.
Ohio’s closure law once provided a much-needed tool for the state to close poor performers during a time when many authorizers allowed them to languish. In light of new reforms, it may soon become irrelevant. If charter overseers are doing their jobs right, we should hope that it will.
[1] Ohio’s closure law, HB 79, passed in December 2006. But that same law also put into place perverse incentives for authorizers, inasmuch as they would lose one school slot (out of a possible fifty or seventy-five, depending on the current cap) for every school they permanently closed. This no doubt played a role in sponsors’ reluctance to close schools in the late 2000s (the provision was later removed from law).
How parental overaspiration might undermine students' learning in mathematics
Is there such a thing as too much parental involvement in a student’s education? Lack of parental involvement is often cited anecdotally as an impediment to student achievement. On the other hand, so-called “helicopter parents” can run their children’s education like drill sergeants. The goal is educational and occupational success, but there is increasing concern that such intense involvement could instead lead to dangerous dead ends. A new study in the latest issue of the Journal of Personality and Social Psychology adds much-needed data to the discussion. (Disclaimer: The study is from Germany, so mind the culture gap.)
There have been a number of studies over the last forty years looking at parents’ aspirations for their children, which is a useful way for psychological and sociological researchers to measure parental involvement. However, the current study’s authors noted two gaps in previous research. First, temporal ordering of effects was not generally considered (i.e., it was assumed that parents’ involvement led to certain academic outcomes in the future, but the current research supposes that kids’ past achievement could lead to more/different parental involvement in the future). Moreover, little effort was made to separate parental aspiration (“We want our child to obtain this grade”) from parental expectation (“We believe our child can obtain this grade”). Often only one or the other question was considered, or both were conflated into one measure. The current study not only looked at those questions separately, but made the difference between them its subject. Remember that point; it’s important later.
The researchers carried out a five-year study with 3,530 students and their parents in Bavaria. It included the annual German math test for the kids and a survey for the parents. Parental aspiration and expectation were measured via separate questions as worded above, both on a scale of 1–6. While a majority of parents in the first year expressed aspirations that matched their expectations, more than 30 percent of parents reported significantly higher aspiration than expectation for their children—a gap called “overaspiration.” Parental overaspiration was negatively correlated with students’ math achievement in that first year.
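To pin down the construct, here is a minimal sketch of how an overaspiration score falls out of the two survey items. The data are simulated, and the negative link to math scores is built in to mirror the pattern the authors report, not derived from the study’s data.

```python
# Simulated illustration of the overaspiration construct; not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n = 3530  # matches the study's sample size; everything else is invented

expectation = rng.integers(1, 7, n).astype(float)   # 1-6 scale
aspiration = np.clip(expectation + rng.integers(0, 3, n), 1, 6)
overaspiration = aspiration - expectation           # the gap described above

# Build in the reported pattern: bigger gaps, lower math achievement.
math_score = 100 + 5 * expectation - 4 * overaspiration + rng.normal(0, 10, n)

print("share of parents with overaspiration:", (overaspiration > 0).mean())
print("corr(overaspiration, math):",
      np.corrcoef(overaspiration, math_score)[0, 1].round(2))
```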
Resurveying parents on expectation and aspiration each year attempted to account for temporal ordering (“resetting” expectations and aspirations before the test each year); overaspiration did tend to lessen over time. But it never disappeared entirely. Remember that overaspiration is the gap between aspiration and expectation. And despite some tempering, many parents continued to have far higher aspirations for their children’s success than expectations that their children actually would reach that aspired-to score. Call it hope, call it delusion, call it whatever you want: Overaspiration persisted throughout the study. We know that high expectations are generally good for kids, but the mismatch seems to have cut against both parent and child in this study, characterized by the researchers as “poisonous” to math achievement.
These findings appear to apply to students regardless of achievement level. The German education system includes separate schools for students on three “tracks” based on their entry-level academic ability, and the aspiration study included proportional numbers of students from all three tracks. The findings were the same. Bearing in mind that previous studies conflated expectation and aspiration data, the researchers also ran the same differentiated analysis on comparable variables in another data set (the Education Longitudinal Study, conducted in the U.S. by the National Center for Education Statistics from 2002 to 2004); the results closely replicated the findings from Germany.
The researchers raise a number of questions needing further study, especially regarding the way aspirations and expectations are made manifest between parents and children (not to mention the possibility of a similar effect between teachers and students). Still, this study seems to suggest that parental involvement may have a limited effect on academic outcomes if it isn’t grounded in a realistic balance of expectation and aspiration.
SOURCE: Kou Murayama et al., “Don’t Aim Too High for Your Kids: Parental Overaspiration Undermines Students’ Learning in Mathematics,” Journal of Personality and Social Psychology (November 2015).