Chances are, you’ve heard something in the past year about test mania. Everyone from superintendents to parents to retired educators has an opinion; even Secretary of Education Arne Duncan has suggested that tests and test prep are dominating schools. Given all this attention, one might assume that students spend hundreds of hours each year taking tests, perhaps even more time than they spend actually learning. A recent report from Ohio’s state superintendent, Richard Ross, paints a very different picture.
The report, required by state law, reveals that Ohio students spend, on average, almost twenty hours per school year taking standardized tests. (This figure includes state tests but not teacher-designed ones.) Twenty hours is a good chunk of time, but the Ohio school year runs about 1,080 hours in total (it varies by district and grade level), so testing takes up only about 2 percent of the year. (The report also shows that students spend approximately fifteen additional hours practicing for tests, which raises the total to just 3 percent.)
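To put those figures side by side: 20 ÷ 1,080 ≈ 1.9 percent of the school year spent taking tests, and (20 + 15) ÷ 1,080 ≈ 3.2 percent once prep time is included, which round to the 2 and 3 percent cited above.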
Despite this small percentage, critics of standardized testing make some valid points. No one wants quality, in-depth learning pushed aside for superficial test prep, and a strong accountability system doesn’t have to mean a test-saturated one. That’s why Superintendent Ross’s report is so valuable: While it reinforces testing’s role in monitoring and improving student achievement, it also recommends ways to limit the time spent taking and prepping for tests.
The idea of reducing testing is popular, but exactly which tests should be cut? The report recommends two changes: eliminating student learning objectives (SLOs) from the teacher evaluation system for teachers in grades pre-K–3 and for non-core-subject teachers in grades 4–12, and eliminating the fall third-grade reading test. Let’s look at each.
The Ohio Teacher Evaluation System (OTES) requires that between 42 and 50 percent of a teacher’s evaluation be based on objective measures of academic growth. Since state assessments and their resulting data cover only certain grades and subjects, other means of estimating a teacher’s impact on learning have been developed for teachers in subjects like art, music, and gym. One of the most common is the student learning objective. The Ohio Department of Education defines an SLO as a measurable, long-term academic growth target that a teacher sets for students at the beginning of the year. SLOs are more than a stated learning objective: They involve a fall pre-test and spring post-test for students, and they detail how that target growth will be measured over time and why that level of growth is appropriate. According to the report, SLOs account for as much as 26 percent of total student test-taking time, so replacing them makes sense. What’s harder to buy is the report’s recommendation to expand the use of shared attribution.
Shared attribution is the practice of evaluating teachers based on test scores from subjects other than those they teach. For example, ODE recommends using a building’s or district’s overall value-added rating as a shared attribution measure. In other words, the process assigns a non-core-subject teacher (music, physical education, art, etc.) an evaluation score based on how well students perform in their core classes (like English and math) with other teachers. Ohio Federation of Teachers President Melissa Cropper made a valid point when she noted that shared attribution doesn’t determine whether all the “shared” teachers are effective; it only points to the effectiveness of the core teachers. The purpose of teacher evaluations is to distinguish effective teachers from ineffective ones. Shared attribution blurs that distinction.
Of course, ODE and Superintendent Ross are constrained by state law, which, as mentioned previously, requires at least 42 percent of a teacher’s evaluation to be based on an objective measure of academic growth. Objective is the key word here: It suggests that growth must be measured by tests (objective) rather than by classroom evaluations (subjective, since they’re conducted by the principal). Perhaps the testing burden of SLOs and the trouble with shared attribution make it worth questioning why we don’t trust principals to evaluate teachers the way we trust supervisors in other fields to evaluate their employees. The legislature should be open to changing the law, and teachers are right to raise concerns about being evaluated on a colleague’s test scores rather than their own.
The report also suggests eliminating the fall third-grade reading test, the first of many opportunities for third graders to demonstrate their reading proficiency under the Third Grade Reading Guarantee. Ross notes that although Ohio has administered the third-grade reading test twice a year for the past decade (long before students were required to read on grade level as a condition of moving on to fourth grade), it is “impractical” to administer Ohio’s new tests (which have two parts instead of one) within the first two months of the school year. The practicality argument is a good one, particularly given how important those early months are for establishing routines and how many other diagnostic tests already fall in that window. Eliminating the fall test would cut testing time by 4.75 hours. Students would still have an additional chance to pass the test in the summer, and districts would still have the option of using a state-approved alternative test during the year to gauge student progress. In other words, teachers wouldn’t sacrifice vital performance data, students would still have more than one chance to pass, and overall testing time would decrease.
Eliminating the fall test would also solve the problem of the misleading headlines that accompany the release of fall scores. When fall 2013 scores came out, the press greeted them with gloom-and-doom headlines; unsurprisingly, when spring results arrived, many media outlets focused on the increase in passage rates from fall to spring. What’s misleading about this emphasis is that it ignores the nature of learning during a school year: Of course more third graders pass the test in the spring than in the fall, because by spring they have several more months of schooling under their belts. Releasing fall score reports so that the media can whip up a premature fear-of-retention frenzy doesn’t do kids any favors; it only stirs up false alarm. Teachers should, of course, assess their students along the way, monitor their progress and needs, and share that information with families. But a formal assessment only two months into the year, yielding data on students who are closer to the end of second grade than the end of third? Unfair and unneeded.
***
In the coming months, it will be interesting to see how the legislature, educators, and other stakeholders react to Ross’s recommendations. If there’s a good-faith effort to maintain accountability while trimming redundant and unnecessary testing, students will benefit and parents will be relieved. But the growing push to do away with testing altogether raises the question of who actually stands to benefit in that scenario. Let’s not forget that while students deserve an education that isn’t consumed by standardized tests, they also deserve schools that are held accountable for providing an excellent education.