Many education stakeholders see the Every Student Succeeds Act (ESSA) as an opportunity to fix the most problematic provisions of the No Child Left Behind Act (NCLB). For many critics, the biggest bogeyman was too much standardized testing and its associated accountability measures. While ESSA maintains the annual testing requirements, it also offers new flexibilities. Among these is the opportunity to apply for the Innovative Assessment Pilot (IAP).
IAP is a provision that permits states to pilot an innovative assessment system in place of a statewide achievement test. “Innovative” is an umbrella term that covers a plethora of different testing options, including (but not limited to) competency-based, instructionally embedded, and performance-based assessments. Regardless of the assessment type chosen by a state, it must result in an annual, summative score for a student. Authority to participate in the pilot—known as “demonstration authority”—will be granted through an application process run by the secretary of education. No more than seven states will be allowed to participate in the pilot for a period of up to five years, with the option to apply for an additional two-year extension.[1]
Folks who are worried that states might use the pilot to weaken state accountability systems will be happy to learn that ESSA establishes guardrails that make that unlikely. As part of the application process, states must demonstrate how they will “validly and reliably aggregate data from the innovative assessment system for purposes of accountability,” specifically the new law’s statewide accountability requirements. The results must also be valid, reliable, and comparable “as compared to the results for such students on the state assessment.”[2] Does that mean that students are double testing—taking both the statewide assessment and the innovative assessment—during the pilot? Yes and no. According to a recent blog post in Education Week, the department’s proposed regulations give states four options for comparing their pilot assessment to their previous statewide test:
- States could give the state test once per grade span (but not every grade) in which students take an innovative test (like New Hampshire).
- States could give both the state test and the innovative test in certain grades, but they aren’t required to give both tests to every student—they could administer the state test to a representative group.
- States could utilize similar questions or items on both the state test and the innovative test.
- States could create their own equally rigorous comparability measure.
States can opt to initially run the pilot in a subset of districts rather than statewide (proposed regulations also permit states to focus on a certain grade or a certain subject). However, the innovative assessment system must be scaled statewide by the end of the pilot period, and states must prove throughout the course of the pilot that there is a “high-quality” transition plan for statewide implementation in place.
The requirements for statewide scalability and inclusion in the statewide accountability system might be two serious deterrents for states that were only halfheartedly considering an application. The fact that there may not be any federal funding to help states implement the pilot is another drawback. And for those brave remaining states that are still interested, the extensive application process could change their thinking. The application’s basic requirements include descriptions of the innovative system a state plans to use, experience the state has with all components of the system, and the planned timeline. Sounds easy enough, right? But check out a few additional items that states must demonstrate in their applications:[3]
- The system must generate results that are valid, reliable, and comparable for all students and subgroups.
- The system must be developed in collaboration with teachers, school leaders, local districts, parents, civil rights organizations, and stakeholders that represent the interests of students with disabilities, English language learners, and other vulnerable students.
- The system must annually assess the same percentage of students overall, and within each subgroup, in schools participating in IAP as would be assessed under other state testing requirements.
- States must describe how they will support teachers in developing and scoring pilot assessments.
- States must describe how they will solicit regular feedback from teachers, school leaders, and parents and how they will respond by making needed changes.
So what about Ohio? Should the Buckeye State roll up its sleeves and dive into the IAP application? Some education advocacy groups have said yes, and with good reason. Ohio is already part of the Innovation Lab Network, which aims to implement student-centered learning approaches. Ohio law already permits the state superintendent to grant waivers to schools interested in piloting an alternative assessment system. The state also boasts a competency-based education pilot and the Ohio Performance Assessment Pilot Project. In short, Ohio seems like the perfect state to take on IAP.
But a solid foundation doesn’t always indicate that it’s time to build a house—and Ohio’s work with innovative assessments doesn’t mean that the state should jump at participating in IAP. Ohio schools are still reeling from administering three separate statewide assessments in as many years. Safe harbor has been in effect since 2014–15, which was the same year that our current standards were first implemented in full. Our current accountability system has reported on, but never actually held schools accountable for, those state standards and their aligned assessments. And speaking of state standards, the Ohio Department of Education (ODE) is presently revising them. According to ODE, schools will be transitioning to revised math and ELA standards during the 2017–18 school year—the same year that IAP could be starting. Add to that the myriad other changes that are quickly approaching with ESSA, and the Buckeye State looks to have plenty on its plate in the coming years even without the huge undertaking of IAP.
To be clear, I’m not saying that Ohio can’t successfully pilot an innovative assessment system. I’m saying that maybe we shouldn’t—at least not yet. The IAP provisions state that three years into the program, the Institute of Education Sciences (IES) must publish a report that examines whether the innovative assessment systems have been successful. The findings from this report will be used to establish a peer review process that will extend the pilot to additional states. Ohio could be one of the first states to apply for the second round of IAP demonstration authority. In the meantime, ODE could focus on getting the rest of ESSA implementation right.
Passing on the first round of the pilot doesn’t mean that Ohio has to abandon its work with innovative assessments either. While gathering stakeholder input for ESSA implementation, the department could gauge interest and ideas for an innovative assessment without the pressure of a looming application deadline and the potential of increased federal oversight in state assessment policy. The results from Ohio’s competency-based education pilot and the Ohio Performance Assessment Pilot Project could be gathered and examined, and Ohio could watch and learn from PACE in New Hampshire and the first round of IAP states.
In short, this may be a good time to observe rather than act. Supporters of innovative assessments often value them because they're different from standardized tests. But opting for something different also means losing out on the features that made standardized tests so popular in the first place: they're cheap, reliable, and easy to administer, and they make comparing results simple. If we're serious about making innovative assessments work for Buckeye students, we have to pick the right time to invest in them, not just jump at the first chance we get.
[1] States are permitted to apply as part of a consortium, but a consortium can't exceed four states, and the limit on the total number of states participating in the pilot remains the same. The secretary of education determines the official start of the pilot, though 2017–18 is the earliest date.
[2] Although states are permitted to use the results from their innovative assessment as part of their accountability system, they are not required to do so—at least not initially. The USDOE’s proposed regulations regarding IAP make clear that the purpose of the program is to eventually use the innovative assessment system to “meet the academic assessment and statewide accountability system requirements” of Title I.
[3] This list does not include all of the IAP application’s requirements.