So far, I am leery of both sets of official tests for the Common Core, at least in English language arts (ELA). They could endanger the promise of the Common Core. In recent years, the promise of NCLB was vitiated when test prep for reading-comprehension tests usurped the teaching of science, literature, history, civics, and the arts—the very subjects needed for good reading comprehension.
In an earlier Huffington Post blog, I wrote that if students learned science, literature, history, civics, and the arts, they would do very well on the new Common Core reading tests—whatever those tests turned out to be. To my distress, many teachers commented that no, they were still going to do test prep, as any sensible teacher should, because their job and income depended on their students’ scores on the reading tests.
The first thing I’d want to do if I were younger would be to launch an effective court challenge to value-added teacher evaluations on the basis of test scores in reading comprehension. In the domain of reading comprehension, the value-added approach to teacher evaluation is unsound both technically and in its curriculum-narrowing effects. The connection between job ratings and tests in ELA has been a disaster for education.
The scholarly proponents of the value-added approach have sent me a set of technical studies. My analysis of them showed what anyone immersed in reading research would have predicted: The value-added data were modestly stable for math but fuzzy and unreliable for reading. It cannot be otherwise, because of the underlying realities. Math tests are based on the school curriculum. What a teacher does in the math classroom affects student test scores. But reading-comprehension tests are not based on the school curriculum. (How could they be if there’s no set curriculum?) Rather, they are based on the general knowledge that students have gained over their life span from all sources—most of them outside the school. That’s why reading tests in the early grades are so reliably and unfairly correlated with parental education and income.
Since the results on reading-comprehension tests are not chiefly based on what a teacher has done in a single school year, why would any sensible person try to judge teacher effectiveness by changes in reading-comprehension scores in a single year? The whole project is unfair to teachers, ill-conceived, and educationally disastrous. The teacher-rating scheme has usurped huge amounts of teaching time for anxious test prep. Paradoxically, the evidence shows that test prep ceases to be effective after a few lessons. So all that time is wasted—time during which teachers could be calmly pursuing real education, teaching students fascinating subjects in literature, history, civics, science, and the arts: the general knowledge that is the true foundation of improved reading comprehension.
The villains in this story are not the well-meaning economists who developed the value-added idea but, rather, the inadequate theories of reading comprehension that have dominated the schools, principally the unfounded theory that when students reach a certain level of “reading skill,” they can read anything at that level. We know now that reading skill—especially in the early grades—varies wildly depending on the subject matter of the text or the test passages.
The Common Core tests of reading comprehension will naturally contain text passages and questions about them. To the extent that such tests claim to test “critical thinking” and “general” reading-comprehension skill, we should hold onto our wallets. They will be only rough indexes of reading ability—probably no better than the perfectly adequate and well-validated reading tests they mean to replace. To continue using them as hickory sticks will distract teachers from their real job of enhancing students’ general knowledge, and will encourage the wasteful, unsuccessful skill exercises that already occupy our classrooms.
The solution to the test-prep conundrum is this: First, institute in every participating state the specific and coherent curriculum that the Common Core standards explicitly call for. (It’s passing odd to introduce “Common Core” tests before there’s an actual core to be tested.) Then base the reading-test passages on those knowledge domains. Not only would that be fairer to teachers and students, it would encourage interesting, substantive teaching and would, over time, induce a big uptick in students’ knowledge—and hence in their reading-comprehension skills. That kind of test would be well worth prepping for.
A version of this post originally appeared on the Huffington Post.