Diane Ravitch, Mike Petrilli, and others have been raising questions about state test scores for some time, especially in New York. This article from yesterday's NYT, which reported that answering 44 percent of math questions correctly on that state's exam sufficed for a passing grade, seems only to add fuel to their fires. (Worse, another recent report suggested that on some tests, guessing randomly would lead to a passing mark.)
Though these are obviously serious concerns, as I've written before, I think we can overdo our skepticism about assessments. For example, NY's response to the basic charge--we lowered the passing score this year because the test questions were harder than before--is certainly plausible and worth investigating.
More generally, though, while the pressures associated with AYP, keeping up with other states, etc. provide reason to look for score inflation, there are legitimately tough questions that testing leaders will always have to wrestle with. This list includes:
- How hard are the questions?
- What score should count as passing?
- What score should count as "advanced" or "demonstrating mastery"?
- How do you make sure the assessments reflect all content standards?
- How do you know that your content standards are tough enough?
- How do you ensure year-to-year comparability while guarding against predictability?