The Chronicle of Higher Education has an important article on pushback from the education research community against the use of randomized studies of education interventions. (We learn, inter alia, that fewer than 10 percent of members of the American Education Research Association "are knowledgeable about randomized trials.") The complaints are familiar: education is too complex to study rigorously, turning research into policy is difficult to do well, and so on. At heart, it's the old argument: does one dare to boil something as complex and subtle as education down to grades, scores, and numbers?

The problem with this argument is that the kernel of truth it contains makes the falsehood that pervades it harder to combat, and all the more pernicious. Of course education is different from medicine or finance. Of course psychometrics is inherently imperfect, since it deals with humans, who often act against their own interests narrowly conceived. And of course it's hard to translate data into policy. No one but the most extreme quantifiers has ever claimed otherwise. None of this is a reason not to avail ourselves of the tools that rigorous research and verifiable data put at our disposal in crafting effective interventions.

Moreover, the stated objections are not the real heart of the matter. Bluntly stated: those who oppose rigorous social science research in education do so not because of concerns about methods, but because of concerns about what that research is revealing about student performance, and about what those revelations imply for programs and policies the selfsame opponents have long advocated.
"No classroom left unstudied," by David Glenn, Chronicle of Higher Education, May 28, 2004