A new study published in Justice Quarterly by Thomas Mowen, John Brent, and John Boman tries to quantify the effect of suspensions on students’ odds of criminal justice involvement. However, like its many predecessors, it comes up short.
Based on data from the National Longitudinal Survey of Youth (NLSY), which was repeatedly administered to 8,984 youth between the ages of twelve and sixteen between 1997 and 2000, the authors conduct a “Hierarchical Generalized Linear Model” analysis that nests time “within” individual students.
Overall, they find that “an individual is 157 percent more likely to report an arrest each year they are suspended relative to a year in which they are not suspended.” (That’s the “within” bit.) Similarly, they find that “an individual who is suspended relative to an individual who is not suspended is 417 percent more likely to report having been arrested.” Finally, they find that students who are suspended more often are far more likely to be arrested than students who are only suspended once.
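To make those percentages concrete: if “157 percent more likely” is read as a relative increase over some baseline arrest probability, the arithmetic looks like the following sketch (the baseline figure here is invented purely for illustration, not taken from the study):

```python
# Hypothetical illustration of what a "157 percent more likely" claim
# implies, read as a relative increase over a baseline probability.
baseline = 0.05           # invented: 5% chance of arrest in a non-suspension year
increase = 1.57           # "157 percent more likely"
suspended_year = baseline * (1 + increase)
print(round(suspended_year, 4))  # 0.1285, i.e. roughly 12.9%
```

The same multiplier logic applies to the between-individual “417 percent” figure, which implies a roughly fivefold relative difference.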
For many reasons, those numbers are interesting, as is the theoretical framework for the study, which focuses on the potential importance of “turning points” (such as getting suspended) in the life of individuals. However, the authors get way out over their skis by claiming to have isolated the “effect” of suspensions on students’ odds of criminal justice involvement.
First, as the authors acknowledge, because the NLSY is a household-based survey, they can’t control for differences between schools, which matters because schools with higher suspension rates are usually located in neighborhoods with higher arrest rates.
Second, although they try to control for individuals’ underlying delinquency (which is the most obvious reason that suspensions and arrests are so strongly correlated), their measure of “delinquency” is problematic. In their words:
The NLSY97 asked youth to report how often they have engaged in any of the following activities within the previous 12 months: selling drugs, carrying a gun, belonging to a gang, destroying property, stealing an item worth less than US$50, stealing an item worth more than US$50, committing any other property crime, and attacking or assaulting someone. To create a single dimension to capture delinquency, we summed each of these seven measures to generate a count of the total number of delinquent acts engaged in over the prior year.
So yes, a student who steals a candy bar and a student who brings a gun to school are assumed to be equally “delinquent.” Which is a little crazy. (As is the notion that delinquent individuals keep detailed records of their misdeeds so they can report them accurately.)
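The severity-flattening problem is easy to see in miniature. A quick sketch with invented data (the field names are hypothetical, not the survey’s actual variable names):

```python
# Hypothetical data: a summed count of delinquent acts, as described in the
# study, erases all information about how serious each act was.
youth_a = {"stole_item_under_50": 3, "carried_gun": 0}  # petty theft only
youth_b = {"stole_item_under_50": 0, "carried_gun": 3}  # gun-carrying only

index_a = sum(youth_a.values())
index_b = sum(youth_b.values())

print(index_a, index_b)  # 3 3 — both youths score as equally "delinquent"
```

Any control variable built this way will absorb petty and serious misconduct interchangeably, which is exactly the objection raised above.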
Third and arguably most important, the authors simply assume that students who reported being suspended and arrested in the same year were arrested because they were suspended. But a year is a long time. And as far as I can tell, they have no way of excluding students who were suspended after being arrested (but within the same year). Moreover, “it is possible that youth who are suspended in school were also arrested in school” (which would certainly undermine the argument that pushing kids out of school makes them more likely to be arrested).
As these examples demonstrate, ultimately the authors have no convincing way of linking suspensions and arrests to one another, even when they do come in that order. Which is just inherently problematic. (For example, suppose that a student was suspended in January and arrested in December. Should we really assume a causal link between the two?) And unfortunately, despite their claim to have “forced temporal ordering by dropping all cases where the youth reported receiving an arrest prior to receiving a suspension,” it seems that some version of this problem also carries over to the authors’ second approach, wherein they claim to establish that students’ risk of arrest is first doubled, then tripled, then quintupled by their second, third, and fourth suspensions (or, to be more precise, by reporting one or more suspensions in two, or three, or all four years of the study, since suspension is binary within each of these years).
Since the authors only drop seventy-nine respondents as a result of the aforementioned “temporal ordering,” it seems that they are not dropping students who reported being suspended and arrested in the same one-year period. In other words, they are once again assuming a causal link between suspension and arrest—even if a student who is first suspended as a freshman isn’t actually arrested until his or her senior year.
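The underlying difficulty is that the data are year-level flags, so any ordering filter can only compare years, never events within a year. A minimal sketch of that limitation, with invented records (the structure and names here are assumptions for illustration, not the study’s actual code or data):

```python
# Hypothetical year-level records: suspension and arrest are each just
# yes/no flags for a whole year, so within-year order is unrecoverable.
records = [
    {"id": 1, "year": 1997, "suspended": True,  "arrested": True},   # same-year tie
    {"id": 2, "year": 1997, "suspended": False, "arrested": True},
    {"id": 2, "year": 1998, "suspended": True,  "arrested": False},
]

def first_year(rid, flag):
    """Earliest year in which the flag is True for this respondent, else None."""
    years = [r["year"] for r in records if r["id"] == rid and r[flag]]
    return min(years) if years else None

# A filter in the spirit of the paper's "temporal ordering" can only drop
# respondents whose first arrest YEAR strictly precedes their first
# suspension year -- same-year ties survive.
kept = []
for rid in sorted({r["id"] for r in records}):
    arrest, susp = first_year(rid, "arrested"), first_year(rid, "suspended")
    if arrest is not None and susp is not None and arrest < susp:
        continue  # arrest clearly came first: dropped
    kept.append(rid)

print(kept)  # [1] -- youth 2 is dropped, but youth 1 is kept even though
             # the arrest may well have preceded the suspension within 1997
```

Youth 1 illustrates the critique: the filter keeps the case, and the model then treats the suspension as preceding (and implicitly causing) the arrest.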
In short, when push comes to shove, all this study really demonstrates (yet again) is that suspensions and arrests are highly correlated. In other words, students who are misbehaving in school are also misbehaving outside of it.
Which isn’t really that surprising.
SOURCE: Thomas J. Mowen, John J. Brent, and John H. Boman IV, “The Effect of School Discipline on Offending across Time,” Justice Quarterly (July 2019).