Efforts to diversify the roster of students classified as gifted often focus on race and ethnicity. Many such efforts seek a single fix for what’s seen as the problem of under-identification. But a recent study examining gifted identification processes among rural elementary schools finds that relying too much on one screening measure may not adequately identify those students capable of high academic performance.
The study is part of a larger project evaluating an elementary-level English language arts curriculum designed for gifted students, and it examines both how those students are identified and how they subsequently perform on assessments. Twelve low-income rural districts participated—including farming, mining, and fishing communities. Eleven were in Virginia and one in the Appalachian region of Kentucky, with the total sample comprising more than 4,500 second-graders.
Students were first identified via established practices (a.k.a. the “district-identified sample”). Those practices varied a great deal and included the use of universal screening in some districts. Mostly, however, districts used cutoffs based on national norms for identification, such as on the Naglieri Nonverbal Ability Test (NNAT), which is intended to assess cognitive ability independently of linguistic and cultural background (for example, by having students analyze patterns of shapes and colors). But none used local norms, nor did any provide training in the use of teacher ratings to identify gifted learners (which is where the alternative identification below comes in).
After the districts had completed their usual process, analysts worked with them to enroll a second group of students known as the “project-identified sample.” This time they administered two universal screeners to all second-graders in the sample. One was a well-known three-part subscale that separately assessed motivation, creativity, and reading. Teachers were trained on how to use the scales, given six months to observe students, and then rated all of their students on those three components. The second universal screener was the CogAT Verbal test, which all students also took. Once all rounds of screening were completed, districts were provided data on all students scoring above the 75th percentile on both national and local district norms for the CogAT, as well as on all students rating one or more standard deviations above the respective norms on the three teacher-rated scales.
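As a concrete illustration, the listing rule described above can be sketched as a simple flagging function. This is a hypothetical reconstruction, not code from the study: the thresholds (75th percentile on both norm sets; one standard deviation on all three teacher scales) come from the description, while the function and variable names are invented, and the study’s exact rule for combining the two criteria may differ.

```python
def flag_for_listing(national_pct, local_pct, scale_z_scores):
    """Return True if a student lands on the data listing, reading the
    criteria as a union: above the 75th percentile on the CogAT Verbal
    under BOTH national and local district norms, OR one or more
    standard deviations above the norm on all three teacher-rated
    scales (motivation, creativity, reading)."""
    cogat_route = national_pct > 75 and local_pct > 75
    scales_route = all(z >= 1.0 for z in scale_z_scores)
    return cogat_route or scales_route

# A student strong on the CogAT under both norm sets is flagged even
# with unremarkable teacher ratings...
print(flag_for_listing(82, 79, [0.2, 0.5, 0.1]))   # True
# ...as is one rated highly on all three scales but below the CogAT cutoff.
print(flag_for_listing(70, 85, [1.3, 1.1, 1.6]))   # True
print(flag_for_listing(70, 60, [1.3, 0.4, 1.6]))   # False
```

Note that under this reading, either route alone is sufficient, which is consistent with the project identifying a larger pool than the districts’ single-cutoff approach.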
In the end, about 760 students were identified using either method—roughly 60 percent through the project’s process and 40 percent through the district’s customary one. Minority (Black and Hispanic) students made up 15 percent of the project-identified group but just 8 percent of the district-identified group.
Not all of the identified students ended up receiving gifted services; only about 500 did. The districts weren’t told which measures or cutoffs had placed students on the data listing, so leaders presumably made the final call about how many additional students to serve based on both the identification data and their available resources.
The analysts used multilevel regression models with district fixed effects to determine how, if at all, students in the two identified groups differed prior to receiving gifted services. After controlling for gender, they found that the project-identified students on average scored 2.6 points higher on the CogAT than did the district-identified students. There was no difference between the two groups on the motivation, creativity, and reading scale ratings, and whether a district also used universal screening had no statistically significant effect on those ratings. Nor did the two groups differ on the Iowa achievement tests, which were administered as pretests.
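The “district fixed effects” in those models can be understood through the within-district transformation: demeaning each variable by its district average, so that comparisons are made only among students in the same district. A minimal sketch with invented data and names (the study’s actual models also included covariates such as gender):

```python
from collections import defaultdict

def demean_by_district(values, districts):
    """Subtract each district's mean from its students' values -- the
    'within' transformation that district fixed effects implement,
    removing district-level differences before comparing groups."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, d in zip(values, districts):
        sums[d] += v
        counts[d] += 1
    means = {d: sums[d] / counts[d] for d in sums}
    return [v - means[d] for v, d in zip(values, districts)]

# Toy example: two districts with very different baseline score levels.
scores    = [10.0, 12.0, 30.0, 34.0]
districts = ["A", "A", "B", "B"]
print(demean_by_district(scores, districts))  # [-1.0, 1.0, -2.0, 2.0]
```

After this transformation, a district that simply tests higher overall no longer inflates the apparent difference between project- and district-identified students; only within-district gaps remain.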
On the post-test measures, students identified through the project process outperformed students identified through the district process by a modest 0.16 of a standard deviation on both the Written Expression subtest of the Iowa assessment and the total Iowa score (both also used as post-tests). There were no statistically significant differences between the groups on the Reading and Vocabulary Iowa subtests, the two writing tasks given to the students, or the four unit assessments based on the ELA curriculum that teachers taught.
The analysts conclude that the widespread use of national norms on nonverbal tests may limit the gifted pool (a point of contention), while identification via multiple routes—in this case, teacher rating scales, two universal screeners, and judgments based on performance on both—shows that students identified through local norms can do just as well or better. That said, this study’s specific focus on ELA in rural areas limits its generalizability. Still, the outcomes—a larger and more diverse pool of students, and equal or slightly better academic performance among the differently identified students—are promising enough to warrant further investigation in other settings.
SOURCE: Carolyn M. Callahan et al., “Consequences of Implementing Curricular-Aligned Strategies for Identifying Rural Gifted Students,” Gifted Child Quarterly (2022).