At its simplest, the belief gap is the gulf between what students can accomplish and what others—particularly teachers—believe they can achieve. It is especially pernicious when beliefs around academic competency are fueled by extraneous information such as socioeconomic status, race, or gender. All too often, the assumption of low academic ability on the part of adults becomes actual underachievement in young people. A new study examines one simple way to filter out that extraneous information and remove the assumptions: grounding teachers' beliefs in demonstrated academic ability.
The data on which the belief gap analysis is based was collected during a separate study on the efficacy of an online student evaluation platform called Assessment-to-Instruction (A2i) in several elementary grades. A2i uses regular, ongoing student assessment not only to track a student's progress through a literacy curriculum, but also to guide teachers on what additional work, and how much of it, students need to reach competency. The belief gap study, conducted by researchers from the University of California, Irvine and Texas A&M University, looked at the effects of both the assessment data and the professional development (PD) around it on teachers' perceptions of student ability.
The A2i study took place in an unnamed district in northern Florida in the 2008–09 school year, in five elementary schools ranging from urban to rural in setting. The belief gap researchers focused on a subset of the participants—twenty-eight teachers and 446 of their first-grade students. Students were representative of the district community: 84 percent were White, 6 percent were multiracial, 5 percent were Black, 3 percent were Hispanic, 2 percent were Asian, and 0.7 percent were Native American. Approximately 46 percent of the students were boys, and 27 percent of the students qualified for the National School Lunch Program (NSLP). All teachers were female, with an average of seventeen years of teaching experience. One teacher identified as Black; the rest identified as White. Fifteen teachers and their 255 students were randomly assigned to the A2i treatment group, while thirteen teachers and their 214 students were assigned to the "control" group.
It's important to note that, due to the design of the main A2i study, a pure control group was not possible for the belief gap analysis. Both groups of teachers received the same amount of PD regarding research-based teaching, but its focus differed: the treatment group's PD centered on why and how to use A2i assessment data to tailor instruction, while the control group received more generalized PD on the potential value of any assessment-guided instruction. Teachers in the control group delivered business-as-usual instruction during their literacy block and implemented a research-based intervention called Math PALS for their mathematics class periods. They received infrequent assessment data for their students but were not asked to tailor their instruction based on that data. The treatment group teachers used Math PALS, too, but utilized the frequent, dynamic assessment feedback from A2i to guide and shape their literacy instruction.
At the midpoint of the school year, teachers completed the Social Skills Rating System (SSRS) for all of the first graders in the study. SSRS is a norm-referenced, multirater assessment tool composed of fifty-seven items across three measurement areas: academic competence, problem behaviors, and social skills. The researchers hypothesized that teachers using the frequent assessment feedback from A2i for the first half of the year (and exposed to the A2i-specific PD) would produce more accurate predictions of student competence than their control group peers, and that potential biases in predictions based on student characteristics would be minimized.
Generally, this hypothesis proved correct. Teachers in the treatment group provided a more accurate rating of their students' academic competence than their control group peers by choosing ratings that agreed with student test scores. Control group teachers—those without access to the A2i assessment data—generally rated the overall academic competence of their students lower, and rated students who qualified for the NSLP as less academically competent than more affluent students. The strength of this effect varied based on the percentage of NSLP students attending a given school: the fewer NSLP students a school enrolled, the lower control group teachers rated those students. Interestingly, teachers' perception of students' social skills and behavior problems appeared impervious to the treatment. Teachers in both groups who rated students' behavior or social skills as poor also predicted lower academic competence for those students.
Students in the A2i classrooms achieved greater gains in test scores between fall and spring than students in the control classrooms, a finding that likely speaks more to the primary study's question of A2i's effectiveness than to the belief gap analysis. However, teacher ratings of academic competence were positively and significantly correlated with test scores in both literacy and math. For example, for every one-point increase in a teacher's rating of academic competence, their student's score on reading comprehension increased by 0.24 points. Thus, while it would be something of a leap to assert that a high competency rating directly results in higher test scores, there is clearly an interaction.
To the extent that teacher ratings are influenced by student and classroom characteristics unrelated to their actual performance—often negatively—any successful effort to mitigate that influence should yield positive outcomes for students. Teachers participating in PD on data-driven personalized instruction were significantly more accurate in their competency judgments regardless of socioeconomic status and other non-academic characteristics. Filtering out the noise is a great first step to eliminating the belief gap.
SOURCE: Brandy Gatlin-Nash et al., “Using Assessment to Improve the Accuracy of Teachers’ Perceptions of Students’ Academic Competence,” The Elementary School Journal (June 2021).