Editor’s note: This essay is an entry in Fordham’s 2023 Wonkathon, which asked contributors to answer this question: “How can we harness the power but mitigate the risks of artificial intelligence in our schools?”
Having spent years in both university and school district administration roles, I’ve seen firsthand how difficult it is to implement, with fidelity, new and innovative ideas that have the potential to serve kids, teachers, and parents. Artificial intelligence (AI) use in schools will be a complex effort involving shared responsibility among vendors, school districts, teachers, students, and the broader educational community. It will require systems of accountability that are agile, transparent, iterative, and responsive as the technology evolves and as we learn more about its use cases, enjoy its benefits, and minimize its harms.
Here is my roadmap for how to ensure that, across a school system, anyone responsible for developing, supplying, procuring, implementing, and evaluating AI solutions is accountable for delivering what is best for students.
Start with the vendors
School systems have a lot of power through their procurement processes to ensure that guardrails, parameters, and expectations are clearly articulated and adhered to by any vendors providing products or services. And while it may sometimes feel like procurement is where all exciting and innovative ideas go to die, a well-developed RFP and procurement process, anchored in a clear and coherent vision for what a district or school is looking for, can be the first step in establishing strong expectations.
School systems should establish expectations from the outset by probing vendors to ensure that their track record and core values are clear; that ethical considerations and data privacy policies are clearly articulated and defined; that information about how AI algorithms work and what data is used to train, improve, and support them is transparent; and that there is no ambiguity about how student data or information will be used. I’ve heard some call data “the new oil,” and districts have a legal and ethical obligation to protect students’ data and information.
Align use with developmental and pedagogical purposes
Beyond vendor selection, AI tools must continuously demonstrate their efficacy and their alignment with evidence-based, age-appropriate, and pedagogically relevant practices. The age of the students, their learning needs, and the specific educational context should all inform the choice and deployment of AI tools. Tools that are out of step with students’ ages or learning objectives undermine the educational process and have no place in schools.
Some might argue that we’ve seen many tools, resources, materials, and technologies used in classrooms with little to no proven efficacy, alignment with evidence-based practice, or improvement in outcomes, so why are we over-scrutinizing AI? The reality is that we should be scrutinizing and holding to account all of those other things cluttering our schools and classrooms in the same way, too.
Craft good policy
The days of no phones, no laptops, and “no looking things up” are over. Policies that over-prescribe limits on the use of technology or the uptake of new innovations are out of touch. Our school systems and classrooms should support, prepare, and incentivize educators and young people to tap into the technological resources at their disposal as they learn to think critically, read, understand and apply mathematics, and collaborate with their peers.
School districts must establish clear policies and guidelines that govern the use of AI within their systems to ensure classrooms are safe and productive learning environments. They must ensure AI policies and use are evaluated regularly and adjusted to reflect the evolving needs of students and advancements in technology. What we know today about AI will be drastically different five years from now, and policies should allow room for change, innovation, iteration, and learning while maintaining alignment to educational objectives and ethical standards.
Develop your people
Supporting teachers to understand and leverage the power of AI, integrate AI into their teaching methods, and adapt their pedagogy to accommodate these digital tools will be critical. AI can provide teachers with meaningful insights and identify areas where students need more support, which is desperately needed as we continue grappling with the effects of the pandemic on student learning.
But as AI tools advance to be able to surface needs and identify targeted supports, school leaders, districts, and policies should ensure that teachers’ professional judgment and triangulated data sources remain critical components in any intervention decisions. Humans must continue to make the final educational decisions that directly shape and impact students’ learning and continue to use professional judgment to ensure students are being served equitably and fairly.
AI tools must augment rather than replace teachers, enhancing rather than diminishing their role. Who doesn’t want to make the life of a teacher a little easier? Can we make teachers’ lives easier and, in the process, support them in personalizing learning to guide students toward mastery of grade-level content, critical thinking, and technological savvy? Sign me up!
Get to know AI systems and build skeptical trust in their recommendations
Educators should also understand how AI systems make decisions and recommendations, particularly recommendations about interventions or supports that target resources to specific groups of students. When users can’t grasp the basis of AI recommendations, trust in these systems diminishes. Some call this the “black box” problem: AI systems making decisions that seem arbitrary or inexplicable. AI tools should provide clear explanations of their reasoning, allowing users to have confidence in the recommendations they receive, and accountability policies should put checks and balances in place to ensure that no AI recommendation is implemented without scrutiny.
Involve students and parents
Parents and students should be at the table helping shape decisions about how AI is used at school and at home. They should be active participants in decision-making processes, informed about how AI is being used in the classroom, and encouraged to provide feedback and voice their concerns. If given the opportunity, parents and students can become advocates for responsible AI use in schools and contribute to a safer and more effective learning environment. They can also play a critical role in ensuring accountability when things are not progressing well or change is needed.
Pay attention to bias
At its core, accountability in AI use in schools must include a focus on values, ethics, and fairness. AI systems should be designed to be fair, unbiased, and free from discrimination, and they should not perpetuate or exacerbate existing inequalities. Schools and districts must ensure AI tools are trained on representative, diverse, and vetted data, with safeguards against algorithmic bias, stereotyping, and discrimination. When an AI tool demonstrates bias, accountability policies should require that the tool be retrained or corrected so the bias does not recur.
Assess impact
No one fully understood how smartphones or social media would transform every aspect of our lives in the span of fifteen years. AI is a dynamic field, and its impact on education is probably beyond what any of us can comprehend today. The only way we can keep up is by building strong guardrails and regularly assessing and evaluating the extent to which AI tools are enhancing educational outcomes, while constantly anticipating and responding to unintended consequences as they emerge. These evaluations should draw on academic assessments, surveys, and feedback from teachers and students, and the data collected should be used to refine AI implementation strategies and inform policy decisions.
Let’s get this right.