This year’s Wonkathon is over, and the results are in!
2023’s Wisest Wonks:
- Alex Spurrier and Amy Chen Kulesa for "Reimagined: AI can be a catalyst for a more learner-centered system of education, if done right."
Second place:
- Khaled Ismail for "Accountability will be the bedrock."
Third place:
- Jeremy Roschelle for "Harnessing powerful AI while mitigating risks: It’s about the data!"
Thank you to all of our participants! Read about the competition, as well as all eleven entries, below.
—
Innovators such as Elon Musk have described AI as the single most powerful tool that humanity has ever invented. Even if we don’t accept so stark a description, it is undoubtedly a technological development on par with the smartphone, the search engine, or the modern computer. And like those technologies, we should expect it to alter, enhance, and disrupt every sector.
It’s already doing that with K–12 education. AI-powered software now helps teachers plan lessons, differentiate instruction, and provide student feedback. Districts are using it for administrative tasks such as rezoning and scheduling. Khan Academy added an AI chatbot that acts as an individualized tutor for every student using the software. Verging on science fiction, districts are experimenting with AI-powered robots for school safety, and some Chinese students wear headsets that send biometric information to teachers, who can then track who’s paying attention and who’s busy daydreaming. (Others may also be tracking them.)
Proponents focus on two major promises of AI: first, personalization that will boost student learning; and second, AI’s capacity to quickly accomplish mundane tasks and thereby free up administrators, teachers, and other district personnel to focus on high-impact activities, such as parent communication, one-on-one instruction, and college and career counseling.
Yet education is also awash in well-founded concerns about AI: questions about cheating, student privacy, human versus automated decision making, teacher-student relationships, and equitable use, along with anxieties about whether it renders obsolete the conventional knowledge and skills routinely taught in K–12 education.
Innumerable questions beg for answers, but for this year’s Wonkathon, we’re focusing on one overriding issue: How can we harness the power but mitigate the risks of artificial intelligence in our schools? The following guiding questions are not exhaustive, but responses to them may provide some of the direction and answers that teachers, administrators, and district leaders need.
- Are the current ways in which AI is being implemented making American schools more effective, less effective, or no different?
- What empirical evidence do we have so far that new tools are succeeding?
- What will need to happen to incentivize the evaluation of new AI tools?
- How can we implement accountability to ensure efficacy? Should we focus on evaluating the vendor, the tool, the district using the tool, the teacher, or somebody or something else?
- How are districts implementing AI, not just for learning tasks, but also for administrative tasks?
* Note that we are not asking for specific pitches or marketing materials from individual vendors using AI, but rather objective descriptions of how AI technology is being used and its empirical impact.
WHAT’S A WONKATHON?
For nine years now, we at the Thomas B. Fordham Institute have hosted an annual Wonkathon on our Flypaper blog to generate substantive conversation around key issues in education reform. Last year’s forum focused on how states can remove policy barriers that keep educators from reinventing high schools.
As in years past, we’ll encourage our audience to vote for the “wisest wonk,” an honor previously conferred on such luminaries as Keri Ingraham, Angela Jerabek, Abby Javurek, Jessica Shopoff, Chase Eskelsen, Christy Wolfe, Seth Rau, Joe Siedlecki, McKenzie Snow, Claire Voorhees, Adam Peshek, and Patricia Levesque.
If you’re keen to jump in—and we hope you are—please let us know and indicate when we can expect your draft. We will publish submissions on a rolling basis, so send yours as soon as it’s practical for you, but no later than Monday, November 6. Aim for between 800 and 1200 words. Send your essay to Brandon Wright, Fordham’s Editorial Director, at [email protected], as soon as it’s ready. And please be sure to answer the fundamental question: How can we harness the power but mitigate the risks of artificial intelligence in our schools?
Let Brandon know if you have any questions. Otherwise, may the wisest wonk win!
THIS YEAR’S SUBMISSIONS:
- Accountability will be the bedrock of AI in education, by Khaled Ismail
- An overlooked application for AI: The city as a Montessori shelf, by Travis Pillow
- From chalkboards to chatbots: Ethically embracing AI in education, by Jen Stauffer
- Harnessing powerful AI while mitigating risks: It’s about the data!, by Jeremy Roschelle
- How we can use AI to increase access and equity in science education, by Melissa Peplinksi and Haley Gaudreau
- Mitigating the risks of AI in today’s schools: A new taxonomy for the information age, by Beth-Ann Tek, Ph.D.
- Reimagined: AI can be a catalyst for a more learner-centered system of education, if done right, by Alex Spurrier and Amy Chen Kulesa
- Robot teachers, by Anonymous
- Transformative for the motivated and mere meh for the unmotivated: How AI will and won’t affect learners, by Sean Geraghty and Mike Goldstein
- Using artificial intelligence to measure the effectiveness of professional learning, by Annie Morrison
- Why AI doesn’t worry me in the classroom, and why it does, by Thomas Courtney