The claims from the field of education technology—“ed tech” to insiders—could hardly be more grandiose. In 2013, columnist Thomas Friedman said that “nothing has the potential to lift more people out of poverty” than massive open online courses (MOOCs). Mark Zuckerberg has suggested that his computer-based learning program would launch mediocre students into the 98th percentile through personalized learning. Today, as generative AI programs are leaving us gobsmacked with their creative capabilities, Khan Academy’s Sal Khan says that chatbots will soon provide “a personal AI tutor for every student.”
So far, none of this has come to pass. And a new article by Matt Barnum at Chalkbeat dissecting the disappointing results of Summit Learning, an online personalized learning platform developed by Zuckerberg’s philanthropic venture, ought to give ed tech believers pause. After titanic investments in this software, the best evidence of effectiveness that Summit’s promoters can summon is that some schools have had good experiences with it and surveys have shown that school leaders view it favorably.
The article points to the scale of the ed tech letdown: The Chan Zuckerberg Initiative (CZI) has poured more than a hundred million dollars into Summit Learning, which is meant to leverage technology to individualize each student’s education, offering personalized assignments, faster feedback, and access to features such as digital planners. Now, not only is there no evidence of the revolutionary effects Zuckerberg suggested, but there are apparently no rigorous studies showing even modest effects on outcomes like student numeracy and literacy. Barnum notes that fewer schools are using the program now than used it just a few years ago.
Yet education technologies like Summit have not just failed to live up to their hype, they’ve often changed very little about schools at all. Arguably, the greatest impact on learning from ed tech in recent years has been to offer low-quality computer-based courses that allow failing students to “recover credit,” a development that seems to have more often led to scandals and lowered standards than to any improved learning.
Sometimes, it seems like those who design and promote these new technologies have not considered for even a moment how or why students would choose to use them. From the decline of the MOOC to the lackluster effects of personalized learning, what ed tech disappointments have in common is that their success was premised on the idea that the right “education delivery mechanism” was all students were missing. This “delivery model” is pervasive: education is thought of as a good to be distributed rather than something each student must earn for herself. Yet schools cannot “deliver” algebra to a student any more than Jenny Craig can deliver six-pack abs to a customer. The obstacle to learning is often not a lack of resources but a lack of uptake on the part of the students.
Yet educationists perennially ignore student agency and effort. Considered from the student’s point of view, many education technologies may indeed prove a boon to learning, but the largest gains will go to students who are motivated to use them. To become motivated, students need strong intrinsic drives, peers who push them to work harder, a family that incentivizes academic performance, an inspiring teacher, or some shorter-term stakes attached to their academic effort. The last of these is a practical mechanism that education policy can craft, yet it is too rarely discussed within the field.
In 2023, as generative AI is on the rise, we are beginning to hear a new round of ed tech hype. John Bailey argues in Education Next that, in contrast to earlier waves of technology that ultimately disappointed, this time is different (emphasis mine):
While past technologies have not lived up to hyped expectations, AI is not merely a continuation of the past; it is a leap into a new era of machine intelligence that we are only beginning to grasp. While the immediate implementation of these systems is imperfect, the swift pace of improvement holds promising prospects. The responsibility rests with human intervention—with educators, policymakers, and parents to incorporate this technology thoughtfully in a manner that optimally benefits teachers and learners.
Bailey gets this mostly right—and generative AI truly is a startlingly powerful emerging technology—but the ultimate responsibility for learning resides with the learners themselves. Otherwise, students will do what they have done with every other technology that has made its way to their screens: use it to have fun instead of doing the hard work of challenging and investing in themselves. If we want students to learn more, the educators, policymakers, and parents that Bailey mentions need to be laser-focused on why students should choose to spend their time studying when many more entertaining options are available to them. And that’s true whether the ed tech in question is a genius chatbot or simply a public library.