If you’ve been on social media lately, you’ve doubtless encountered fictional stories and essays generated by “ChatGPT,” an artificial intelligence program that can generate remarkably solid pieces of prose in response to prompts both serious and whimsical, and can do so instantly and in any imaginable style.
Some find this thrilling. Others, mostly writers and teachers, are filled with existential dread. “My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change,” wrote English teacher Daniel Herman in The Atlantic.
My hunch is that’s a lot less true than he thinks. For starters, let’s immediately dispense with the idea that artificial intelligence will make writing instruction obsolete. As Herman put it, “It’s no longer obvious to me that my teenagers actually will need to develop this basic skill.” Remember the “twenty-first-century skills” movement? There was breathless insistence that, with all the world’s knowledge now in our pockets and mere keystrokes away, K–12 education should prioritize critical thinking, problem solving, creativity, and communication. The question on the lips of education’s smart set back then was, “Why cram kids’ heads with a bunch o’ facts when we have Google?” Similarly, why teach writing when “AI” can generate sophisticated text at the touch of a button?
E.D. Hirsch, Jr. nailed the answer twenty years ago. “The Internet has placed a wealth of information at our fingertips. But to be able to use that information—to absorb it, to add to our knowledge—we must already possess a storehouse of knowledge,” he wrote. “That is the paradox disclosed by cognitive research.” More recently, University of Virginia professor Dan Willingham wrote that research in cognitive science has shown that “the sorts of skills that teachers want for students—such as the ability to analyze and to think critically—require extensive factual knowledge.”
In other words, it takes knowledge to communicate knowledge—or even to have the discernment to judge whether an AI-generated piece of text makes sense or sufficiently responds to a prompt. Herman writes that what GPT can produce right now “is better than the large majority of writing seen by your average teacher or professor.” Perhaps so, but this is a problem unique to the education equivalent of the “worried well” and their teachers, not the far larger number of students for whom constructing a tolerable five-paragraph essay is a daunting challenge. A tiny minority of American high school students are like Herman’s, discussing and analyzing “Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick.”
Skilled teachers who know their students have seldom failed to notice when a turned-in assignment has the thumbprint of a little extra help from home or simply doesn’t sound like original work. It will be no different with AI. Another English teacher, Peter Greene, whose rural Pennsylvania students are far more typical than those in an elite prep school, noted in Forbes that ChatGPT covers gaps in its knowledge by making things up and embellishing. “In other words, it has an eerily human capacity for bullshitting its way around gaps in its data base.” Such teachers will not easily be fooled. Greene makes the excellent suggestion that teachers try out their writing assignments on the chatbot. “If it can come up with an essay that you would consider a good piece of work, then that prompt should be refined, reworked, or simply scrapped.”
Concerns that AI makes writing instruction obsolete are manifestations of the “Curse of Knowledge,” an idea popularized by Chip Heath and Dan Heath in their 2007 book Made to Stick. The curse of knowledge is “a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand.” It’s a common problem in education: because the well-educated and language-proficient are already rich in knowledge and sophisticated language, problem solving, critical thinking, and clear, sophisticated written analysis all feel to them like “skills” that can be practiced and mastered (or plausibly faked via artificial intelligence). Herman worries that AI will make it easy for students to avoid “doing the hard work of actual learning,” but there’s no reason to think the average high school student could even read the machine-generated essays ChatGPT creates, let alone plausibly pass one off as their own work.
In sum, the threat is not that ChatGPT will make writing instruction obsolete; the real danger is the assumption that it will. The world of futurists, technology enthusiasts, and educated elites is simply not the same as the world occupied by the substantial majority of students struggling to reach basic levels of language proficiency. Artificial intelligence will provide time-saving tools for knowledge-haves, but if we repeat the mistake of the “twenty-first-century skills” promoters, it will be fatal to the interests of knowledge have-nots, who will not be given the opportunity to develop the knowledge base that the well-educated take for granted, and that makes those tools useful.