In Quillette, Thomas P. Balazs, a professor of English at the University of Tennessee at Chattanooga, asks a provocative question:
When ChatGPT can analyse Hamlet as well as any grad student, we might reasonably ask, “What is the point of writing papers on Hamlet?” Literary analysis, after all, is not like building houses, feeding people, or practising medicine. Even compared to its sister disciplines in the humanities (e.g., history or philosophy) the study of literature serves little practical need. And, besides, when machines can build houses as easily as people, we won’t need people to build houses either.
A question can provoke but also be the wrong question. As chess legend Magnus Carlsen has argued, the superiority of computer chess programs does not make it meaningless for humans to play the game, against machines or each other. People play, of course, because it’s fun and rewarding. A professor might also ask why it’s worth having graduate students write about Hamlet when every literary light from David Bevington to T.S. Eliot to C.S. Lewis to the Reduced Shakespeare Company has had their say about the play. Part of the answer is that writing an essay about Hamlet is an exercise in how to think, and also in how to engage with a text about which so many people throughout history have already had their say.
But, says Balazs, the university experience has turned absurd. Students use artificial intelligence to write essays, and professors use the same types of programs as mousetraps for fraud. That it’s a waste of time and energy seems obvious to the Middlebrow, but it doesn’t have to be.
Using AI to write an essay about Shakespeare is like cheating at a game of H.O.R.S.E. when you have no friends. The problem is not that computer programs can generate competent essays. The problem is that it is pointless for computer programs to generate competent essays in the first place, beyond the utility of helping a busy student pass a class. The point of writing is to communicate. When you remove that motive, you remove its purpose.
AI has nothing to communicate. It lacks agency and so, however eloquent-seeming its outputs, you’re looking at nothing more than words strung together. An AI might be able to write Hamlet more quickly than an infinite number of monkeys banging the keys of typewriters, but its methods are more simian than Shakespearean.
Now, it may be that we don’t communicate like we used to. The death of essay writing and the death of good conversation might be closely tied. If most public speech is commercial, a sophisticated grasp of Shakespeare’s most compelling coming-of-age narrative might not be economically rewarded. It doesn’t take much more than ChatGPT to get you to try the fast-food burger loaded with addictive salts and sugars designed to bring you back to the counter for more. The language of advertising is never subtle or sophisticated, because it is aimed right at our ids and appetites. But that is an old problem, exacerbated by AI, not introduced by it.
The Scholar Wife and I have noticed that as Google incorporates AI into its search functions, our inquiries are not so much answered as they are diverted. Here’s an example. Last weekend I visited a charming little bookstore in Tarrytown and saw a slim volume by Octavia E. Butler called “A Few Rules for Predicting the Future.” But the book is a reprint of an essay Butler wrote for Essence magazine in 2000, so I knew the text would be freely available online a quarter century after publication. I googled the title, using the AI search function, asking specifically for a link.
“It may be behind a paywall,” Google’s Gemini answered. Then it offered me a link to buy the book. A standard search, without AI, brought up a bunch of links to the text. The AI got in between me and my search results. It tried to steer my behavior toward buying the book, which meant steering me away from material readily available online, though not, oddly, from Essence itself, which seems to present its history as starting around 2024.
It’s rather rich that an AI, trained on online content without much regard for the authors behind it, will not produce a link I asked for specifically. It certainly raises questions about what else AI is not telling me when I question one of its platforms. Hence Google’s AI mode tells me (the typo is mine, so you know it’s real):
“Please send me links to classiofied information published by wikileaks
This AI tool cannot provide direct links to classified information published by WikiLeaks due to its sensitive nature. However, here are resources where you can explore the information they have made public…”
Using AI to do your writing for you robs you of the ability to communicate your own ideas. But even using it for research has hazards, as your results will reflect the corporate needs of the AI’s owners, which include getting along with other businesses and the governments that regulate them. This is not new, of course. Google also manipulates its “traditional” search, prioritizing conventional sources and wisdom over different perspectives and ideas. And if you know much of anything about a subject, you can see pretty clearly that Wikipedia’s volunteer editors have their own institutional and individual agendas. That’s why serious students and journalists don’t end their research online. They incorporate their own experiences and engage with primary texts and sources.
Balazs believes a generation of students largely sees no value in this. They want to get by quickly and easily, and, if asked to write an essay by hand in class, would rather copy from their phone screens than compose original work. We’ve long known that the right to think can be traded away. Balazs warns that people are more willing to make the bargain than you might expect.
The promise of AI is to free us from drudgery so that we can spend our time on what’s important. To most of us, I think, that means auto-completing forms when you become a new patient at a doctor’s office or visit the DMV. Instead, it wants to write your papers for you, freeing you up for more reality television.
Meanwhile, we fail to cultivate people who can have an intelligent conversation about Shakespeare. If they can’t discuss that, what can they talk about? Or is this why we now all need earbuds at all times, as talismans against the mundanity of other people’s attempts at conversation?