Over at New Pop Lit, editor and provocateur Karl Wenclas asks about artificial intelligence:
“What are the real pros and cons of A.I. technology applied to writing and literature? What’s the best strategy to follow: to embrace the technology, or find ways to defend against it?”
Karl rightly worries that people who write and read books will face the advance of AI services like ChatGPT with the same combination of willful ignorance and outright complaining that the literati employed when movies, television and the internet emerged to compete with books for people’s time and attention. Wenclas believes a showdown is looming between techno-evangelists who will support AI everywhere and another camp entirely: “At the other extreme are writers and literary critics who can’t conceive of any change to the refined literary art they know and love. Their essays overflow bemoaning the dwindling status of ‘serious reading,’ as they look back fondly at past ‘avant-garde’ innovators such as Virginia Woolf, now safely dead.”
If you haven’t tried ChatGPT, The Middlebrow thinks you should. There are no points to be won here for ignorance. This really is an evolutionary step for online search: instead of a list of sources, it generates a coherent answer to a query. If Google is a massive library card catalogue, ChatGPT is more like an encyclopedia that generates articles for you as you need them.
If I ask Google, “Is inflation a monetary phenomenon?” it responds with nearly 6 million avenues for exploration, including papers from the Federal Reserve Bank of San Francisco, the Hoover Institution, the Cato Institute and the Heritage Foundation. If I ask ChatGPT, it tells me that, though some people disagree or have other interpretations, inflation is in fact a monetary phenomenon, which is the consensus of online sources. Google is flawed because it makes The Middlebrow dig for dissenting views. ChatGPT is worse because it waves a digital hand at them.
As for the writing? It’s not vivid or creative but it’s competent and clear, which puts it ahead of many humans, including well-educated and thoughtful humans who just don’t consider writing a lively art. Competent and clear is a big step forward, given the kinds of written material that we all have to read in our daily lives, including important information sent to us by our government, employers and service providers like insurers.
While I would not want to read a new translation of The Odyssey by ChatGPT, it might well do a better job than most people of explaining how to troubleshoot my refrigerator. Maybe it can read an X-ray and describe the results more successfully, too. Maybe it can succinctly explain the policy differences between two political candidates. It can do a lot.
ChatGPT will do a lot, or some similar service or technology will. It works too well to be ignored. There is a danger here, and The Middlebrow believes it will be studied and explored in depth: there is no way for ChatGPT to answer social, economic and political questions without, at some level, staking out a point of view. More often than not, as it did with the inflation question, it will live squarely within conventional wisdom, with a few caveats.
In that sense, the Google search is sometimes preferable. If your goal as a researcher is to question the conventional and come up with something surprising, then you want the chaos of many views rather than answers synthesized by algorithm. It is certainly the case, and very worrisome, that most people will accept ChatGPT’s answers as “final,” so what it has to say about controversial subjects will itself be hugely controversial. The government and corporations will likely be very interested in what ChatGPT has to say about national security, law enforcement, public health, race, religion, class and other topics where those in power have preferred narratives.
OpenAI encoded protections against racism into ChatGPT, so it won’t answer if you ask it whether we should torture Syrians. But one researcher showed it isn’t so hard to get at the underbelly of racism that ChatGPT accesses by mindlessly trawling the internet to summarize mass opinion:
Professors and teachers, aided by technology, will also have to look out for academic fraud. The point of assigning essays is that the exercise teaches students how to reason and how to organize information. Students who rely too heavily on this service will reduce the exercise to pure regurgitation.
Finally, we have to wonder where this leaves fiction. The Middlebrow’s first thought is that ChatGPT could be a character in many stories, the same way the computer of the U.S.S. Enterprise has been a character in Star Trek since the 1960s.
There must also be creative possibilities here, like using highly structured prompts to get ChatGPT to respond in interesting ways for comic, dramatic, ironic or just uncanny effect. Some writers may use prompts to generate short- and long-form fiction. That could be interesting, especially if the collaboration is disclosed, and perhaps the prompts are revealed to the reader only after the text has been read.
It won’t do just to say “no,” and to pretend this has never happened. Whether we are ruled by AI or it is ruled by us will depend largely on how facile we become with the technology — that’s how we will keep artificial intelligence from becoming an artificial authority.