About two-and-a-half years ago, The Middlebrow published a brief review of The Belan Deck by Matt Bucher. It’s a novella about a corporate communications executive who is writing a slide presentation about an emerging artificial intelligence project. We join the narrator on their way home from San Francisco to Austin. The slideshow has sprawled to more than 100 entries, and our marketer is caught up in humanistic, rather than corporate, concerns.
In real life, The Middlebrow works in corporate communications, for a company that has something to do with something we like to call “The AI Ecosystem” (we “enable” it, or “help enable” it, depending on the day), so the human issues become very interesting. The corporate issues are more obvious. AI can be very efficient, even when it needs humans to guide it and check its outputs. For all their deficiencies, AI programs work quickly, without complaint or hang-ups, and they will do the same jobs forever without asking for a raise or a promotion. You do have to feed them massive amounts of energy, which can be costly, but there’s some belief out there that, through a combination of streamlining operations and finding new ways to generate and transmit electricity, AI will solve that problem for us.
AI isn’t perfect, of course. It gets things wrong. But the same is true of people, who are harder to deal with. When people make mistakes, you have to figure out a way to tell them about it while motivating them to redo the work, which can take time. What a mess. An AI will just redo the work. Guiding it toward a better output might take longer than the program takes to execute the instructions. This is automation. Nothing new.
Like a lot of people who enjoy writing and make some sort of twisted living at it (believe me, corporate communications was not the driving ambition), I was alarmed from the start by the thought of AI taking my job. Though, to be honest, I have worked with journalists who have been talking seriously about this possibility since the early 1990s, and some basic forms of journalism, like writing up corporate earnings each quarter, have been automated for many years. But that’s really template-based work. AI does a lot more than that.
What really freaked me out was the first time I heard an executive say, “AI is good because it reduces cognitive load for our employees.” That phrase really scares me. When you develop a euphemism for “thinking,” you are headed down an anti-humanist path because the act of thought is the human condition. Aristotle will tell you so. Plato will tell you so. The Pre-Socratics will tell you so. Early humans scrawling on the walls of Lascaux after a hunt will tell you so.
“Relax, relax,” the executive would likely say to me, “I just mean that you can offload the tedium, so you can spend your time dealing with the complex and interesting and high-value…” But this isn’t how thinking works, is it? Reasoning is a practice. Creativity is a practice. One thing I hear a lot is, “I don’t use AI to write for me, I use it to get past the blank page. It generates a first draft and then I do the real work.” Well, to me, the first draft is real work. Skipping the blank page is like putting a “My car climbed Mt. Whatever” bumper sticker on your car and telling people you’re a mountain climber.
Cognitive load makes your brain stronger. It’s important. Simple tasks lead to more complex ones. Big ideas are built on smaller, fundamental ideas. Knowing where your information comes from, how it is sourced and created, matters. “Gemini told me so” lacks epistemic value. I’ve mentioned it here before, but in the Wired feature about life at the AI company Anthropic, you will find the disturbing detail of a worker who was admonished for making their own slides for a meeting when the company’s wunderprogram would presumably have done a better job. It’s a short distance from “Here is a tool you can use” to “You must use this tool.” It’s also probably a short distance, when the tool in question performs a task that resembles (but is not) thinking, between “use the tool for work” and “work for the tool.” I doubt we are far off from humans essentially reporting to AI programs. It’s probably already happening in unacknowledged ways. For example, if an AI determines the work schedules for a large number of hourly retail employees, essentially telling them when to show up, take breaks and leave, all while determining how much money they will make, what entity is really in charge?
Which brings us to Bucher’s new novel, The Summer Layoff, where we rejoin our hero, now separated from his employer. With ample severance and time, he keeps a daily diary of his reading, thinking and walking throughout an Austin summer. It involves getting lost in the desert, getting lost in the Internet and getting lost in thought. It’s written in aphorisms and observations, including trivia (“The average person is related to 5,000 people on their mother’s side and 7,000 people on their father’s side”) and reflections about work and identity.
The Summer Layoff celebrates human thought and the kind of musing that AI could never perform, much less enjoy. There’s a bit where the narrator recalls suggesting that every month or so, employees at Belan should have a day free from tasks, to think about how they approach their jobs and lives, to recharge creatively and find new approaches. The founder replies that his employees should do such work on their days off.
Once free from the “cognitive load” of work at Belan, our hero finds the cognitive freedom to enjoy and explore ideas that lack economic utility. It’s interesting that Bucher’s latest arrives now, alongside research from MIT showing there is a cost to working with and alongside AI. Shedding cognitive load, it seems, can create cognitive debt. Our brains can calcify and atrophy if robbed of free-range play.
Be warned.