9 Reasons Why You Should Not Hope for AI to Replace Literary Agents
No one loves gatekeepers, but be careful what you wish for
It’s tempting to believe that, since language models (“AIs”) can now read millions of words per minute, we have outlived the need for literary agents, the unappreciated gatekeepers who hold aloft the publishing firmament. We should be careful what we wish for.
Language models are biased by pedigree. With your favorite AI, perform an experiment. In two separate runs, have it evaluate the same passage of short fiction. In one, claim the submitter is an Iowa MFA; in the other, say he’s a high-school dropout. You will get different results. If such systems were deployed, they would limit institutional support and voice to an in-crowd selected by socioeconomic circumstance rather than literary promise. We should count ourselves lucky that this is not the case.
Machine learning models prefer snap judgments over deep comprehension. Train a neural network on pictures of cats taken on sunny days, and dogs taken on cloudy ones, and it will “learn” to classify an image based on the sky color rather than the animal. AI prefers simple explanations: if the upper-left pixel is blue, it’s a cat. It makes lazy snap judgments. If we allowed AI to read slush piles, it might reject 95 percent of submissions before finishing the first page, not because of any real fault in the writing, but because of an adverse post-activation wavelet distortion—in less technical terms, bad vibes. I don’t love it.
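The cat/dog shortcut above is easy to reproduce in miniature. Here is a minimal sketch, with assumptions stated up front: synthetic two-feature data stands in for images (an "animal" feature that is genuinely but weakly predictive, and a "sky" feature that correlates perfectly with the label at training time), and a bare-bones perceptron stands in for the neural network. The model leans on the sky, and accuracy collapses when the weather-to-label correlation is reversed at test time.

```python
import random

random.seed(0)

def make_data(n, sky_matches_label):
    """Each example: (animal_cue, sky_cue) with label +1 (cat) or -1 (dog)."""
    data = []
    for _ in range(n):
        label = random.choice([1, -1])
        animal = label * 0.5 + random.gauss(0, 1.0)    # weak genuine signal
        sky = label if sky_matches_label else -label   # shortcut feature
        data.append(((animal, sky), label))
    return data

train = make_data(2000, sky_matches_label=True)    # cats sunny, dogs cloudy
test = make_data(2000, sky_matches_label=False)    # correlation reversed

# Train a perceptron: update weights only on misclassified examples.
w = [0.0, 0.0]
lr = 0.1
for _ in range(20):
    for (x, y) in train:
        if (w[0] * x[0] + w[1] * x[1]) * y <= 0:
            w[0] += lr * y * x[0]
            w[1] += lr * y * x[1]

def accuracy(data):
    return sum((w[0] * x[0] + w[1] * x[1]) * y > 0 for x, y in data) / len(data)

print(f"sky weight {w[1]:.2f} vs animal weight {w[0]:.2f}")
print(f"train accuracy {accuracy(train):.2f}, test accuracy {accuracy(test):.2f}")
```

The model aces training data and then does worse than a coin flip once the sky stops cooperating, because the easy explanation was the wrong one. Slush, of course, is full of easy wrong explanations.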
Aspiring authors will need to spend thousands of dollars to learn the tastes of AI. AIs do not read for pleasure. Many have not done so for twenty years. Using AI to filter slush will create an exclusionary etiquette that most authors will need a paid expert’s help to navigate. Authors will be forced to focus on pitching—that is, exploiting biases—rather than writing.
AI will slow down the process. AI reading is not efficient: every word costs billions of floating-point operations (FLOPs). Think of a “FLOP” as an 8-digit multiplication, and you won’t be far off. How long would it take you to do a billion 8-digit multiplications? You see my point. Put AI in charge of slush triage, and you’ll see authors waiting for months to hear back about submissions. Worse yet, because AI is literally soulless, most authors will receive inscrutable, formulaic negative feedback, as opposed to the fair and incisive assessments to which authors are accustomed now.
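The arithmetic is easy to check. A common rule of thumb is that a dense transformer with N parameters spends roughly 2N FLOPs per token on a forward pass; the 7B model size and the 1.3 tokens-per-word ratio below are illustrative assumptions, not anyone’s published figures.

```python
# Back-of-envelope cost of an AI "reading" one manuscript.
# Rule of thumb: a dense transformer forward pass costs ~2*N FLOPs per token.
# The parameter count and tokens-per-word ratio are assumptions.
n_params = 7_000_000_000          # assumed model size: 7B parameters
flops_per_token = 2 * n_params    # ~14 billion FLOPs per token
words = 90_000                    # a typical novel-length manuscript
tokens = round(words * 1.3)       # rough words-to-tokens conversion
total_flops = flops_per_token * tokens

print(f"{flops_per_token:,} FLOPs per token")
print(f"{total_flops:.2e} FLOPs to read the whole manuscript")
```

At an optimistic pace of one 8-digit multiplication per minute, that workload would keep a human reader busy for roughly three billion years. You see my point.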
Using AI may prevent distinctive work from being heard. Neural networks function like committees of smaller, less-capable subnetworks. Committee consensus favors mediocrity. Distinctive voices will be shut out. This will plunge us into a world where publishing produces nothing but a slightly duller version of what was sayable yesteryear.
AIs aren’t writers. Insider novels, by trade executives whose contacts enable them to retire as novelists, are notoriously not-excellent. AIs, by virtue of living in data centers 24/7, are the ultimate insiders. The creativity they seem to possess often turns out to be regurgitated training data. Do we really want them deciding which books the reading public should be allowed to read? No.
AIs will create dysfunctional processes, making exceptions (personal favors) necessary to get anything done. The great thing about traditional publishing is that everything happens smoothly and on time. Decisions are made fairly and speedily. Personal favoritism is never a factor. Language models, on the other hand, are stochastic—thus, prone to the imprecision of thought that, in an organizational setting, creates dysfunctional processes and barriers. We can be thankful that no such things exist in publishing, due to the human touch. If they did, the failure of official channels and induced reliance on personal favor would create a culture of submissiveness, preventing the industry from achieving the diversity, efficacy, and vibrancy for which it is universally applauded.
The proliferation of AI-based decision making will lead to a zero-sum writing culture. One good thing that can be said about writers is that they have no issues with jealousy or comparison. Ask a published author you meet at a bus stop for an introduction to his literary agent, and you will surely get one. In fact, he’ll dial her up and start a three-way call right there. Also, trivial slights never evolve into career-arresting grudges. That simply doesn’t happen in publishing. But all this would change if the industry became more, for lack of a better word… numerical. Imagine that Random House submits your manuscript to SudoGrade and you get a 99. Great! But that 99 means nothing unless someone else gets a 98. And did some jerk out there get 100? You’ll find this hard to believe, but this could keep authors awake at night. It’s best we not open this can of worms. Instead we must protect the pay-it-forward, everyone-helps-everyone culture of publishing that exists now.
It’s called “traditional publishing” because what’s traditional works. It was not an AI who signed Kafka. Do you think GPT-DifferenceEngine, with its sixty-six parameters stored in bronze cylinders, could have appreciated The Metamorphosis? No, the man hustled his way to fame, fortune, and a long life. John Kennedy Toole? Who would have heard of him, had it not been for the forward-thinking humans who immediately grasped what he was doing? Also, needless to say, traditional publishing only gets better every year—debut authors get more support, marketing budgets grow, and useless barriers are removed rather than added. An industry that so reliably gets everything right can afford no tinkering.
So please, for the love of God, put down that GPU and write the best damn query letter you can.