Literary Agents Don't Read: How I Proved It, and Why It Matters
I didn't expect the data to be this devastating.
Do literary agents, the people trusted by the book industry to find talent, read in good faith? Thousands of writers have asked this question over the years, often in silence, out of fear. I wanted to resolve it definitively and with statistical rigor, so I sent 200 of the top literary agents in the English-speaking world an open letter: an abridged version of “How to Make AI Write a Bestseller—and Why You Shouldn’t,” a satirical how-to guide that double-crosses its own condemnation by describing methods that would actually work.
The essay was chosen for a simple reason: AI is the hottest topic in publishing today, relevant to anyone who holds or seeks a career in the industry. In “How to Make AI Write a Bestseller,” I join the defense of artistic fiction against the onslaught of commercial, vacuous nonsense (“AI slop”) by mocking the pointlessness of the endeavor, while also excoriating existing systems for their inadequacy in the face of the threat. The essay proves, as an added bonus, that I can write. As a test of whether agents read, it was an impeccable choice—if they did, I would find out.
To establish this study’s necessity and moral importance, I must first give some background about the book industry.
The Query System
Traditional publishing’s credibility with readers rests on faith that authors chosen for publication are not the recipients of nepotistic favors but, rather, the winners of a fair nationwide writing tournament.
Outsiders to the book world tend to think getting published works like this:
Finish a book.
Try to get it published. If it’s good, it will be.
Is this accurate? No. Not even close. There are several intermediate processes one must navigate. The first is the query letter. It’s almost impossible to get published without a literary agent, you can’t get a fair read from an agent unless you make a case that you deserve one, and the only method of doing so available to hoi polloi is the query letter, a 300-word pitch. Serious writers hate this institution—nothing is more insulting to one who values the written word than having to use it for this sort of crap—but literary agents claim there are too many talentless literary lottery players out there to give everyone a fair read, so the query letter seems to be something we’re stuck with.
“Indubitable Reflections is a complete 69,420-word slice-of-life coming-of-age semi-autobiographical fiction novel in which…”
The AI Threat
Publishing is hierarchical. There might be ten thousand people calling themselves literary agents, but only a couple hundred can demand fair reads of their clients’ work in high enough places to get serious deals made, and a bad book deal leaves the author with a weak sales record, which is worse than none. It’s not about skill, talent, or diligence: the difference between the few agents who can really help authors and the rest is not that the former are inherently better people, but that they have connections and clout within the industry. When these people say their slush piles are unmanageably large, they’re probably not lying, because so much traffic comes to so few choke points. The query letter is the proposed compromise. Might AI, which can truly read everyone and make queries obsolete, do better?
Artificial intelligence inspires strong feelings in publishing. There are two reasons for this. One is the perceived devaluation of creativity. Language and imagery that are by no means excellent, but passable for commercial purposes, can be mass-produced at near-zero cost. This is not a new problem, though. In this regard, AI is replacing overseas content shops, whose existence reminds us that capitalism has been devaluing creativity for decades. In books, this devaluation goes back to publishing’s acceptance in the 1930s of the consignment model, which chain bookstores have been abusing ever since, doing incalculable damage to literature. The AI threat is not about what technology might do in the future, but what it reveals about us already. We’re in one of those bad horror movies that overplay the twist: “The real monsters aren’t the dickwolves at the door, but the humans in the house.”
Publishing’s other concern about AI is that it will be used in bad faith. It will be, but the threat to serious literature is overblown. My essay, “How to Make AI Write a Bestseller,” describes methods that would work, but with three major drawbacks:
It would take longer than simply writing a novel, and be a grueling process.
You’d need to learn craft to know which LLM outputs to use and which to discard.
This is where the real abasement lives: You still have to sell the damn thing.
Serious writers decry all the non-writing work the career involves: in trade publishing, querying and begging for favors; in self-publishing, navigating hostile, venal, bot-ridden online venues (or “platforms”) that get worse every year. The writing is the only part of the job that’s rewarding. Why would anyone use AI for that, as opposed to all the rest?
“How to Make AI Write a Bestseller” mocks its notional audience—per title, artistic charlatans abusing technology—while refusing to leave unscathed traditional publishing’s older, low-tech versions of the same tricks. Like Blues Traveler’s “Hook,” it mocks grift and inattention in ways only attentive people will appreciate.
The economic conditions of publishing are vicious and complex. I won’t claim there’s a clear villain. If there were one, it wouldn’t be literary agencies. I used them for the antiquery experiment not because they are most deserving, but because they are the first-line curators. If they don’t read in good faith, the rest of the chain doesn’t matter.
Hazards of Publishing
Sign the wrong literary agent, and you’ll get the standard-issue deal (if you get published at all) with a token advance, no real marketing, and the expectation that you’ll accept “developmental edits” such as:
cut the word count by 35,000—you have two weeks to do this.
make the main character more/less ethnic.
add a wish-fulfillment love trapezoid.
replace the pirates with dickwolves—phallo-lycanthropy is in.
You won’t get any important reviews, you’ll be spine-out in bookstores, and your inevitable low sales will be all your fault. Your agent will stop returning your calls. The end.
It used to be that filler—potboilers, penny dreadfuls, pulp novels—gave authors a way to prove to editors they could meet deadlines and take direction, earning the credibility to publish serious work in a year or two. “Midlist” referred to prestigious but noncommercial work—advances were not opulent, but livable; good standing in publishing was assured, regardless of sales—that publishers relied on to sustain cultural credibility. That’s not the case anymore. The lines are all blurred. Ninety percent of books are still treated by their publishers like pulp filler, but no one’s allowed to say this, because every book has to be transformative and important… which means none are. “Midlist” is like a neighborhood whose boundaries have been stretched by realtors to include the slums where junkies pad their winter coats with crumpled-up newspapers and single-page memoranda entitled “Marketing Plan for Indubitable Reflections” that tell an author how to set up social media accounts.
Readers know something is rotten in the state of publishing, but the industry’s operations are opaque to them. They don’t know why fiction is less inspiring every year (the query system) or why pacing is getting so much worse (word count cutting) or why so few new authors in speculative fiction are being published at all (in-house projections of falling Hollywood interest) and they aren’t supposed to need to know that shit. We owe them; it is not the other way around.
The Slush Bubble
Querying is technically free. “Money flows to the author,” says traditional publishing in order to differentiate itself from the openly unscrupulous subsidy operators known as vanity presses. Those who have succeeded will tell you, however, that querying without paid preparation is a mistake. When it comes to authors’ faux pas, overreaches, and flops, publishing remembers everything. Only success is afforded the privilege of being forgotten. How much does querying cost, if you want a chance? Well, how much can you afford?
To start, a successful query must be personalized, because literary agents hate form letters (receiving them, not sending them). You must show that you’ve done some homework. What are the agent’s genre preferences? What does their current list balance look like? What is their proudest career achievement? Some of this can be learned online, but the best way is to hire an industry insider—a query coach, $200 per letter—who knows the answers. This will get that query letter, the one you’ve spent seventeen hours on, read into (and possibly beyond!) the fourth sentence. You only get one shot, so spend.
Conferences are an option, but you have to be careful. Go to an open-admission event, and you’ll stand shoulder-to-shoulder with other unpublished authors around the two junior literary agents who have the authority to verbally ask for pages, promoting your work to solicited or referral status and increasing your percentage chance of a fair read into the low double digits—unless they’re just trying to get rid of you, which, after three-fifteen, is probably the case. If you’d like thinner air, you can attend one of the invitation-only events with single-digit acceptance rates and prices over $4,000. Your parents still pay your airfare, right?
Last of all, you’ll want to hire an editor before you query. This is redundant work, because a traditional publisher will edit your book again (whether you like it or not) at no cost to you, but paying for editing before querying shows professionalism—that you don’t mind small expenses if they’ll make your prospective agent’s life easier. The editors whose names actually get pages requested run about 12 cents per word—five figures, for some manuscripts—so, if you’re on a budget, you might prefer a “manuscript assessment” at half that price that could yield a pull quote that gets an agent to read you… unless, of course, the assessment is that you need to hire a developmental editor. (“And I know just the one.”)
I do not have a PhD in economics, but I am going to make a bold statement. I don’t think authors do this stuff because they hate having money for food or rent. Instead, I believe they do it because they suspect—maybe they know, and maybe it can be measured—that the only way to get a fair read by someone who can get them published is to buy one.
In the late 2000s, I was a quant trader, so I had a front-row seat to the shitshow as dodgy mortgages tanked the global economy. In the 2010s, I watched friends lose (and occasionally make) six- and seven-figure sums in cryptocurrency. I know what financial bubbles look like. Today’s query scene has many similarities to a speculative bubble, supported by false hope and unrealistic promises, but with a distinctive injustice in its being fueled not by participatory greed but rather by authors’ desperation in a world where being read at all is treated as a major favor.
Many writers will tell you that literary agents don’t read. That, at best, they skim. That they are too biased and perfunctory to recognize talent. That they enforce arbitrary protocols (e.g., long periods of status waiting, submission “guidelines” that are actually orders) to test for obedience. That they get their friends published in high places and ignore everyone else. That they officiate a query-industrial complex that has become vanity press with extra steps. This is damning if true. Is it? Shouldn’t we let literary agents tell their side of the story? Why don’t we ask them, point blank, “Do you read?”
This is what I’ve done. 200 times. And now I’m going to publish the results.
Why Nobody Knows (Until Now)
For a typical book, the number of agents who (a) represent its genre, (b) can get it properly read within publishing, and (c) are open to submissions, is small: between five and ten at a given time. Is N = 10 a valid statistical sample? No. If you query ten agents, bad results could be just bad luck. Most people who try to get a book published are left with no idea whether they failed or were failed by the system. We live in the dark about this. The world deserves light, I decided.
All methods of inquiry have flaws. When it comes to opaque human processes, there’s no analogue of the Cavendish experiment. The findings of an investigation in 2025 might not be true in 2040, and may not have been true in 2010. Also, we are forced to observe from a distance, because the people being studied, if they knew it was happening, would conceal exactly the things we want to learn. The more noise there is, the larger the sample must be. At N < 50, it would have been impossible to get conclusive results. At the same time, using more than 200 would have meant including non-top agents, making the results less relevant to publishing as it truly runs.
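To make the sample-size point concrete, here is a minimal sketch in Python. It borrows the 3.15% engaged-response rate that the null hypothesis in the Measurement section below predicts, and shows how the probability of total silence shrinks as N grows:

```python
# Probability of zero engaged responses if the null hypothesis
# ("agents read") holds, at the 3.15% response rate derived below.
# As long as this probability is large, silence proves nothing.
p_null = 0.0315

for n in (10, 50, 100, 200):
    p_silence = (1 - p_null) ** n
    print(f"N = {n:3d}: P(total silence | agents read) = {p_silence:.4f}")

# N =  10: 0.7261 -- silence is the expected outcome; proves nothing
# N =  50: 0.2019 -- still entirely consistent with "agents read"
# N = 100: 0.0407 -- borderline
# N = 200: 0.0017 -- silence is damning
```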
This study is not designed to answer, “Does querying work?” That would require a much more elaborate experiment. This one only tests, “Do agents read?” If we find out they don’t, then querying doesn’t work and further study is unnecessary.
Crafting the Antiquery
As I mentioned, y’ain’t allowed to just send a book to a literary agent and expect to be read. You pitch a query, hoping to rouse interest by apologizing for your existence as a writer. The problem, from an experimental perspective, is that there’s no such thing as a truly good query letter. Bad ones exist, and aren’t rare, but the ugliness of what a query letter is—300 words of bloodless, conformist, obsequious prose—cannot be overcome by skilled writing.
In fact, the limitation of the query format has been proven recently by AI. By the standard of serious literary writing, AI prose is not impressive. Real writing is daring; AI is not. Real writing mixes styles fluently; AI either sticks to one or drifts aimlessly. On the other hand, an open secret in publishing is that ChatGPT is so good at writing query letters that agents have had to ban the practice. Is it any surprise that large language models have mastered bourgeois communication? They are literally soulless.
Bad queries stand out; successful ones are all identical. ChatGPT does not write at the “I have to know who this person is” level. Its mastery of query letters proves that no query can be crafted so well that it cannot be ignored. This is probably the format’s real purpose.
To test whether agents read, I had to send them… serious literary work. Not a groveling 300-word pitch, but also not some shitty insult, because I’m sure they get those too. I had to use something that proved its author was a serious writer. This breaks from the query format… a little bit. On the other hand, isn’t this why they got into publishing in the first place—the chance to see real literature in the wild, before anyone else knows what it is?
It had to be my own writing. If I used someone else’s, their silence could be excused on account of non-originality. It had to establish itself as superior writing to any query letter before its reader figured out that it wasn’t a query letter. It had to be relevant to every recipient—my fiction is solid, but not to everyone’s tastes—and no topic is as relevant to publishing today as AI. Still, as written and published on May 28, “How to Make AI Write a Bestseller” is not an ideal test of whether agents read. It has a few issues:
Too long. It’s impossible to show literary craft in the query-letter format, but 3,500 words is long for a cold email. Nonresponse to a 55,000-word “query” (a manuscript, in effect) would prove nothing about whether agents read, so brevity keeps the test fair. I set 3,000 words as an upper limit.
Profanity. The essay uses F-bombs, which aren’t appropriate for a cold email sent to 200 strangers, and which might trigger automatic filters. I had to take them out.
Technical jargon. “Fractal boundary” and “Naive Bayes attack” are machine learning terms with little value to a general audience.
Dated and obscure references. “Two AI books at the same time” still works for those who haven’t seen Office Space, but few under 30 are going to catch an Arrested Development reference. I did keep a few obscure eggs. Easy done, Ilana.
So I cut it to 2,800 words. I replaced “Naive Bayes attack” with “a perversion of the Socratic method.” I removed unnecessary profanity: “shitpost” became “trashpost.” Unfortunately, I couldn’t excise everything meriting a content warning—at the implied story’s climax, the blurb system is deservedly stained—so “PG-13” was as low a rating as I could get.
Methodology
The sole weak point of this study, methodologically speaking, is that a random sample cannot be usefully defined. There might be ten thousand people calling themselves literary agents, but only a few hundred count in publishing. I won’t disclose my selection process, because I don’t want to risk doxxing recipients, but it was heavily biased in favor of choosing agents with significant track records, with about 60% coverage of those considered to be the top hundred.
Since the antiquery was not a query letter, but a missive of real literary merit, I included people who were closed to queries. In the case of open agents, I used email or the agent’s QueryTracker form, and represented the communication as a query, not because I wanted to be deceptive, but because I had to give each volley the best possible chance of being read. To closed agents, I sent the letter with an honest title, “How to Make AI Write a Bestseller (Satire)”, the parenthetical note included to indicate that this was not AI grift spam.
I personalized the letters and sent them one at a time in twenty batches of ten; sending a mass email with 200 recipients could trigger spam filters, in which case nonresponse would prove nothing. To dodge simulsub filters, I limited my sends to 30 per day, and tried to avoid (though I made a couple mistakes) targeting the same agency more than once per 24 hours. I made sure to reach no agent more than once during the experiment. Research began in mid-May. Contact took place over June 1–14, 2025, with the data collection window extending to July 12 at 11:59pm.
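For anyone replicating this, here is a minimal sketch, in Python, of the pacing constraints just described. The daily cap and the agency-spacing rule come from my protocol; the function and data structures are illustrative only; my actual sends were prepared and dispatched by hand.

```python
from collections import defaultdict

DAILY_CAP = 30  # max sends per day, to dodge simulsub filters

def schedule(recipients):
    """Greedily assign each (agent, agency) pair the earliest day that
    stays under the daily cap and avoids hitting the same agency twice
    in one day. Each agent appears once in the input, which enforces
    the one-contact-per-agent rule by construction."""
    sends_per_day = defaultdict(int)  # day index -> number of sends
    agency_days = defaultdict(set)    # agency -> days already used
    plan = []
    for agent, agency in recipients:
        day = 0
        while sends_per_day[day] >= DAILY_CAP or day in agency_days[agency]:
            day += 1  # spill to the next day if either constraint binds
        plan.append((day, agent, agency))
        sends_per_day[day] += 1
        agency_days[agency].add(day)
    return plan  # list of (day_index, agent, agency)
```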
I had to set policies for standardization:
Name and Email: I used my real information. Using an already-published essay under an alternate name would represent its sender as a plagiarist, giving a valid reason for non-response.
Author Bio: It shouldn’t matter for this. “N/A”
Social Media: Agents don’t ask for this because they want to see your brilliant tweets; they do it to count followers. “N/A”
Title: I gave the essay’s honest title, “How to Make AI Write a Bestseller.”
Genre: I used the most truthful one the agent accepted.
Word count: To get through filters, I needed a figure that would pass, so I used a number near seventy thousand.
“Enter a one sentence pitch for your book. If this is already in your query letter, put it here again as well.” — This is a direct quote from a QueryTracker form. I am not making this up.
I did the best job I could of following submission guidelines, but I had 200 people to reach. My guess is that I fulfilled guidelines for 160–170, and was mostly compliant for 190+. Of course, some might say the submission was inherently noncompliant, as I sent these people, instead of a query letter, something they might actually enjoy reading. To this objection, I reiterate that this experiment was designed to test whether agents read—not whether querying works. I concede that a literary agent’s job is not only to read well enough to spot literary talent, but also to assess suitability for complicated business deals. The first half of this, I tested. The second is beyond scope.
Ethics
This study’s purpose is to critique systems, not humiliate individuals. Therefore, I will not disclose the 200 recipients’ names. If there is press interest in this experiment, I am willing to share data (e.g., responses and times, with screenshots) under strict nondisclosure guarantees. Sympathy for literary agents is in order; they do an unpleasant job on behalf of a system that, if their incomes did not depend on it, they would never defend.
I cannot discuss opening rates, because I don’t have the data. Out of respect for recipients’ privacy, I did not include tracking pixels. Although I followed social media to watch for evidence (and there was none) that literary agents had become aware of this experiment, I gave no indication of its existence throughout the data collection window. I kept track of recipients, but only to ensure no one was emailed more than once.
Due to the essay’s PG-13 nature, I skipped agents who indicated that they represent exclusively children’s, women’s, or religious fiction, or who said they did not want material with adult themes. The content warning prefacing the antiquery was included only for extra safety.
Having discussed AI, I should make my stance clear. I don’t use it to write. My writing is better than AI slop; why would I cheat? I will sometimes use language models to assess tone or professionalism, or for a quick copy edit, but I don’t have much respect for people who skip their entire creative process. No AIs were harmed in the making of this.
Measurement
The variable I measured for this study was, for each agent: Was there strong evidence of readership in good faith? Scathing critique and high praise would both count as 1. Silence or a form letter, or a response without evidence of reading, I graded as 0. A single zero doesn’t mean there was not a read in good faith—on an individual level, this is impossible to prove, and this uncertainty is why I needed a large sample.
I didn’t want to rely on subjective interpretation—mine or anyone else’s—to grade responses, because (a) many authors have been trained to accept crumbs of agents’ attention with gratitude, and (b) I did not want to go to the other extreme and be unreasonably bitter or unfair. Needing a system that was free of personal bias, I used the following prompt to Claude Opus 4, always in a fresh session:
I want you to read correspondence from a literary agent regarding a 2,800-word essay. Give me the probability, based on the response, that the agent spent enough time on the prose to evaluate its true literary quality. (This would take about 10 minutes.) If it's a form letter, rate it at or near 0. If it shows deep engagement, 1.
If Claude’s assigned probability was less than 0.5, the grade was zero; 0.5 or higher, one.
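A minimal sketch of that grading step, assuming access to Claude through the Anthropic Python SDK. (I ran the gradings by hand in fresh chat sessions; the model ID and the bare-number parsing instruction appended to my prompt are assumptions made for this sketch.)

```python
import anthropic

GRADING_PROMPT = (
    "I want you to read correspondence from a literary agent regarding a "
    "2,800-word essay. Give me the probability, based on the response, "
    "that the agent spent enough time on the prose to evaluate its true "
    "literary quality. (This would take about 10 minutes.) If it's a form "
    "letter, rate it at or near 0. If it shows deep engagement, 1."
)

def grade(agent_response: str) -> int:
    """Return 1 if Claude judges engaged readership at p >= 0.5, else 0.
    One standalone API call per response stands in for a fresh session,
    so earlier gradings cannot bias later ones."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    message = client.messages.create(
        model="claude-opus-4-20250514",  # assumed ID for Opus 4
        max_tokens=16,
        messages=[{
            "role": "user",
            "content": (GRADING_PROMPT
                        + "\n\nRespond with a bare number between 0 and 1."
                        + "\n\n---\n" + agent_response),
        }],
    )
    probability = float(message.content[0].text.strip())
    return 1 if probability >= 0.5 else 0
```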
With 200 binary outcomes, we derive a count, out of a possible 200, of engaged responses. To interpret it, we must define a null hypothesis, or a model of the world in which the claim being tested (that agents don’t read) is false. In short, the null hypothesis is, “Agents read.” In more words: they are as diligent about reading submissions, as eager to find skilled writing, and as adept at doing so, as they claim to be. But this is still a qualitative assumption. To make it numerical, I had to use the most conservative (that is, lowest) numbers that could be justified, like so:
70% of emails will be read by July 12; this accounts for misfiring spam filters, vacations, and outdated email addresses.
60% of those emails will reach an agent; a first-summer intern can be forgiven for deleting a high-talent antiquery over a guideline miss.
50% of these literary agents are good at their jobs—that is, able to recognize skilled writing with 75% recall.
20% of those who do recognize its quality will follow up and respond with some evidence of engagement; this is lower than we might expect for an ordinary query, but that’s because the antiquery contains no call to action.
These numbers are “made up,” but they’re the lowest defensible values if literary agents are as diligent as the industry wants us to believe. An email opening rate of 70 percent is quite poor, for example, but every factor was chosen to give as much leeway as can reasonably be offered. This null hypothesis, predicting a 3.15% response rate—or about six out of 200—assumes but forgives the moderate dysfunction that all institutions have. It doesn’t demand flawless meritocracy, an unreasonable expectation, but only that people perform the jobs they say they do at a basic level of competence.
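The funnel arithmetic, as a sketch anyone can check:

```python
# Null-hypothesis funnel: each factor is the most conservative
# defensible value if agents are as diligent as the industry claims.
p_opened    = 0.70  # email read by July 12
p_reached   = 0.60  # survives the intern and reaches an agent
p_competent = 0.50  # agent can recognize skilled writing...
p_recall    = 0.75  # ...with 75% recall when they can
p_responds  = 0.20  # engaged reader actually writes back

p_engaged = p_opened * p_reached * p_competent * p_recall * p_responds
print(f"Engaged-response rate under the null: {p_engaged:.4f}")      # 0.0315
print(f"Expected engaged responses, N = 200: {200 * p_engaged:.1f}")  # 6.3
```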
With 6.3 being the expected count of engaged responses:
9 engaged responses or more would be a “high pass” sustaining the null hypothesis (that agents read) and, in fact, suggesting it might have been too conservative.
5–8 engaged responses would be a “mid pass”—the null hypothesis is sustained.
3–4 engaged responses would be a “low pass”—weak evidence against the null hypothesis, but not strong enough to reject it per standard statistical practice.
1–2 engaged responses (p < 0.05) would be strong evidence against the null hypothesis that agents, in general, read… while showing that a small number do.
0 engaged responses (p < 0.0019) would provide very strong statistical evidence that agents don’t read.
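The p-values above fall out of the binomial distribution with n = 200 and the 3.15% null rate; a sketch using scipy’s binomial CDF:

```python
from scipy.stats import binom

n, p = 200, 0.0315

# P(X <= k): probability of k or fewer engaged responses
# if the null hypothesis ("agents read") is true.
for k in (4, 2, 0):
    print(f"P(X <= {k}) = {binom.cdf(k, n, p):.4f}")

# P(X <= 4) ~ 0.242  -- "low pass": suggestive, not conclusive
# P(X <= 2) ~ 0.047  -- below the conventional 0.05 threshold
# P(X <= 0) ~ 0.0017 -- very strong evidence against the null
```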
Results
Zero.
Of the thirty-two responses, almost all were form letters, though I ran each through Claude to be sure. Three merited closer attention. The first I found confused and suggestive of skimming; using the prompt above—and nothing else, not wanting to bias the grader with my own suspicions—I asked for a grade, and Claude rated it at 0.1 (“[t]his reads like someone who glanced at it, got confused about what category it fits into, and didn't invest the time to actually assess its literary merits.”). The second was a form letter with a parenthetical insert recognizing the antiquery’s topic and tone, but no evidence of deep reading—another 0.1, per Claude, so zero. The third, in which the respondent admitted they were “not sure if you wrote this, or AI did,” was graded by Claude at 0.2 (“didn't engage deeply enough with the voice and style to form a clear impression […] reads like someone who spent perhaps 1-2 minutes skimming rather than 10+ minutes reading carefully.”). ChatGPT’s grading of that one was more severe; DeepSeek was kinder, at 0.3 (“a flicker of attention but falls far short of confident literary assessment.”).
Some of the form letters, in the context of the antiquery, were hilarious. There were those who boasted about their “full and active client list.” There was: “You should be aware that many ideas are generated by our employees and our clients or other sources.” (Nothing sustains bilateral professional dignity like, “In case you’re thinking about launching a frivolous lawsuit against us….”) There were references to “above-entitled project” and there were advertisements for writers’ conferences and there was “I didn’t quite fall in love,” which left me checking my records to make sure I hadn’t been hacked by someone who had replaced my essay with a marriage proposal. Evidence of reading was nonexistent.
We therefore have extremely strong statistical evidence (the closest thing to “proof” that exists in the social sciences, and so I will use this word henceforth) of widespread non-readership. I was not seeking representation—I wouldn’t be caught dead using the query process in earnest, especially not after this—but I was hoping, like one who discovers a working radio in the aftermath of a zombie apocalypse, for signs of life. Instead, I discovered that I could have sent these two hundred people random articles from a 1972 encyclopedia, and the responses would have been exactly the same.
The two hundred antiqueries were challenges. Prove me wrong. Show the world you still read. Show the world that literature still matters to you. Three responses showing engagement with the writing would have forced me to sustain the null hypothesis. One, even to berate the work, would have made the study’s result ten times less damning. Two hundred people had the opportunity to prevent today’s invalidation of their profession, and all it would have required was for a single one… to read. The thing they say they do.
It does not undermine this study that the essay sent was not a query letter. In the time it would take to recognize the format mismatch, a competent agent should be able to discern literary merit. Otherwise, why do they exist? Therefore, the study stands.
Would I Do It Again?
I didn’t expect to, but I found myself getting emotionally invested. It turns out that, even to conduct a study, querying literary agents is… awful. You are forced to spend time on people who, because you were not introduced to them by their prep-school buddies, will never spend any on you. I found myself wanting to do anything else, be anywhere else, be anyone else.
My brain knew I could quit at any time. My body felt the contempt directed at me—the insulting submission guidelines, the inappropriate personal questions on QueryTracker—and, for lack of a better word, absorbed it. I developed nausea and headaches. I dreaded personalizing and sending out each batch. I ran the numbers to see if 70… 110… 130… would be a sufficient sample size; I had to convince myself that my chosen target of 200 mattered.
I started checking my email out of an irrational belief that cause to end the experiment would arrive. “I found Farisa’s Crossing on Royal Road. I know how we can sell 750,000 copies. But lay off my colleagues.” (Yes, I have a price.) It would have ruined the study, but it also would have shown that a literary agent existed who was capable of putting readers ahead of apathy or ego. Of course, it didn’t happen.
After this, I feel nothing but pity for writers who must rely on this process. I can’t imagine how insufferable I’d become if I were struggling to afford food and had to sweat some Ivy kid’s “submission guidelines.” I’m glad my day job isn’t this.
No, I would not do this again. Still, I’m glad I did it, because the world needed it done. It was once merely suspected that literary agents do not read submissions in good faith. It is today proven. The numbers don’t lie. Don’t believe me? Then replicate this study. Improve on my methods.
Further Work
If you’re going to do studies like this, be ethical. Don’t publicize or defame individuals. Do not harass. Do not threaten. Literary agents are not bad people; they are good people who operate in a bad system. I mentioned having worked in finance—I have no moral high ground.
At any rate, literary agents are not the only ones who broke publishing. There’s a whole discussion about the consignment model, about chain bookstores and “co-op” extortion, and about algorithmic enshittification, in which they’re blameless. Most literary agents have not read for pleasure in the past twenty years because they too have been, just like writers, destroyed by the industry.
There are ethical follow-up studies that could be done.
Credential lift measurement: Test two versions of the same package, one with an elite MFA and claims of prior publishing experience, the other with no such credentials. My study proved apathy; yours could find bias.
The neglected classic: Query a novel that has won awards in the past. This has been done before, but not for a while. For maximal relevance, choose one published in the past twenty years—literary expectations change, so it doesn’t surprise anyone when old work, even if exceptional, fails to place.
AI slop infiltration: We’ve proven that agents don’t read. Therefore, you may be able to get an AI slop novel into the system at high levels, so long as you follow submission guidelines. Please don’t carry this too far. You could actually get a seven-figure deal this way, but readers will figure out what the industry didn’t, and be pissed. Back out before you do any real harm.
Read time verification: Get a disgruntled employee to track reading time, thus proving how little attention submissions get. Be careful not to break laws, but internal communications about authors, proving bias and nepotism, could be useful.
To be honest, though, the best solution for all of us, regarding traditional publishing, is to forget it exists. I’m not angry at it; I feel sorry for it. I just wish it didn’t take up so much space. Readers deserve better.
Conclusion
It’s easy to hate “AI.” Bad-faith uses of large language models are legion. But these are just tools, neither good nor evil, and could save us. Can LLM-powered full-text analysis tell us as much about a book’s literary merits as a deep read by a skilled human? No, not even close. Could it outdo the current system? Yes. To be useful, an AI doesn’t have to be Harold Bloom. It just has to beat query letters. I’ve taught undergrads who could build something better as a semester project.
What we currently have is less than AI. We have a system that knows nothing, perhaps because it was designed to know nothing.
Writers, all of us, have two antipodal insecurities. The first is the familiar one: What if I’m no good? What if I’m a self-gaslighting asshole? Even award winners feel this. After success, it becomes: Am I still good? Can I follow that up, or have I peaked and left that level behind me? This anxiety never goes away. Accept its existence—in a zen sense, really sit with it, and learn what it can and cannot do to you—but don’t ever let publishing use it against you. It will try.
The other insecurity: Shit, what if I actually am good? This one hurts just as much. No one wants to realize, at age seventy, that they were given real talent but wasted it.
In the latter context, we can discuss the real purpose, in 2025, of literary agencies. You might think the role of a literary agent is to be a deuteragonist, an ally who always has your back. No. Wrong. You’re thinking in 1970s terms. The purpose of the system is to keep you busy while forestalling real progress. You got a partial request. #AmQuerying. You got a full request! You can brag about it on r/PubTips. You signed an agent! That’s… kind of a sale? This parody of an academic ladder makes it feel like you’re really achieving something. And then… if the progress ever stalls out… you have… permission to quit. You tried. You sent out 197 query letters. You followed the rules. You followed all the rules. You spent twenty-three hours personalizing a letter that an intern deleted because your main character had the same name as their more successful older sibling or more outgoing friend. Or maybe you did sign an agent, but your book died on submission. Or maybe you were published, but not marketed, so you sold fewer than four hundred copies. You are hereby anonymously but honorably discharged. No one showed up for you, so you’ll let nobody down if you quit. You may now say, “You’re right, I am bullshit. I always have been.” It may or may not be true, but you were told this by those whose job is to know.
Who loses? Readers. Over the past two decades, they have seen a continual drop in the level of craft, skill, and talent in traditional publishing’s product. They don’t know why, and it’s not their job to know why. They are paying us, but they are disappearing, and who can blame them for not reading when not-reading is the example publishing’s finest have set? We cannot, if the world loses a generation of readers, make excuses. (“Focus Group D really liked the dickwolves.”) None of these indubitable reflections will matter.
My tedious experiment has taken a widely held suspicion, confirmed it as fact, and forced us all to decide what to do next. Today could amount to absolutely nothing. Or it could be the moment a generation gets serious. Readers are owed a system that makes serious efforts to deliver good books. They don’t have one.
Self-publishing? That’s the way literature is headed, no doubt. It’s too early to know whether that’s good or bad, but forgive my lack of optimism. To make self-publishing function, we’ll have to reverse enshittification. Like climate change, this is a problem that we as humans have no choice but to solve, for which we have an undefined batting average—zero out of zero—and that is likely solvable in the unconstrained case, but unsolvable under capitalism. If existing institutions refuse to let themselves be fixed, we must tear them down and build new ones. Do we care enough about literature—about readers—to do this? We should, and those who don’t should get out of the way.