When Amazon released the Kindle, I was one of the first customers.
I had a first gen iPad and have read e-books almost exclusively ever since.
Heck, I left my job as a literary agent to go work at a tech company. I’ve long been techno-optimistic and have embraced new developments eagerly. I truly believe that on the whole technology has solved more problems than it’s caused. I wouldn’t want to live at any time in the past.
So I’m finding my personal reaction to ChatGPT and the rise of generative artificial intelligence a bit confusing: not only do I fail to understand the hype, but all I see are net negatives for the book world.
The hype: “good enough” and faster everything!
For a summary of the current hype machine for AI, a good place to start is Thad McIlroy’s recent editorial in Publishers Weekly, where he argues that AI will revolutionize, well, literally everything. It’s right there in the second paragraph: “I believe that every function in trade book publishing today can be automated with the help of generative AI.”
McIlroy makes a key proviso early on about the predictions to follow: “I’m going to be talking about ‘good enough’—about what people will accept, what they’ll buy, and what they’ll actually read. I’m not going to claim that Formula One publishers won’t be able to do a better job than AI on many of the processes described below.”
He goes on to describe revolutions-to-come in copyediting, acquisitions, design and production, and distribution and advertising (arguably this is already here), but his initial proviso is the crux of it all.
And I would ask this: Do we really need more books that are just “good enough”?
Everyone who has played around with ChatGPT can see the limitations for themselves. No one I know thinks ChatGPT can currently do much of anything better than a human being. The promise that McIlroy and others offer is that we’ll gain efficiency and produce yet more books that some people will read, with the probably-correct presumption that the majority of readers aren’t particularly discerning.
But what, exactly, does that solve? Book distributor Ingram has twenty million titles available. There are hundreds of thousands, if not millions, of titles available for $2.99 or less on Amazon. Does anyone really think the problem with our content landscape is that we don’t have enough options?
Sure, if you’re the CEO of a publishing conglomerate, maybe you’re salivating at eliminating those pesky copyediting and cover design line items before your next quarterly earnings call. Amazon and Ingram likely care not a whit if their databases explode from millions of book titles to hundreds of millions.
Tell me again what this solves for writers and readers?
But what about future improvements?
Whenever I express skepticism about the future of AI for text and storytelling, the AI-optimists in my life say something along the lines of “Yes, it’s crap now, but just you wait!”
Again, I’m skeptical.
As others have pointed out, artificial “intelligence” is a misnomer because large language models are text predictors that don’t understand anything. They can “write” plausible-sounding sentences, but there’s no underlying logic to be found. They might return the right answer, but no one is quite sure how or why. They “hallucinate” fake information into existence merely because that order of words sounds plausible.
In one of the absolute best articles about the rise of ChatGPT, Ted Chiang compares the process to an ever-blurrier JPEG and an AI’s “hallucinations” to artifacts of lossy compression. In other words, ChatGPT regurgitates a lesser version of the original that (in a familiar refrain) might look “good enough.” It’s not creating anything better. It certainly is not understanding anything.
Which leads me back to my skepticism about even future versions of large language models somehow surpassing humans as storytellers: Can you have insight without understanding?
Generative AI can clearly produce wonders with images and deepfakes and coding.
For storytelling, my personal bet is that it’s a technological cul-de-sac. You’ll get blurry versions of books that have already been written, you might get the occasional “even a stopped clock is right twice a day” accidental moment of brilliance, but until the underlying technology gains actual intelligence and understanding, I fail to see how it’s going to result in something better.
Count me out (for now)
Some of the writers who have been espousing the benefits of AI point to escaping some of the drudgeries of the process, like creating outlines and schedules or idea generation when you’re stuck.
I’ve been personally reluctant to rely on artificial intelligence for anything related to writing because I don’t want to inject B+ material (at best) and total garbage (at worst) into my workstreams. (Not to mention having plenty of concerns over what the companies would be doing with my intellectual property.)
I’m particularly skeptical of the idea behind Sudowrite and other AI-assisted writing tools that promise to help you write a book by filling in the gaps if you just give them some overall shaping ideas.
Writing the “connective tissue” in novels is indeed extremely difficult. It’s hard to create original reactions, move characters seamlessly from Point A to Point B, fill in vivid physical description, and strike the right balance with context and world-building.
It’s obviously tempting to wave a magic wand over the drudgery. But that’s truly where the magic in books is created. It’s not your big high concept ideas, which are the absolute easiest things to come up with and are a dime a dozen. The description, characterization, the sense of movement… that’s where the great writers shine.
And at the end of the day, if my only goal was to make money and be endlessly efficient, I wouldn’t be writing books in the first place. The whole point of writing is to connect to deeper truths and to see one’s life and world more clearly. I truly believe that process changes the world.
In its current construction, as Ted Chiang wrote, ChatGPT only makes things blurrier.
What say you? Any predictions about artificial intelligence and the future of publishing?
Need help with your book? I’m available for manuscript edits, query critiques, and coaching!
For my best advice, check out my online classes, my guide to writing a novel and my guide to publishing a book.
And if you like this post: subscribe to my newsletter!
Art: Moagem de Cana by Benedito Calixto
Cinthia Ritchie says
Yes! Thank you for writing this, especially, “The whole point of writing is to connect to deeper truths and to see one’s life and world more clearly.”
J R Tomlin says
They’re nothing but aggregators of largely plagiarized data. I find nothing at all admirable about them. Just for my own information, I played around with one, and, no surprise, it churned out meaningless crap.
Christine E. Robinson says
I don’t know about ChatGPT for the future of publishing. But, I use it for research in writing historical fiction. It’s more specific than internet searches for similar information. A surprising outcome.
Nathan Bransford says
Are you not concerned it’s hallucinating the facts?
Tamara L Baker says
I would be. Ask the lawyers who relied on it recently. It made up cases to be used as citations.
Tamara L Baker says
My take: ChatGPT, when used to create manuscripts instead of edit them, is nothing more than rank, badly-executed plagiarism.
But of course publishers love the idea of ChatGPT because it dangles before them the prospect of total vertical integration: everything, including the alleged “authors,” is in-house, owned and controlled by the publisher. All they have to do is feed their AI programs material they either own or that is in the public domain and then let it generate soulless garbage they think they can sell. Finally, no more pesky human authors wanting real money for contracts!
Dana Fredsti says
It’s a huge “no” for me for all the reasons you and the other commenters named.
Gifford MacShane says
I hate to say it, but in the short run, I foresee hundreds of thousands of “good enough” or even “not good enough” books being self-published by those who think writing is a quick way to riches (more fools they!).
We’ve just about gotten over the hump of self-publishing meaning “poorly edited”: I think we’re about to enter the phase where it means “reasonably edited drivel”.
Thad McIlroy says
Hi Nathan,
Excellent post. You’re right, that is the question: “Do we really need more books that are just ‘good enough’?” Nope. We do not. But we get them every day, and have been getting them for years. And good enough is often, well, good enough for certain kinds of nonfiction and, perhaps, genre fiction. It’s not what we need, but it is what we get.
I see AI as a tool that will be applied to current publishing practices. We will get more of what we’re getting today, produced at lower cost to publishers. This is important, because the industry, as always, is only moderately profitable.
Over time… who knows!
I’ve always admired your hard work… you have made an enormous contribution to authoring and to publishing.
Nathan Bransford says
I appreciate you weighing in!
David vun Kannon says
I would say that applying generative AI to some tasks in the publishing pipeline might be non-controversial, certainly no different from a spelling checker, reading-level calculator, or style guide. As an editor, though, there might be a need for caution that what the bot is suggesting is really regression to the mean.
The appropriate fear is authorial reliance on an LLM, or the replacement of human authors with RomanceGPT. Mozart devised a system of composing music with dice rolls, but it didn’t replace Mozart. I think unique voices will still stand out, perhaps more so as the field gets flatter and flatter.