To distract myself from my most recent novel winding its way through the publishing process, I decided on a lark to write a screenplay in six weeks and enter an annual competition at my alma mater (as one does).
I also had recently heard that the latest update from Anthropic’s Claude (specifically the Opus 4.6 model) is really good at writing. Despite my lingering skepticism about involving A.I. at any stage of the writing process, I’ve been using my evolving screenplay to test A.I.’s skill at providing editorial feedback and gut-checking plot threads. Mainly to see if A.I. is coming for my job, but I also just can’t help my innate fascination with new technology.
And honestly? I’m shocked at how good the A.I. has gotten. I still don’t think it’s taking my job, at least not yet, but I have to admit I’ve found Opus 4.6 genuinely, unnervingly helpful.
In this post I’m going to give an update on my experience with A.I. as a writing tool, along with some cautions about diving too deeply into the A.I. waters, particularly if you’re planning to pursue traditional publication.
What are the ethics of using A.I.?
Let’s start with the big stuff: should we be using it at all?
Lots of people, for valid reasons, don’t think anyone should be using A.I. at all because the models are essentially energy-sucking, job-killing plagiarism machines. The publishing industry has a particular stake in this because the major models were trained on pirated books, and there are numerous lawsuits pending.
While I fully support the lawsuits and believe authors should be compensated, in some ways the centrality of books in the lawsuits is more due to quirks in copyright law than truly representative of the massive theft that underpins generative A.I. My books represent a few hundred thousand of my words, but that’s nothing compared to the millions of words I’ve written on this here blog that were likely plagiarized by all of the models.
Given they scraped virtually the entire internet, A.I. was built on the back of everyone who has ever posted anything online. In other words: all of us. And if we had a government that was interested in doing what’s right, these A.I. models would be treated as a public commons because they wouldn’t exist without all of our unpaid labor. These A.I. companies don’t just owe book authors, they owe everyone.
Sadly, I’m not holding my breath on that. And since it seems clear A.I. is here to stay, I at least want to stay abreast of what’s coming. The powers that be are cramming A.I. down our throats so fast that it will soon be impossible to function without it, much as you can theoretically still be a modern person without a smartphone, but good luck.
Collective action against using A.I. seems exceedingly unlikely to work, which leaves governmental regulation as the only viable path toward a better A.I. future.
The publishing industry feels very strongly about A.I.
If you’re pursuing traditional publication, you should know that the median traditional publishing employee has THOUGHTS about A.I. that are far more extreme than the average American’s. If you want to get a sense of this, just read literary agent Alia Hanna Habib’s recent post on whether authors should disclose their use of A.I., to which she responded: “If you are using AI to write your book, stay out of my fucking inbox.”
But what about using A.I. for research? Alia: “You can go ahead and disclose that you used AI in the ‘research’ for your book; it’ll tell me I have one less query to respond to as I don’t consider AI research to be research.”
Her post has more nuanced thoughts than just those punchy one-liners, and I do recommend reading it in full, but there are a lot of reasons to be cautious about your use of A.I. if you’re writing a book.
In addition to publishing employee and reader resistance, which is always going to be multiplied by the industry’s longstanding penchant for technophobia, copyright law around A.I. is still extremely unsettled. While the U.S. Copyright Office has issued guidance that A.I.-generated work with “sufficient” human authorship may still qualify for copyright registration, the edges of what that means have yet to be fully tested. Just today, the Supreme Court upheld a lower court’s decision to reject copyright for a piece of art fully made with A.I.
I’m not a publishing attorney and this isn’t legal advice, but it seems to me you’re on much safer ground if you steer clear of using A.I. to write anything you put in your book.
Is A.I. actually good at writing now?
People just don’t really seem to want this to be true, so they tend to dismiss it out of hand, but A.I. writing has gotten pretty good. Just take a gander at the “Human vs. A.I.” test in the middle of this recent post on A.I. and memoirs.
To test how good Anthropic’s Claude Opus 4.6 and Google’s Gemini 3 Thinking models have gotten at writing, I fed them the first scene of my script and asked them to write the next scene. (I had no intention of using either scene; I was just curious.)
Interestingly enough, Claude initially refused, telling me “I appreciate the creative energy here, but I think writing the next scene for you wouldn’t actually serve the screenplay well… If I write the next scene, it’ll inevitably drift from that voice, and then you’ll spend more time retrofitting my version than you would have spent drafting it yourself.” Still, I insisted, then it complied.
Gemini’s draft was not very good. There were some one-liners that kinda sorta sounded good until you unpacked them, including one character referring to another as “the only man who can charm a stone into bleeding.” That line just doesn’t make any sense.
Claude’s scene? Not. Bad. At. All. It very, very eerily picked up on where I was already planning to go with the second scene, all the way down to one of the characters warning the other in a specific way to stay calm. Claude shades a bit toward a “Law & Order: SVU” quippy, slightly cheesy style that didn’t match the vibe I was going for in my script (Claude: “You walk in here looking like someone died. I walk in here looking like the reason to live.”) but the scene itself was good enough that the hair went up on the back of my neck. It was that unnerving.
Like I said, I am not planning to use A.I. to do any of the writing at all. Not for the screenplay itself, and certainly not for pitches and loglines either. Having a robot write for me isn’t writing, and it’s not why I’m doing this in the first place. This part of the exercise was just curiosity.
But how good is A.I. at providing feedback?
How good is A.I. editorial feedback?
Where Claude’s Opus 4.6 really excels is providing clear, quick, nuanced, and sophisticated editorial feedback. It has even helped me get unstuck when I’m struggling with story complexity and keeping track of what all the characters are/would be up to. I’m a bit chagrined and amazed that I’ve found it helpful as a quick form of a gut check.
But it’s not a straightforward process to get to the good feedback.
As Jane Friedman notes after Allison K Williams’ post assessing the models on editorial notes, A.I. feedback tends to have a bit of a flattening effect on your writing. Because it’s trained on what’s already been written, its default is to push you toward the mean, which risks chipping away at your “secret sauce,” the stuff that makes you unique.
In order to get the good feedback, I’ve found that you first need to help the model understand what you’re trying to do in the first place and communicate the style you’re going for. That takes an authorial vision that you can put into words. But once the model has that north star in mind, it does pretty well at giving feedback aligned with the goal to help you get there.
The style of the A.I. output is very encouraging, which is a bit of a double-edged sword. Do I know that when an A.I. is complimenting my writing there’s not any true thinking or feeling behind it? Of course. Does it still work on my psyche to give me a little boost of encouragement to keep going? Also (sadly) yes. If you want a cheerleader to push you along, go for it. Just don’t get A.I. psychosis while you’re at it.
But I’ve found the A.I.’s overarching positivity is most dangerous when it’s telling me problems are fixed when I know they’re not. Take it all with heaping grains of salt.
Be careful letting A.I. get in your head
I’ve been writing and editing stories for several decades now, and while there is always more to learn about writing, I’m pretty confident in my authorial and editorial visions. I believe this makes me a tad less likely to be blown off course when soliciting feedback from a robot.
Particularly if you’re just getting started as a writer, I’d be much, much more careful. It takes a whole lot of time, as well as trial and error, in order to arrive at a voice that feels firmly yours.
And the times we writers are most inclined to seek feedback are also often the times it’s least helpful. I can’t count the number of times I’ve delivered feedback to authors only for them to dash off a revision of the first chapter and immediately want a thumbs up or thumbs down on whether they’re now on the right track.
But the first quick sketch of a new draft isn’t an opportune time to get feedback, because typically the new scene needs time to marinate and further revisions to arrive at the actual next draft. When writing is still very much at the “wet clay” stage, it’s a time when you’re very vulnerable to being blown off course by inviting another voice into your head.
If you rely on A.I. like a crutch, and if you use it too early, you may never arrive at writing that feels truly yours.
Is A.I. as good as a human?
This current crop of A.I. still isn’t coming for my job as an editor.
How do I know? I’m 1,000% going to seek out feedback on my screenplay from a qualified human when I’m finished. While Claude has been distressingly helpful as a beta reader, it’s still not as good as an experienced human with a good eye.
Still, I think we’ve reached the point where A.I. feedback may have a place in the process.
I understand all the reasons why agents and editors are anti-A.I., and I’m deeply fearful about what the deluge of A.I.-generated submissions is going to do to the industry. I worry about the industry going back to being a walled garden where a referral is the only way in, and the consequences of that are going to fall most heavily on marginalized and less-connected authors.
But the market is so competitive that good feedback prior to seeking publication is essential. And feedback from someone like me, a published author with industry experience, is expensive (sorry!!). Unless you’re reasonably wealthy or have a lot of time for networking, it’s extremely difficult to get the kind of feedback that can help you get a leg up in the process.
I’m certainly not going to argue for A.I. as a democratizing force for literally anything. I’m horrified by what the powers that be are up to.
But if you’re an author who can’t afford an experienced editor and can’t find quality feedback?
A.I. feedback is better than nothing, and I’d be lying if I told you otherwise. But consider your experience and the climate in traditional publishing before you go in too deeply.
Have you experimented with A.I. editing? I’m curious to hear what you think!
Need help with your book? I’m available for manuscript edits, query critiques, and coaching!
For my best advice, check out my online classes, my guide to writing a novel and my guide to publishing a book.
And if you like this post: subscribe to my newsletter!
Art: Train by Edward Mitchell Bannister
Hi, Nathan. I read your post with great interest. So far, I’ve not used anything more sophisticated than Grammarly and Hemingway, but I was fascinated by your findings.
Personally, I use online critique groups where real people critique your work. I can’t afford a professional editor, as you say, so this is my go-to solution. (I was so pleased that you acknowledged that not all writers can afford professional editing, as we are always told it’s a ‘must’.)
The publishing industry is able to filter out A.I.-produced books, but what about self-publishing? Are there any safeguards there? I worry about A.I.-written books of poor quality flooding the market.
Very thoughtful observations — both yours and Alia’s. I, like you, am fascinated by new technology, though I pretend to be cautious and wary. I use Claude more than other AI tools and have found uses for it that go beyond writing itself.
I wrote 54 chapters of my novel as separate files only to learn that my editor wanted them combined into a single document. No small task, especially with individual titles for each chapter and a need to apply uniform styles and page breaks. I assigned it to Claude; even this large language model took about ten minutes to complete it while I read my email.
I also discovered that my manuscript contained a mixture of straight and curly quotes. Grammarly can fix that, but it’s a laborious process across 79,000 words. I assigned the task to Claude, and the job finished in a few minutes. I rely on Claude and similar AI tools for technical corrections of this kind.
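The straight-to-curly cleanup described above can be sketched in a few lines of Python. This is a rough, hypothetical heuristic for illustration, not the actual process Claude used, and real manuscripts have edge cases (nested quotes, leading apostrophes like ’tis) that it won’t catch:

```python
import re

def smarten_quotes(text: str) -> str:
    """Convert straight quotes to curly quotes using simple heuristics."""
    # Apostrophes inside words (don't, Banks's) become right single quotes
    text = re.sub(r"(?<=\w)'(?=\w)", "\u2019", text)
    # Quotes at start of text or after whitespace/open brackets are openers
    text = re.sub(r'(^|[\s(\[{])"', "\\1\u201c", text)
    text = re.sub(r"(^|[\s(\[{])'", "\\1\u2018", text)
    # Anything left over is treated as a closing quote
    return text.replace('"', "\u201d").replace("'", "\u2019")
```

Running it over each chapter (or the combined manuscript) normalizes the mixture in seconds rather than requiring a word-by-word pass.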
I have been working on a short story of approximately 9,000 words and asked Claude for developmental editorial feedback. Based on Claude’s suggestions, and my own judgment, I made significant revisions. Claude offered useful ideas, a summary, and thoughts on publication. Even so, when I sent the manuscript to my human developmental editor, she returned nearly three pages of notes. The feedback was more thorough and more insightful, though it meant considerably more work for me.
Now for a trickier point. I recently finished Iain Banks’ “Complicity,” a novel that grapples with AI ethics and morality but doesn’t let humans off the hook for the same failings. Deep within the story comes the argument that AI cannot possess ethics or morality because large language models lack empathy and guilt — and that no amount of programming can instill those qualities. People who lack both empathy and guilt tend toward psychopathy, though one could argue the same has been true of the super-wealthy and super-powerful throughout history. Perhaps AI is no worse, in that regard, than what has existed since the time of the Pharaohs and Romans, not to mention current affairs.
Full disclosure: Claude revised my original, but with one error—changing the possessive Banks’ to Banks’s.