The Great Automatic Novelizer

by Rebecca Baumgartner

Carpets…chairs…shoes…bricks…crockery…anything you like to mention – they’re all made by machinery now. The quality may be inferior, but that doesn’t matter. It’s the cost of production that counts. And stories – well – they’re just another product, like carpets and chairs, and no one cares how you produce them so long as you deliver the goods.

So goes one of the more biting passages in Roald Dahl’s delightfully mordant 1953 short story “The Great Automatic Grammatizator.” The story centers on a man we’d refer to today as a computer scientist. He has just finished developing a “great automatic computing engine” at the government’s request, but he’s unsatisfied. You see, Adolph Knipe has always wanted to be a writer. The only problem is, he’s terrible at it. Publishers keep turning him down, and it’s no surprise when we read that his current novel begins with “The night was dark and stormy, the wind whistled in the trees, the rain poured down like cats and dogs…”

However, Knipe has an epiphany one night. He’s already successfully built a computing engine that can solve any calculation by reducing it to the fundamental operations of addition, subtraction, multiplication, and division. Why couldn’t he do the same with stories? All he’d have to do is teach the machine English grammar and program in the parameters of each major publication’s style, and the machine would write the stories for him!


Thus begins his invention of the Great Automatic Grammatizator, a machine that can produce a story in the style of any of the best publications at the press of a button. Knipe works like a madman to build a prototype, then brings it back to the office to show his boss. After hearing how the machine works, his boss (a Mr. Bohlen) says, “This is all very interesting, but what’s the point of it?” Knipe explains how it could be used as a money-making tool, but the boss still isn’t convinced, knowing how expensive such a machine would be to build and run. Knipe spells it out:

“Don’t you see that with volume alone we’ll completely overwhelm them! This machine can produce a five-thousand-word story, all typed and ready for despatch, in thirty seconds. How can the writers compete with that? I ask you, Mr. Bohlen, how?”

As the project develops (and Mr. Bohlen’s ethical compunctions are quickly overcome), the two men devise a scheme to get writers to sign a contract stipulating that they won’t write another word and will give up the rights to their name in exchange for a lifetime of payouts. Predictably, writers don’t take kindly to the scheme, with one of them even chasing Knipe down a garden, wielding a large metal paperweight and hurling “a torrent of abuse and obscenity as he had never heard before.”

While reading “The Great Automatic Grammatizator,” I kept having to remind myself that it was written in 1953. It could have been written about ChatGPT, with only a few changes to update the technology from levers and gears to prompts and training data.


Knipe’s reference to overwhelming the market with garbage is uncannily similar to recent controversies over the glut of AI-generated books on Amazon. If you’ve searched for books on Amazon recently, you may have noticed the proliferation of “companion books” that purport to offer analysis of another book. I had assumed these were an even lazier version of CliffsNotes, but the overwhelming majority of them are actually generated by AI. According to the Authors Guild, “These AI-generated scam companion books rarely rise to the level of fair use, as they have little to no original analysis or commentary and are meant only to confuse consumers and skim sales off of the real books.”

And just like the paperweight-wielding author in Dahl’s story, writers are quite justifiably pissed. Some are outraged that Big AI scrapes the internet for their copyrighted material and uses it as training data without consent, credit, or compensation. Some take offense at dumbed-down AI writing-assistance tools that devalue and trivialize the craft of writing through features such as “emotion dials” – as far as I can tell, the equivalent of the Automatic Grammatizator’s “passion throttles,” which “somehow or other could transform the dullest novel into a howling success – at any rate financially.” Writers and readers alike seem to feel that generative AI is making the world of literature more like Adolph Knipe’s Literary Agency, the shadow company behind the Automatic Grammatizator, whose only goal is to churn out as many novels as possible.

There’s also the controversy unfolding within National Novel Writing Month (shortened to the silly acronym NaNoWriMo), a nonprofit that challenges aspiring writers to produce a 50,000-word manuscript each November. The NaNoWriMo leadership initially endorsed the use of generative AI tools to help writers meet their goals, even going so far as to say that those who object to the use of AI are classist and ableist (that post has since been taken down, but you can read an archived copy of the original).

That strange proclamation – which I’m sure had absolutely nothing to do with the AI company ProWritingAid being one of this year’s NaNoWriMo sponsors – blew up in their face. Their current stance is a giant awkward shrug that leaves the decision to use AI tools up to each individual writer, justified on the grounds that AI is “too big and too varied to categorically support or condemn.” It did not seem to occur to anyone that being “too big” might itself be reason enough to condemn, or at least to think carefully about, using such a tool. The lack of transparency about the organization’s own financial ties to an AI company also smacks of dishonesty, and many writers, sponsors, and board members have since cut ties with it.

Setting aside any one specific controversy, the idea of using AI to write is intriguing, and one that, as a frequent writer and even more frequent reader, I feel personally invested in. As a writing tutor by profession, I am also only too familiar with novice writers’ desire for a shortcut that will let them skip the work of reading, thinking, writing, and testing their ideas against the world and other people. It would be too easy, and not very helpful, to say that students today are uniquely lazy or dependent on technology, because the core issue isn’t unique to writing, or to AI, or to any particular age group.


Rather, I think the problem is our all-too-willing acquiescence to mediated experiences. We have trouble resisting the urge to outsource our thinking when something comes along that promises to think for us. This is one of the central ideas of Matthew B. Crawford’s book The World Beyond Your Head (2015). Mediation, generally speaking, occurs when technology not only acts as a go-between for us and the world but also, in crucial ways, determines how we are able to interact with that world and what possibilities we see within it.

One example is the way cars, as they have become more like computers on wheels, mediate our experience of driving. Blind-spot monitoring, back-up cameras, lane-drift alerts, and the like insulate us from the physical act of driving in ways that let us conceptualize driving as an activity no more dangerous or worthy of alertness than scrolling on our phones or watching a movie. With their focus on comfort, convenience, disconnection, and disembodiment, these features assume more of the burden previously borne by the driver’s skill and attention.

The point is not to try to live without mediation, which would be impossible and probably not desirable. The point is to be intentional in choosing which mediations to use, and to know when not to use them. Direct interaction with the real world has its benefits. Seeing and feeling things for yourself, even if it means experiencing inconvenience, has its place. Doing the work yourself is not just an aesthetic choice; it can be an ethical one, too. In other words, directly meeting the world as it is can give you knowledge that is different in kind from knowledge arrived at through layers of mediated feedback, and there are instances where this difference has a moral valence, such as when operating heavy vehicles.

Crawford emphasizes that this idea is not inherently anti-technology. Rather, he lays the blame at the feet of our modern consumer ethic, in which we want what we want now, with as little effort as possible. In this sense, technology is merely the handmaiden to our impatience and desire for quick fixes. Crawford writes that “disconnection – pressing a button to make something happen – facilitates an experience of one’s own will as something unconditioned by all those contingencies that intervene between an intention and its realization.” In other words, we want to press a button and make annoying stuff go away. That annoying stuff could be watching for pedestrians and keeping our eyes on the road, or it could be thinking critically and learning how to write. All manner of effortful cognitive tasks can be made to disappear.

It’s easy to see how tools like ChatGPT perform this function in our lives today. Technological mediation lets us skip the cut-scenes in the super-annoying real world, blurring the distinction between a creative impulse and a finished product in a way that fetishizes the finished product over the process of making it (“no one cares how you produce them so long as you deliver the goods”).

This is bad for art and bad for helping artists get better at what they do. It’s also bad for our moral development. Simply press a button, and a chipper, servile ChatGPT serves you up a slurry of repurposed content (or, in the colorful words of commentator and author Chuck Wendig, “artbarf”) to assuage your impatience, reinforcing and validating that impatience along the way. One popular writing program based on GPT-3, Sudowrite, promises to help you write an entire novel, in “your” voice and style, inside of a week. Someone who gets used to that level of output is almost certainly not building the skills that would make them a stronger, more creative writer – or a more patient, attentive person. They’re also eventually going to be alienated from their own creative output in a way that erodes the meaning of the act.


In Dahl’s story, the Great Automatic Grammatizator leads to this same kind of impatience:

“‘I want to do a novel,’ he kept saying. ‘I want to do a novel.’

‘And so you will, sir. And so you will. But please be patient. This is a very complicated adjustment I have to make… We’re going to do novels,’ Knipe told him. ‘Just as many as we want. But please be patient.’”

After Knipe pulls the switch, the machine spits out a ream of typed pages. “Congratulations on your first novel,” Knipe says. Mr. Bohlen, sweating from the exertion of playing the stops of the passion-gauge and pace-indicator and chapter-counter, says, “It sure was hard work, my boy.” 

We find this caustically funny, of course, because moving dials around is a piece of cake compared to the actual work of writing, and Mr. Bohlen hasn’t done anything at all. But the caricature shows just how eager we are to skip the hard parts of creative work and how desperately we look for reasons to justify the shortcut, output quality be damned. Students and novice writers often say things like, “But what if I only use ChatGPT to help me brainstorm ideas?” (As though deciding what you want to contribute to a topic were the least significant aspect of creating something!) “What if I just use ChatGPT to create a first draft and then tweak it to say what I want?” (As though first drafts weren’t a necessary step in figuring out what you think and what you want to say!)

These are the kinds of ill-founded questions that science fiction author Ted Chiang deftly addressed in a New Yorker piece last year titled “ChatGPT Is a Blurry JPEG of the Web.” The article is worth reading in its entirety, but his perspective on the writing process gets to the heart of what’s troubling about using ChatGPT this way.

“Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. 

…Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.”

For the purposes of this essay, I told ChatGPT to write me a novel. After a quick and somewhat disingenuous-sounding admission that the task might be “challenging,” it proceeded to spit out bland, middle-school-reading-level fantasy boilerplate about “a young woman with latent magical abilities, struggling to come to terms with her past and her destiny.” I had to peel my eyelids open just to read the first few paragraphs, which any fifth-grader could have surpassed in inventiveness and style without even trying.


Even putting aside my visceral distaste for bad writing, I struggle to see the point of this at all. We have plenty of writers and creative fifth-graders who can tell us stories orders of magnitude better than this. Like Mr. Bohlen in Dahl’s story, I don’t see why this needs to exist, except as a way for people to disconnect themselves from the uncomfortable experience of becoming a better writer, or as a way to exploit authors, or as a means of pouring sludge into the inner workings of culture as part of some Faustian bargain with Big Tech. At least in Dahl’s story the contract promised writers a lifetime payout. Tech giants like OpenAI are acting as though every writer has already signed such a contract, minus the compensation: writers aren’t even offered the chance to sell their souls. Their souls are simply taken.

Just as the world is its own best model, humans who enjoy stories are our best source of new stories. No one needs garbled artbarf. And we don’t have to accept that there’s no problem with putting a layer of distance between ourselves and the creative process. This remains true even when we acknowledge that imitation is a crucial first step in developing creative mastery. In the words of Oliver Sacks:

“All of us, to some extent, borrow from others, from the culture around us. Ideas are in the air, and we may appropriate, often without realizing, the phrases and language of the times. We borrow language itself; we did not invent it. We found it, we grew up into it, though we may use it, interpret it, in very individual ways. What is at issue is not the fact of ‘borrowing’ or ‘imitating,’ of being ‘derivative,’ being ‘influenced,’ but what one does with what is borrowed or imitated or derived; how deeply one assimilates it, takes it into oneself, compounds it with one’s own experiences and thoughts and feelings, places it in relation to oneself, and expresses it in a new way, one’s own.”

This is fundamentally different from the kind of imitation that generative AI enables, which compresses its training data into “lossy” copies of existing information rather than synthesizing and understanding it – the way we grasp the principle underlying addition instead of memorizing all possible sums. As Ted Chiang puts it in his New Yorker piece, “There’s nothing magical or mystical about writing, but it involves more than placing an existing document on an unreliable photocopier and pressing the Print button.”

So if you’re just a regular person who enjoys literature and the craft of writing – someone who believes that stories aren’t “just another product” like chairs and bricks – what are you supposed to do in the face of all this? Unlike the Great Automatic Grammatizator, our AI tools aren’t simply the sleazy product of two corrupt guys out to make a buck; they’re far more pervasive and are taking our society down some disturbing paths. These include AI companies’ reliance on the traumatizing and underpaid work of content moderators who scrub away the worst parts of the internet so we can use AI tools safely, data centers’ contributions to climate-damaging emissions, and AI’s well-known penchant for recycling human biases (which has been shown to work both ways, influencing humans to be more biased after using an AI tool).

Those are all good reasons to be not merely skeptical but actively interrogative, even confrontational, about letting tools like this into our lives. When Mr. Bohlen asks, “Who on earth wants a machine for writing stories?” it should remind us to think hard about who benefits from the existence of such machines. As the writer Chuck Wendig (of “artbarf” fame) says, “Generative AI empowers not the artist, not the writer, but the tech industry.” Anyone who values the written word and its meaning as part of a life well-lived should keep the financial conflict of interest behind all AI tools at the top of their mind. This is especially important to remember if you share your own writing with programs like ChatGPT – unless you explicitly opt out, anything you share will be used by OpenAI to train future versions of the AI. (And with some AI tools, you can’t opt out at all.)


Furthermore, not only do we not need machines to write stories for us; we can also reject the entire premise that what humans do is nothing special. The linguist and AI critic Emily Bender is an encouraging voice of reason and common sense on what our tools can, and should, do for us and how that differs from what we can do:

“We are not parrots. We do not just probabilistically spit out words. ‘This is one of the moves that turn up ridiculously frequently. People saying, “Well, people are just stochastic parrots,”’ she [Bender] said. ‘People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.’”

This stance is not anti-science; in fact, it shows a deeper appreciation for the science of how the human mind works and how language works, something that the AI cheerleaders haven’t earned the right to dismiss on our behalf. Why are we letting the Adolph Knipes of the world tell us what our minds and cultural systems are capable of? Why are we accepting lossy JPEGs of our culture as the real deal? And why are we trading in our story-making abilities for stochastic parrot droppings?

Additionally, we should remember the old adage that someone climbing successively taller trees can plausibly claim to be making progress toward the moon – at least for a while. In other words, getting to the point where chatbots can write the kinds of novels that enrich our culture is not a matter of scale; it would require a completely different kind of infrastructure, as different from today’s models as rocket fuel is from a taller ladder. Right now, all we have are AI companies gluing ladders together and excitedly telling us it’s only a matter of time before we reach the moon. It’s not just that these tools are bad at their jobs – a framing that implies they might eventually get good enough – it’s that they’re entirely the wrong tools for the task.

Writing anything interesting requires creative rocket fuel, not a ladder made of better materials. And we have no reason to believe that AI models will eventually discover that rocket fuel. As cognitive scientist Gary Marcus explains, we may already have reached the point of diminishing returns when it comes to AI’s truthfulness, reasoning, and common sense. That is, making larger and larger models is no longer getting us closer to improving these measures. “The measures that have scaled have not captured what we desperately need to improve: genuine comprehension,” Marcus says.


On the whole, the many aesthetic, ethical, and humanistic arguments for resisting the encroachment of AI into the creation of art – and writing specifically – are compelling and important. But I personally take a less serious, and very human, raised-middle-finger glee in the approach of Wendig’s sweary blog post:

“Let them push buttons and have robots tell stories to feed to other robots. We humans can all stay far the fuck away from it. We can gather around the campfire and tell our stories to each other.”
