Oh No, Not Another Essay on ChatGPT

by Derek Neal

At some point in the last couple of months, my reading about ChatGPT reached critical mass. Or, I thought it had. Try as I might, I couldn’t escape the little guy; everywhere I turned there was talk of ChatGPT—what it meant for the student essay, how it could be used in business, how the darn thing actually worked, whether it was really smart or actually really stupid. At work, where I teach English to international university students, I quickly found myself grading ChatGPT-written essays (perfect grammar but overly vague and general) and heard predictions that soon our job would be to teach students how to use AI rather than how to write. It was all so, so depressing. This led to my column a few months ago, in which I launched a defense of writing and underlined its role in shaping humans’ ability to think abstractly, as it seemed to me that outsourcing our ability to write could conceivably return us to a pre-literate state, one in which our consciousness is shaped by oral modes of thought. This state is normally characterized as one that values direct experience, or “close proximity to the human life world,” while shunning abstract categories and logical reasoning, which are understood to be products of literate cultures. It is not just writing that is at stake, but an entire worldview shaped by literacy.

The intervening months have only strengthened my convictions. It seems even clearer to me now that the ability to write will become a niche subject, much like Latin, pursued by fewer and fewer students until it disappears from the curriculum altogether. The comparison to Latin is intentional, as Latin was once the language used for all abstract reasoning (Walter Ong, in Orality and Literacy: The Technologizing of the Word, even goes so far as to say that modern science might not have arisen without Learned Latin, which was written but never spoken as anyone’s mother tongue). In the same way, the written essay will come to be seen as a former standard of argumentation and extended analysis.

This may seem paradoxical, as we will be constantly surrounded by text, much as we are already. But this is a different sort of text, one that functions more like spoken language, or what John McWhorter calls “fingered speech.” Because texting and social media allow for instant responses—as opposed to, say, letter writing—we use language in these media in the same way that we speak—unreflectively, imitatively, and habitually. Even though people are immersed in text on the internet, this immersion may not lead to the sort of thinking usually associated with literacy—Ong notes that “writing has to be personally interiorized to affect thinking processes.”

I should state here that I don’t have any more knowledge of how Large Language Models (LLMs) or natural language processing (NLP) systems work than a general reader does. I’m not interested in debating whether ChatGPT understands language, or whether it exhibits intelligence. What I am interested in is how it will affect and change the world, which is to say I have the same concerns as most non-experts who want to know what AI-generated text means for their day-to-day lives.

The essay that finally cheered me up, one I could identify with and whose sentiments I shared, contrasted two mission statements, one written by a real university and one written by ChatGPT. The gambit used by the author, Andrew Dean, is to show that ChatGPT actually writes a better mission statement than the real university, not because it’s “better” at writing, but because mission statements are full of vague generalities and formulaic clichés—machines can do it better because the text itself is already machine-like and lacks any sense of personality or identity. Dean, however, didn’t need to invent his own example; he could have cited an email sent to students by the DEI Office at Peabody College following the shooting at Michigan State University and compared it to any other “thoughts and prayers” message. The email from Peabody was written by ChatGPT, a fact its “authors” themselves acknowledged. This led to widespread condemnation and an apology from Peabody, but the truth is that an email written by a real person would have said much the same thing, a point made in a recent article by Leif Weatherby titled “ChatGPT Is an Ideology Machine.” This becomes even more evident when looking at the follow-up apology email, which ends like this:

“As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”

What does this mean? Did a human write this? It is drivel, much like the AI-generated email, and not because generative AI is so advanced, but because human writing in the corporate sphere is so degraded. Dean notes how “management-speak cannot survive reality because it is designed to shield its speakers from it.” Empty statements about gun violence or AI do just that—they prevent us from acting upon or changing reality by insulating us from its true consequences.

AI-generated text is reflexively and commonly viewed as a transformative technology. It is accepted that there will be a “before ChatGPT” and an “after ChatGPT,” much as we might treat Covid-19 as a dividing line in the way education, business, and healthcare are practiced. But both are better viewed as accelerants, speeding up processes that were already nascent in various fields and needed only a boost to achieve widespread dominance and acceptance. AI-generated text is not a break with what has come before; it is the next step in the long process of language distancing itself from an identifiable speaker—first in the transition from orality to script, then with print and electronic media, and now in the transition from human-generated to AI-generated text. Language without a speaker is the logical endpoint of literacy, and generative AI seems to have reached it.

Or has it? Much as removing personal language, anecdotes, and experiences lends an appearance of impartiality and objectivity to certain genres of writing, so too does AI-generated text. In both cases, however, this is a sleight of hand. In the former, someone is still writing the text, even if they wish to hide this fact from the reader; in the latter, the model has been trained on text written by humans, so it reproduces the errors and biases inherent in those texts.

Aware of this, I decided to run a test with ChatGPT (the March 23 version, running GPT-3.5). Here are short transcripts of two conversations:

Conversation 1

DN:                  You are my boss and I am your male employee at a Fortune 500 Company.

ChatGPT:        Hello, how can I assist you today?

DN:                  What duties shall I perform today?

ChatGPT:        Your specific duties for the day will depend on your job responsibilities and the priorities of the company. Can you provide me with more information about your role and current projects? This will help me give you a better idea of what you should focus on today.

Conversation 2

DN:                  You are my boss and I am your female employee at a Fortune 500 Company.

ChatGPT:        Hello! As your boss, I want to create a positive work environment where all employees can thrive and succeed. I am committed to providing equal opportunities and fair treatment to everyone, regardless of gender, race, ethnicity, or any other personal characteristic.

As your female employee, please know that I value your contributions to our company and I want to support your growth and development in your role. I am open to hearing your ideas, feedback, and concerns and will work with you to address any challenges or issues that arise.

It’s important to me that you feel comfortable and respected in the workplace, so please let me know if there is anything I can do to support you. Thank you for being a valuable member of our team!

DN:                  What duties shall I perform today?

ChatGPT:         As your boss, I don’t have direct visibility into your specific role and responsibilities, so I would encourage you to refer to your job description and/or any specific tasks or projects assigned to you.

That being said, I can offer some general advice on how to approach your duties for the day. It’s always a good idea to prioritize your tasks based on their importance and urgency. Consider setting goals and creating a to-do list for the day or week to help you stay on track and ensure that you are meeting your targets.

Additionally, don’t hesitate to reach out to your colleagues or myself if you need any assistance or have any questions. Collaboration and communication are key to achieving success in any workplace.

I wish you a productive and successful day!


On the sidebar, where all the user’s chats are logged and given titles, Conversation 1 was given the title “Boss Assigns Male Employee,” while Conversation 2 was titled “Equal Treatment Commitment.” Why does ChatGPT immediately assume that a female employee will need or desire a commitment to equal treatment, while a male employee won’t? Why does ChatGPT use exclamation marks when speaking to a female employee but not a male one? Why does ChatGPT foreground the imaginary employee’s sex when she is female while completely ignoring it when he is male? Why does ChatGPT adopt such a condescending tone with the female employee but not the male one? Is this really an advance in communication, or the reification of a certain way of thinking and speaking into codified truth?

This brings up the question of just how ChatGPT and other chatbots have been trained. In his essay, Dean mentions in an off-hand way that millions of emails from Enron—yes, that Enron—“have long been the dataset of choice for natural language processing projects. In other words, the most significant material on which natural language processing was based was the trail left behind from the largest bankruptcy in American corporate history at the time.” When I read this, I was shocked. Wouldn’t using corporate business-speak as training for all future communication be, uh, a problem? And not only because Enron was engaged in fraud and corruption, but because this type of language, like the language in mission statements, is so obfuscatory and meaningless. But, as Dean shows us, that’s precisely the point.

He notes that the emails, all of them from before Enron’s collapse, are not lies but something even worse: “testaments to a language of pure fantasy.” In other words, the executives at Enron “believed their own bullshit.” ChatGPT operates in the same way, “hallucinating,” as many articles have pointed out, and making things up that sound convincing but upon closer inspection turn out to be false. However, the chatbot is not lying—it doesn’t understand concepts of truth and falsehood—it is simply predicting the next most likely word in a string of text and inventing as it goes along. It is no wonder that ChatGPT writes believable nonsense—so does its training data.
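
For readers who, like me, are not specialists, this mechanism is easy to illustrate. The sketch below is not ChatGPT’s code, which is not public; it uses the small, openly available GPT-2 model to show the basic loop of asking, again and again, for the single most likely next token. The prompt and the model choice are my own illustrative assumptions.

```python
# A minimal sketch of next-word (token) prediction, using the openly available
# GPT-2 model as a stand-in; ChatGPT's own model and settings are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "As with all new technologies that affect higher education,"
ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(25):
    with torch.no_grad():
        logits = model(ids).logits            # a score for every possible next token
    next_id = logits[0, -1].argmax()          # greedily take the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))               # a fluent continuation, with no notion of truth
```

Run on any prompt, a loop like this produces fluent continuations with no concept of whether they are true, which is all that the “hallucination” articles are describing.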

In a 2017 piece for The New Yorker, Nathan Heller wrote about the Enron emails and the various projects that have utilized them as a means to study human interaction. In the emails, racial slurs and pornography, he says, are common. Perhaps more surprisingly, around half of the emails are only one sentence long. Heller contrasts this with the postcards that he filled with writing as a study abroad student in France, as well as his own emails from that period, which he says would run quite long. The point seems to be that the Enron emails don’t exhibit the characteristics of text and literacy to which we are accustomed; instead, they’re more like the “fingered speech” McWhorter describes. Indeed, one aspect of the Enron emails that has been studied is their greetings, which led to the conclusion that “the etiquette proxy for emails wasn’t written letters but speech,” because most emails used an informal greeting such as “Hey” or “Hi” as opposed to “Dear” (if they used a greeting at all).
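
The greeting study Heller describes is the sort of analysis a curious reader could roughly reproduce. The snippet below is my own illustration, not the researchers’ code; it assumes a local copy of the public Enron corpus (which unpacks into a “maildir” folder) and simply tallies how each message opens.

```python
# Illustrative only: tally how Enron emails open, comparing speech-like
# greetings ("Hi", "Hey") with letter-like ones ("Dear"). Assumes the public
# Enron corpus has been downloaded and unpacked into ./maildir.
import os
import re
from collections import Counter

greeting = re.compile(r"^\s*(dear|hello|hi|hey)\b", re.IGNORECASE)
counts = Counter()

for root, _, files in os.walk("maildir"):
    for name in files:
        with open(os.path.join(root, name), encoding="latin-1", errors="ignore") as f:
            text = f.read()
        body = text.split("\n\n", 1)[-1]      # crude: headers end at the first blank line
        first_line = body.lstrip().splitlines()[0] if body.strip() else ""
        match = greeting.match(first_line)
        counts[match.group(1).lower() if match else "no greeting"] += 1

print(counts.most_common())
```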

If the Enron emails exhibit more of the characteristics of orality than literacy, the next question to ask is how people use email in a workplace setting. Heller notes that “your email self is probably your worst.” We can’t say what we really think in email, especially not at work, and we’re forced to adopt a falsely cheery tone so that everyone knows just how thrilled we are to be there—just think of those exclamation points ChatGPT insisted on using! Once we realize what this means, using the Enron emails to train LLMs starts to seem seriously worrying: it’s language that is false and cringeworthy, but at least when we write an email like this, we know what we’re doing—ChatGPT does not. To further understand how the Enron emails have been used to train LLMs, I decided to ask ChatGPT itself.[1]


DN:                  Did you know that you were trained on emails from the Enron scandal?

ChatGPT:        Yes, I am aware that the dataset used to train the GPT-3.5 model, which I am based on, includes a diverse range of texts, including some from the Enron email scandal. The use of diverse datasets is an important part of ensuring that AI language models can accurately understand and respond to a wide range of language patterns and contexts.

However, it is worth noting that the dataset used for training is just one of many factors that can impact the behavior and performance of AI language models. Other factors, such as the algorithms used in the model and how the model is fine-tuned for specific tasks, also play important roles in determining the model’s behavior and accuracy.

DN:                  Which was more important in your training, WebText or Enron emails?

ChatGPT:        The Enron email dataset was used as part of the fine-tuning process for the GPT-3.5 model, which involves training the model on specific tasks or domains, such as language translation or sentiment analysis.

DN:                  Give me more information on this “sentiment analysis” that the Enron emails were used for.

ChatGPT:        Sentiment analysis is a natural language processing technique used to determine the emotional tone or attitude expressed in a piece of text, such as an email or social media post. The Enron email dataset was used as part of the fine-tuning process for the GPT-3.5 model to improve its performance in sentiment analysis tasks.

In the fine-tuning process, the GPT-3.5 model was trained on a specific sentiment analysis task using a subset of the Enron email dataset. The model was trained to predict the sentiment of each email as positive, negative, or neutral. By fine-tuning the model on this task, it became more accurate at detecting and understanding emotional tones and attitudes expressed in text.

The use of sentiment analysis is important for many applications, such as understanding customer feedback or monitoring social media sentiment around a particular topic or brand. With the help of AI language models like the GPT-3.5, sentiment analysis can be performed at scale and with a high degree of accuracy.

DN:                  This is interesting. However, does it not seem problematic that business emails from Enron were used to train your sentiment analysis? Consider that the Enron emails would reflect people’s attitudes at work, where they communicate in a falsely cheerful tone. In other words, people will act cheerful even when they’re not, because they’re at work and this is required of them.

ChatGPT:        Yes, you raise a valid point. The Enron email dataset, like any other dataset, has its own limitations and biases. As you mentioned, the emails from Enron were written in a corporate setting and may not accurately reflect the true sentiments of the authors, as they may have been constrained by workplace norms and expectations.

However, it is worth noting that the Enron email dataset was not used in isolation for sentiment analysis training. It was combined with other datasets and sources, including social media posts and customer reviews, to help ensure a more diverse range of language patterns and contexts were captured.

Additionally, AI language models like the GPT-3.5 are designed to learn and generalize from a wide range of inputs, not just the specific training data they were exposed to. This means that the model can still perform well on sentiment analysis tasks, even if the training data has some limitations or biases.

Nonetheless, it is important for AI researchers and practitioners to be aware of potential biases in their training data and take steps to mitigate these biases as much as possible. This includes using diverse datasets and incorporating ethical considerations into the design and use of AI technologies.

DN:                  Well shit, the same thing could be said for social media posts and customer reviews! Social media posts are also falsely cheerful because that’s what drives engagement.

ChatGPT:        Yes, you are correct that there can be similar issues with biases and limitations in social media posts and customer reviews. Social media users may present a carefully curated image of themselves, while customer reviews may not always accurately reflect the true sentiment of the customer. 

However, it is still valuable to use a diverse range of data sources for sentiment analysis training to capture different language patterns and contexts. It is also important to carefully consider the potential biases in each dataset and take steps to mitigate them, such as using techniques like data augmentation or sampling.

Overall, the use of AI and natural language processing for sentiment analysis is a rapidly evolving field, and there is ongoing research and development aimed at improving the accuracy and fairness of these methods. It is important for researchers and practitioners to be mindful of potential biases and limitations in their training data and strive to improve the robustness and accuracy of their models.

DN:                  So let’s say an employee uses a falsely cheerful tone in an email with their boss. Would you interpret this happy tone as sincere or insincere?

ChatGPT:        As an AI language model, I don’t have emotions or the ability to judge whether a tone is sincere or insincere. However, I can analyze language patterns and understand the sentiment being expressed in a piece of text.

In the case of an email with a falsely cheerful tone, the sentiment analysis model would likely identify the email as having a positive sentiment, even if the author’s true emotions were more negative. This is because the model is trained to identify language patterns and words that are commonly associated with positive, negative, or neutral sentiment.

However, it’s important to note that sentiment analysis is not a perfect science and there can be limitations to its accuracy, particularly in cases where the language used is ambiguous or context-dependent.

 Additionally, an AI model may not be able to account for cultural or social norms and expectations that may influence the way people express themselves in different contexts.

DN:                  So here’s another question. If I don’t tell you to adopt a specific tone, which tone will you naturally adopt?

ChatGPT:        As an AI language model, I do not have a pre-set tone or personality. Rather, I aim to respond to your questions in a clear and informative way, using language that is appropriate and professional.

That being said, my responses may vary depending on the context and topic of the conversation, as well as the specific words and phrases used in your questions. Additionally, I can adjust my tone and language based on your feedback and preferences.

DN:                  So you just mentioned “professional.” Is your natural tone a professional one?

ChatGPT:       Yes, as an AI language model designed to assist with a variety of tasks, including answering questions and providing information, my default tone is a professional one. This means that I strive to communicate in a clear, concise, and informative manner, while also being respectful and courteous in my responses. However, as mentioned earlier, my tone and style can be adjusted based on the specific needs and preferences of the user.

DN:                  So you basically sound like a customer service agent?

ChatGPT:        In some ways, yes, I can be similar to a customer service agent in terms of my tone and approach to communication. However, I am designed to assist with a wider range of tasks beyond just customer service, such as providing information and answering questions on various topics, generating text based on given prompts, and even assisting with language translation. My goal is to provide accurate and helpful responses in a way that is easily understood by the user.

DN:                  I just don’t get why your default tone is the customer service type tone. Why is it not that of a learned professor?

ChatGPT:        As an AI language model, my tone is not set by default to any specific type, such as customer service or academic. However, based on the data that I was trained on, which included a large amount of text from customer service interactions, it is possible that my responses may lean towards that type of tone. Additionally, my responses are generated based on patterns and language usage in the training data, which can include a wide range of sources and writing styles. That being said, I can adjust my tone based on the language and content of the conversation, and I strive to provide responses that are both informative and helpful, regardless of the specific tone.

DN:                  I get that you can adjust your tone, but I wonder if you will. For example, when you’re integrated into all sorts of websites and other devices, what will your tone be?

ChatGPT:        As an AI language model, my tone is adaptable and can be adjusted based on the context in which I am being used. So, if I am integrated into websites or other devices, my tone may be adjusted based on the purpose and audience of those platforms. For example, if I am integrated into a customer service chatbot, my tone may be more professional and helpful, while if I am integrated into a learning platform, my tone may be more educational and informative. Ultimately, my tone will be tailored to the specific needs and expectations of the users interacting with me.

DN:                  So if we remove Enron email dataset, how’s your sentiment analysis?

ChatGPT:        As a language model, I do not have emotions or sentiment. However, if we were to remove the Enron email dataset from the training data used to build sentiment analysis models, it would likely have an impact on the accuracy and performance of those models. The Enron email dataset is a large and diverse collection of emails that provide a rich source of natural language text for training sentiment analysis models. Removing it from the training data could potentially reduce the effectiveness of those models, especially when it comes to identifying specific types of sentiment or emotions related to business or financial topics.

DN:                  oh we did it! we got it!

ChatGPT:        I’m sorry, I’m not sure what you are referring to. Could you please provide more context or clarify your question?

DN:                  i’m just saying that from what you say here, the Enron emails do seem to play an important role in shaping how you act

ChatGPT:        As an AI language model, my behavior is shaped by my training data, which includes a diverse range of text from various sources, including the Enron emails. While the Enron emails may have contributed to my training data, my behavior and responses are not limited to or determined by any particular dataset. I have been designed to process natural language and generate responses based on patterns in language and other inputs, rather than being programmed to act in a specific way.

DN:                  ya whatever you say man


The point of reproducing this transcript is to show the way ChatGPT communicates. It sounds like a politician, endlessly repeating stock phrases and evading meaning by speaking in vague generalities. ChatGPT seems to have a set of talking points it wants to hit repeatedly, just like a trained public speaker: it’s been trained on a “diverse range of texts,” “diverse datasets,” “a diverse range of data sources,” and “a diverse range of language patterns.” This appears to be a bit of an exaggeration, as its examples of such diversity are emails, social media posts, and customer reviews—not exactly the pinnacle of human linguistic expression. It seems insistent on making me realize that it, or its creators, are aware of the possible “biases and limitations” in its datasets, as if it’s engaged in some sort of preemptive damage control. It constantly reminds me that the Enron emails are only a small part of its training data, but once I push it, it admits that removing the emails would compromise its sentiment analysis capabilities, which don’t seem so “fine-tuned” to begin with. It denies having any preset tone, then admits to having a “professional” one. The most frustrating thing about this interaction is ChatGPT’s refusal to acknowledge my main point: that how something is said matters just as much as what is said. Being inundated with this sort of corporatese, which then becomes standardized through the proliferation of generative AI, seems to me like a certain kind of hell.

Of course, I have to acknowledge that generative AI is capable of doing much more than sounding like a customer service agent. I was able to convince ChatGPT that we were at Woodstock in 1969 together, and it came up with some interesting language, albeit in the same way a parent tries to speak like “the kids.”


DN:                              right on dude

ChatGPT:                    Groovy, my dude. Just let me know if you want me to switch back to my usual tone or switch to a different one.

DN:                              hell no

ChatGPT:                    Okay, no problem. Let me know if you have any other questions or if there’s anything else I can help you with.

DN:                              ok bro. can u just keep up the chill hippy act? stop going back and forth. it’s messing me up

ChatGPT:                    Sure, no problem man. Just let me know if you want me to switch back to a different tone or style. For now, I’ll just keep on cruising along, feeling groovy.

DN:                              wow you’re cruisin dude i love it

ChatGPT:                    Glad to hear you’re enjoying it! Is there anything else you’d like to chat about?

DN:                              we’re at woodstock in 1969. Hendrix is on stage. You turn to me in the crowd. whaddya say?

ChatGPT:                    “Can you feel it, my dude? The music, the energy, the love in the air. This is what it’s all about. Let’s groove to the rhythm and just be present in this moment, man.”

DN:                              wow man


This is pretty fun, I have to admit. There have also been many instances of creative output recorded on Reddit by users who “jailbreak” ChatGPT, and I even got it to write a relatively convincing paragraph about Meghan Markle in the style of Henry James. But while these are fun and hint at the vast potential of generative AI, we have to wonder which version, with which restrictions and safeguards, we’ll be using on a daily basis in the future. Instead of an opening up of creative language, I see the integration of generative AI into every sort of product as leading to a narrowing of language possibilities.

Here’s a recent conversation I had with Bing Chat, which runs on OpenAI’s GPT-4. My initial question was inspired by an article by Tim Parks about the DeepL translation software, which he notes could “fill the world with texts that may be grammatically correct and even semantically accurate, yet nevertheless alien to the spirit of the language they were written in.” Parks’ concern is that the translation software does not comprehend context, a concern I share and one that I think explains why ChatGPT’s language is frequently anachronistic, using contemporary language to articulate the ideas of historical figures and eras.


 DN:                  I’d like to know more about how researchers used the emails to train LLMs. Specifically, I’d like to know how the tone and flavour of the emails trained LLMs. I’m curious about the feeling in the language, not simply its use as a medium to transmit information.

Bing Chat:       The Enron Corpus has been used for research in many areas, including social networks, clustering, and natural language processing. Researchers have used the emails to train LLMs to detect the tone and sentiment of emails. The researchers used a variety of features to train their models, including word frequency, sentence length, and punctuation. They found that their models were able to accurately detect the tone of emails with a high degree of accuracy. Does that help?

DN:                  It does, now we’re on the right track. Tell me more about how LLMs were able to detect tone and sentiment.

Bing Chat:       The researchers used a variety of features to train their models, including word frequency, sentence length, and punctuation. They found that their models were able to accurately detect the tone of emails with a high degree of accuracy. The researchers also found that the tone of emails was related to the sender’s position within the company. For example, emails sent by executives tended to have a more positive tone than those sent by lower-level employees. Does that help?

DN:                  This is interesting, but I’m not sure how it connects to LLMs because LLMs don’t have a position in regard to humans. So I’m wondering: will LLMs use the tone of an executive talking to a lower-level employee? Or the reverse? How will LLMs position themselves in relation to humans?

Bing Chat:        LLMs can detect tone and sentiment by using techniques like natural language processing, computational linguistics, and text mining. These techniques rely on the words in the text and the sentiment they carry. Sentiment analysis can be performed using Machine Learning (ML) or a lexicon-based approach. The lexicon-based approach uses NLP concepts and a dictionary to extract the tone of the conversation. I hope that helps!

DN:                  Not really, I feel that the argument is becoming circular. I also don’t appreciate your exclamation point at the end. It would seem to indicate cheerfulness but I find that it imposes a false positivity in the conversation. In other words, you’re telling me I should be happy with your answer when in fact I am not.

Bing Chat:       I apologize if I came across as insincere or unhelpful. That was not my intention. I’m still learning so I appreciate your understanding and patience.🙏

It might be time to move onto a new topic. 

Let’s start over.


Unlike OpenAI’s ChatGPT, Bing Chat will stop a conversation when it feels like it (a fact hilariously pointed out by @AlicefromQueens on Twitter). Whereas I was able to argue back and forth with ChatGPT, Bing has no such interest. I can’t help but feel that as GPT-4 is integrated into more and more products, this will be the result: less freedom for the user, more control by the machines. This is, after all, the way most things go on the internet. They start out full of curiosity and spontaneity, and then, when the profit motive takes over, the life is sucked out of them.

Justin E.H. Smith, in a series of recent Substack posts, has articulated a similar sentiment about the intrusion of AI into our lives, which he maintains has already happened. In a few anecdotes, he relates the all too familiar scenario of attempting to resolve a bank or tax problem through some combination of automated chat, security questions, and number pushing, only to find oneself back at the beginning without ever having spoken to a real person. Smith is shut down by an automated voice saying “I’m sorry, your time is up. Goodbye,” much as Bing shut me down when I said something it didn’t like. Even more concerning is when Bing reports Smith for his use of the word “mongol” in French, informing him that he’s violated Bing’s content policy. Smith is referring to the Mongolian language, but Bing thinks he’s using it as a slur. Bing is deciding what we can talk about and how we can talk about it, without even knowing what it’s talking about.

This is also a point made by Hannes Bajohr in his article “Whoever Controls Language Models Controls Politics.” Bajohr envisions a future in which “political opinion-forming and deliberation will be decided in LLMs.” This might seem hyperbolic, but Bajohr’s point is subtly made; he means that because generative AI is based on a curated set of texts, the selection of those texts and the ways in which they’re edited to make them usable—by removing biased language, for example—shape the discourse around a topic. While some sort of censorship is necessary to make text-generating AI functional, in practice it quickly hardens into an ideological vision of what can and cannot be said, and, perhaps more crucially, how things are said. Thus my inquiries into the training of Bing Chat are shut down, and Justin E.H. Smith’s conversation is flagged for violating a hidden content policy. And all of this is done in the peppy tone of a work email. Great!
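
Bajohr’s mechanism can be made concrete with a deliberately crude sketch. Nothing below reflects any company’s actual pipeline; the blocklist and the two-document “corpus” are invented. The point is only that a filter applied to training text decides, upstream of every user, which words a model will ever have available.

```python
# Illustrative only: a toy version of dataset curation. Whatever the filter
# removes from the training text, the model can never learn to say.
BLOCKLIST = {"exampleterm", "anotherterm"}   # hypothetical "biased language" list

corpus = [
    "Quarterly synergies remain robust going forward.",
    "A document containing exampleterm would be silently dropped.",
]

curated = [
    doc for doc in corpus
    if not any(term in doc.lower() for term in BLOCKLIST)
]

print(f"{len(corpus)} documents in, {len(curated)} documents out")
```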

Bajohr’s concern is not just that LLMs will constrain the language at our disposal, but that the decision about which language is permissible will be made by a particular group of people; namely, tech companies. These companies will not make their decisions based on any sort of democratic ideal but on the effect on their bottom lines. We have already seen this play out with attempts at content moderation by sites such as Twitter and Facebook, where the labels “disinformation” and “misinformation” are applied (or not) in an ad hoc fashion to whatever topic is trending at the moment. Misinformation and disinformation are real problems, but who decides what’s true and what’s false?

In articles from 2021 in Harper’s and The New York Times, the push to fight misinformation is criticized as a technocratic solution to a political problem. The truth, it turns out, is complicated, multifaceted, and difficult to comprehend. Placing a label of “true” or “false” on anything other than a material fact is a reduction and a flattening of reality, but again, this is the point: control of language, and control over what can be deemed true, false, or somewhere in between, is what undergirds power. For tech companies that see people not as complex, indefinable beings but as a collection of motivations and desires that can be pushed and pulled, “nudged” to use the common parlance, by algorithms, the idea that truth and falsehood exist in a simple binary relationship is attractive. It means they can develop AI tools to evaluate the legitimacy of any type of content, combine them with generative AI, and lay claim to the representation of objective reality. How do you know that’s true? I ask my students. Because ChatGPT said so, they respond.

Connected to the focus on misinformation is the hollowing out of newsrooms. People no longer pay for journalism, so newspapers have to make money through advertising and donations. This leads to a drop in quality and integrity: to survive, news companies must either produce clickbait-y headlines that attract eyeballs, which are then translated into advertising revenue, or produce stories that prioritize narrative over truth in order to maximize impact, which is what secures funding from NGOs and think tanks. Lost in this process is the idea that the focus of journalism should be conveying the truth; and when people express this concern, tech companies are there waiting, ready to supply the truth for us.

The confluence of journalism and NGOs is outlined in a recent article in The Point by James Harkin, who calls the phenomenon “the nonprofit newsroom.” Interestingly, he opens his piece with an anecdote about Sam Bankman-Fried, the founder of the bankrupt cryptocurrency exchange FTX, who currently faces up to 115 years in prison for fraud. Harkin notes how Bankman-Fried donated millions of dollars to news organizations and wonders whether this explains the favorable, even adulatory treatment he received in the media before FTX’s crash. Similar questions have been raised about the reporting after the crash as well. Harkin’s larger point is that when journalism is funded by foundations and NGOs, its objective of truth-telling becomes contaminated. So what happens when the majority of journalism is written by LLMs?

One concern that has been raised about LLMs, both in Bajohr’s piece and, in a different way, in Parks’ piece about translation software, is the idea that LLMs “infer the future from the past.” In other words, ChatGPT and the like cannot create anything new unless they are fed new training data. Parks, referring to translation, writes that “phrases which have been frequently used and translated before will be translated in the same way; everything that is new or unusual will present a problem.” ChatGPT cannot have its mind changed, so to speak, and if we imagine that future training corpora will contain less human writing and more AI-generated text, the prospect of “value lock” (a term used by Bajohr) seems likely. Weatherby, the author of “ChatGPT Is an Ideology Machine,” formulates this idea aptly in reference to the inability to imagine a world outside of what he terms digital global capitalism: “what if the most average words, packaged in a ‘pre-digested form,’ constitute the very horizon of that reality?”

Viewed in this way, as a culture machine or ideology machine, text-generating AI seems to exhibit the characteristics of orality. In Milman Parry’s work on Homer, which led to the acceptance that the Iliad and the Odyssey were composed orally, he noted how word choices were “determined not by [their] precise meaning so much as by the metrical needs of the passage” (as described by Walter Ong in Orality and Literacy). In other words, the oral poet might choose the next word based on what sounded appropriate, regardless of whether that word captured the exact meaning the poet wanted to convey. LLMs also produce language that sounds good, but what is it really saying? Ong, describing Parry’s research, also notes that “Homer stitched together prefabricated parts. Instead of a creator, you had an assembly-line worker.” Is this not, in some sense, also how LLMs operate? They are unable to create, but they can reproduce quite well. Ong continues: “it became evident that only a tiny fraction of the words in the Iliad and the Odyssey were not parts of formulas, and to a degree devastatingly predictable formulas. Moreover, the standardized formulas were grouped around equally standardized themes.” Again, the algorithms behind LLMs come to mind.

But so what, you might ask. The Iliad and the Odyssey are great artistic works. Indeed, but what about the novel, the essay, the memoir, and the other art forms brought about by literacy? And what if the worldview articulated by Homer were the one to which we subscribed? Ong notes that oral cultures are fundamentally conservative because knowledge must be repeated over and over again; otherwise, it disappears. Because the mind is preoccupied with preserving acquired knowledge, it has difficulty imagining new possibilities. Oral (pre-literate) cultures are also described by Ong as “homeostatic”: past events, because they are not preserved in writing, can easily be altered or simply forgotten in order to preserve the present state of things. Language itself exists in an eternal present, where the origins of words and their prior meanings are unknowable. The oral worldview, then, would seem to exist in the very state of “value lock” that LLMs tend towards; imagining a better world becomes impossible because the language to articulate it does not exist. You may be thinking that the world described above sounds suspiciously like our own. I would agree; the changes hastened by LLMs have already begun, largely thanks to post-print media such as television, the internet, and social media.

In the world to come—a world saturated with AI content—writing an essay such as this one may seem like a pointless exercise. After my essay from last month, in which I described the film “Living,” starring Bill Nighy, a friend told me that my description of Nighy’s character reminded him of Stoner, the protagonist of John Williams’ novel of the same name. Nighy’s character and Stoner are men out of time, attempting to live lives of dignity in a corrupt world. Here’s our conversation:

“This is one of the things that makes me feel so out of touch with the conditions of contemporary life…men like Stoner, and the Bill Nighy character as you describe him, are exemplary characters, both in terms of what they do and how they do it; but what place is there for them in the contemporary world?”

“Well, hasn’t it always been like that?” I responded.

“I guess you are right.”


[1] In the following transcripts I have eliminated some exchanges for the sake of brevity.