
When ChatGPT was released late in November of 2022 my immediate response was, “meh.” But then I decided that I needed to check it out, if only out of curiosity. Within an hour or so my reaction went from “meh” to “hot damn!” I put it through its paces in various small ways before deciding to give it a whirl on a task I understood well, interpreting a text. In this case I chose a film, Steven Spielberg’s Jaws. I published the result here in 3QD: Conversing with ChatGPT about Jaws, Mimetic Desire, and Sacrifice (Dec. 5, 2022).
I then dove in with both feet. For a year or so I was mostly interested in observing ChatGPT’s behavior. In time I began using it as an assistant in my own work, even as a collaborator. I started working with Anthropic’s Claude in November of 2024. Starting in December I did a series of posts in which I had Claude discuss photos I uploaded. At the same time I integrated Claude into my general intellectual workflow alongside ChatGPT.
In March of this year I began working on a somewhat speculative book on the long-term prospects of AI. Current working title: Play: How to Stay Human in the AI Revolution. I have been making extensive use of both Claude and ChatGPT in developing the book. I’ve used them for general research, for summarizing some of my scholarly research and evaluating it for use in the book, for working on the overall structure of the book, and for sundry other tasks along the way. The purpose of this article is to give you a glimpse into that work.
First come three sections of preliminary material. The fourth section is a short essay that ChatGPT wrote: Beyond the Human/AI Divide. I conclude with some questions about just who or what wrote that essay.
AI as Cultural Technology
A number of social scientists and humanists have been developing the idea that large language models (LLMs) are “cultural technology.” As Henry Farrell put it in a recent paper reviewing some of these efforts: Large language models are cultural technologies. What might that mean? I decided to have ChatGPT “read” and evaluate Farrell’s paper, both on general principle and as preparation for further work. Here’s the prompt I gave it, along with the beginning of its response:
I’ve uploaded a document by Henry Farrell to the project: LLMs as cultural technologies. Would you summarize it for me.
I’ve read Henry Farrell’s paper LLMs as Cultural Technologies. Here’s a summary:
Farrell argues that large language models (LLMs) should be seen less as standalone “intelligences” and more as cultural technologies—tools embedded in long histories of how humans produce, circulate, and reshape knowledge. He stresses three main points:
- Cultural embedding, not autonomy
LLMs don’t generate meaning in isolation. They work because they are trained on the accumulated cultural output of human societies. Their significance lies in how they reconfigure access to, and manipulation of, this cultural record—not in their “intelligence” per se.
- Historical parallels
He compares LLMs to earlier cultural technologies like writing, printing, and the internet. Each changed the ways humans could preserve, transmit, and remix knowledge. Similarly, LLMs accelerate and transform these processes, making patterns in culture newly visible and usable, but also prone to distortion.
- Political and institutional stakes
The adoption of LLMs is not just technical but deeply political. How they’re integrated into institutions (universities, media, law, science) will shape whether they amplify inequalities or enable new forms of participation. Like earlier cultural technologies, they can reinforce existing power structures or open paths for new cultural practices.
Farrell concludes that debates about AI “intelligence” miss the point. The real question is how LLMs, as cultural technologies, will reorganize human practices of knowledge and meaning-making. Their importance is less about replacing humans and more about altering the ecology of cultural production and exchange.
There’s much more to our conversation, but that’s sufficient for my immediate purposes. This approach does not center on the capacities of machines in relation to the capacities of humans. Whether LLMs are as intelligent as humans, now or in the foreseeable future, or even whether they are potentially more intelligent than humans, is not central to this approach. Rather, this approach centers on the collective nature of human culture and the role that LLMs can play in it.
Contrast this with a remark Steven Pinker made in a debate with Scott Aaronson on AI:
…discussions of AI and its role in human affairs — including AI safety — will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do. We won’t get clarity on AI if we treat the “I” as “whatever fools us,” or “whatever amazes us,” or “whatever IQ tests measure,” or “whatever we have more of than animals do,” or “whatever Einstein has more of than we do” — and then start to worry about a superintelligence that has much, much more of whatever that is.
In this approach, which has dominated popular discussion of AI, “intelligence” is poorly defined, but is treated as something that is easily scalable. Mice have some of it, cats and dogs have somewhat more, we have more still, and at some point in the not-very-well-defined future computer systems will be vastly more intelligent than we are and then….
Well, that’s not clear. But it does appear that humans will at least be made obsolete. Worse than that, we may be enslaved or simply eliminated. These particular anxieties do not arise with the cultural technologies approach.

The Fourth Arena
When you read ChatGPT’s essay you’re going to find it mentioning something called the “Fourth Arena,” which has become central to my thinking and is important in that book I’m working on. I introduced the concept in the following article: Welcome To The Fourth Arena – The World Is Gifted (3QD June 20, 2022). Here are the opening paragraphs:
The First Arena is that of inanimate matter, which began when the universe did, fourteen billion years ago. About four billion years ago life emerged, the Second Arena. Of course we’re talking about our local region of the universe. For all we know life may have emerged in other regions as well, perhaps even earlier, perhaps more recently. We don’t know. The Third Arena is that of human culture. We have changed the face of the earth, have touched the moon and the planets, and are reaching for the stars. That happened between two and three million years ago, the exact number hardly matters. But most of the cultural activity is little more than 10,000 years old.
The question I am asking: Is there something beyond culture, something just beginning to emerge? If so, what might it be?
That something I have simply been calling the Fourth Arena. And that, in turn, is the cumulative result of some of those cultural technologies that Farrell discusses. In a series of essays beginning in 1990 with The Evolution of Cognition, David Hays and I treat these technologies as introducing more sophisticated and powerful computational architectures into our collective culture. The key to this evolution is something the great linguist Roman Jakobson called the metalingual function of language, the capacity language has for referring to itself. The sequence is roughly as follows:
Speech = 1 million years ago = 33,333 generations ago
Writing = 10,000 years ago = 333 generations ago
Computation = 1,000 years ago = 33 generations ago
Computing = 100 years ago = 3 generations ago
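A quick note on the arithmetic: the list leaves the conversion implicit, but the generational figures assume roughly thirty years per human generation, so that, for example:

$$\text{generations ago} \;\approx\; \frac{\text{years ago}}{30}, \qquad \frac{1{,}000{,}000}{30} \approx 33{,}333.$$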
In thinking about the current technology, LLMs and machine learning in general, we need to consider not only what that technology can do, but also the understanding we develop as we come to terms with and develop that technology. At the moment the inner workings of these models are obscure. But there is no reason to believe that that will always be the case. It’s not as though human thought will stop evolving now that AI is arriving. Not at all. Human thought will use AI to expand its capabilities. THAT’s the Fourth Arena.

Latour Into The Mix
We’re almost there, but we need to make one more connection before we’re ready to read ChatGPT’s brief essay. Back in 2011–2015 I spent a fair amount of time reading and blogging about the ideas of Bruno Latour. Those ideas seemed relevant to the cultural technologies view of LLMs. So I put the question to ChatGPT:
What interests me is that the cultural technologies view doesn’t see LLMs (and AI in general) as approximations to humans. Rather, they are extensions of our capacities. I’m reminded of Bruno Latour’s argument that human culture is not something standing apart from the natural world, “over there,” as it were. It seems to me that the idea of creating a sharp distinction between humans and AIs is similar to sharply setting humans apart from the rest of the world, the “natural” world. It further seems to me that much of the anxiety occasioned by AI – fear of humans being rendered meaningless by these more intelligent machines, and beyond that, fear of being dominated and even destroyed by the machines – that anxiety depends on making a sharp distinction between humans and AIs. Drop the distinction and adopt the cultural technologies point of view and things change.
ChatGPT gave a fairly detailed response, but this is all we need here:
Latour says: “We have never been modern.” We never truly separated human culture from nature; we live in networks of humans and nonhumans.
- By analogy: we have never been post-human. The human/AI binary is an artifact of modern thinking.
- In reality we live in hybrid assemblages of humans, machines, symbols, rhythms, and stories. […]
Instead of “will AI replace us?” the question becomes: how will this new cultural technology extend us, and what stories, institutions, and myths will help us live well with it?
At that point ChatGPT asked me if I wanted it to write an essay “that explicitly ties together Farrell, Latour, and your Fourth Arena vision.” I told it to go ahead.

ChatGPT Writes an Essay
Beyond the Human/AI Divide:
LLMs as Cultural Technologies in the Fourth Arena
1. The Problem of Bad Distinctions
Much of the current anxiety about artificial intelligence comes from treating it as something wholly other. AI is imagined as an alien intelligence, poised to outstrip us, displace us, or even destroy us. The metaphysics here is clear: humans are on one side, machines on the other. The boundary is sharp, the consequences existential.
But this is not the first time we have tried to ground meaning in sharp distinctions. For the last several centuries, “modernity” has insisted on separating humans from nature. As Bruno Latour argued in We Have Never Been Modern, this was always a fiction. Human culture was never detached from the natural world; we have always lived in networks of humans and nonhumans, in entanglements of agency and matter. The separation was conceptual, not real — and it generated both alienation and ecological crisis.
Today we risk repeating the same mistake with AI. To understand our situation, we need to see beyond the divide.
2. Farrell’s Cultural Technologies Perspective
Henry Farrell’s essay Large Language Models are Cultural Technologies provides a way forward. His argument is deceptively simple: large language models (LLMs) are not intelligences in their own right but cultural technologies, akin to writing, printing, or the internet. They do not create meaning from nothing; they reorganize access to the vast cultural record we have already produced.
Seen this way, LLMs are not competitors to humans but extensions of our capacities. They amplify our ability to retrieve, remix, and generate knowledge, just as writing extended memory, or printing extended communication. The key question is not whether they are “intelligent” but how they will be embedded in institutions: who controls them, for what purposes, and with what consequences for cultural participation.
Farrell brings us down to earth. Instead of AI as alien, we see AI as another step in the lineage of cultural prostheses.
3. The Latourian Resonance
Farrell’s insight resonates with Latour’s critique of the human/nature split. Both challenge the metaphysics of separation. Just as culture was never detached from nature, so human intelligence is not detached from its technologies. The stylus, the book, the algorithm — each is part of the ecology of cognition.
Drop the human/AI divide and much of the anxiety dissipates. Fear of domination by machines depends on imagining them as autonomous beings. Recognize them as extensions of our cultural practice, and the question changes: not “Will they replace us?” but “What forms of life do they make possible?”
4. The Fourth Arena Horizon
This is where the Fourth Arena framework comes in. If matter, life, and culture were the first three arenas of existence, then LLMs and related technologies open a fourth. In this new arena, humans live in a dense ecology of digital doppelgangers: not alien intelligences, but cultural extensions that reflect, amplify, and transform us.
The question is not whether we remain meaningful in comparison to machines. The question is what new meanings become possible when play, pleasure, and creativity are conducted in partnership with them. The Fourth Arena is the horizon where Homo economicus gives way to Homo ludens — a society organized not around scarcity and competition, but around play, improvisation, and distributed attachment.
5. Anxiety Reframed
From this perspective, the real danger is not AI itself but the metaphysics that insists on separation. If we cling to the human/AI divide, we doom ourselves to endless cycles of fear: of replacement, of obsolescence, of destruction. If we accept LLMs as cultural technologies, the anxiety shifts to something more tractable: how to embed them in institutions that nurture dignity, creativity, and play.
In Latour’s terms, the challenge is not to purify the categories — human here, machine there — but to recognize the hybrids we already are. In Farrell’s terms, the task is to design institutions that democratize access and prevent capture by oligarchy. In Fourth Arena terms, the work is to build a mythology that embraces these technologies as partners in the next phase of human becoming.
6. Conclusion: Toward a Playful Metaphysics
The debate about AI intelligence misses the point. What matters is not whether LLMs think like us, but how they extend us — how they reorganize the ecology of meaning and open the possibility of new forms of play.
Just as the printing press made possible the public sphere, and the internet made possible global networks, LLMs can make possible a new ludic culture — provided we abandon the bad metaphysics of separation.
Farrell gives us the institutional grounding; Latour gives us the philosophical critique; the Fourth Arena provides the mythic horizon. Together they suggest a new story: one in which humans and their cultural technologies are not enemies but partners, co-creating a world where anxiety gives way to curiosity, and competition yields to play.

Who Or What Bears Responsibility for that Essay?
We could discuss the intellectual merits of that essay, but that’s not my immediate purpose. I’ve got something else in mind: Who or what wrote that essay? There is an obvious answer: ChatGPT. ChatGPT, after all, produced the words.
But that’s not terribly informative. For ChatGPT is a vast, sprawling creature. I know next to nothing about how it exists physically. But I’m guessing that it requires more than one physical instance up there in ‘the cloud’ to handle the millions upon millions of prompts it gets every day. Which physical instance produced those words?
But even that will not tell us very much. That essay is also the product of my interactions with ChatGPT since, shall we say, March of this year, which is when I established a Project (a term of art) in ChatGPT that I call “Kisangani Materials.” I established that project as a nexus for work on my book, Play: How to Stay Human in the AI Revolution. That project currently has 17 different chat streams, including the most recent one, “LLMs as cultural technologies.” Those various chats are available to ChatGPT as context for any work I do within that project. ChatGPT’s essay would have been impossible without that context.
Thus it is a very specialized instance of ChatGPT that wrote that essay, an instance characterized by extensive interaction with me. In particular, that essay depends on a connection I was able to make between Farrell’s essay and the ideas of Bruno Latour. I claim no special insight here; the connection seems obvious enough to me. I note only that my own intellectual history is replete with hundreds of thousands of such connections.
Should I sign the essay myself? That doesn’t seem quite right. I didn’t produce the words; ChatGPT did. But it didn’t, and couldn’t, produce those words without me. We collaborated. You could examine the history of our exchanges to get a detailed sense of that collaboration. I’m not sure just what you’d get out of that beyond the sense that, yes, there’s been a lot of interaction going on.
That surely is central to the idea that LLMs are cultural technologies. They are trained on a large corpus of texts that have been created by millions of people, each of whom has their own history of interaction with others.

Note: ChatGPT created all of the images for this article.
