by Joseph Shieber
In a recent short post, “ChatGPT and My Career Trajectory,” the prominent blogger, public intellectual, and GMU economist Tyler Cowen sees AI as posing a threat to the future of public intellectuals. (For what it’s worth, Michael Orthofer, who runs the excellent Complete Review book review website, seems to agree.)
Cowen writes:
For any given output, I suspect fewer people will read my work. You don’t have to think the GPTs can copy me, but at the very least lots of potential readers will be playing around with GPT in lieu of doing other things, including reading me. After all, I already would prefer to “read GPT” than to read most of you. …
Well-known, established writers will be able to “ride it out” for long enough, if they so choose. There are enough other older people who still care what they think, as named individuals, and that will not change until an entire generational turnover has taken place. …
Today, those who learn how to use GPT and related products will be significantly more productive. They will lead integrated small teams to produce the next influential “big thing” in learning and also in media.
I share Cowen’s sense that intellectuals (public or not) shouldn’t ignore the rapidly advancing, ever more sophisticated forms of AI, including ChatGPT. However, I’m not sure that Cowen is right to suggest that AI output will supplant human output, particularly if he’s making the stronger, normative claim that such a development would actually be commendable.
There seem to be three reasons to interact with ChatGPT, all of which can be teased out from Cowen’s comments. First, you could treat ChatGPT as a content creator. Second, you could treat ChatGPT as a facilitator for your own content creation. Finally, you could treat ChatGPT as an interlocutor. (Of course, these ways of interacting with ChatGPT are not mutually exclusive.)
Let’s deal with these ways of interacting with ChatGPT in order.
Consider first using ChatGPT as a content creator. Here again, there would seem to be multiple ways in which you might do so. On the one hand, you could use ChatGPT to create pastiches of the work of a content creator whose work you admire: new “James Joyce” novels, for example, or new “August Wilson” plays. On the other hand, you could treat ChatGPT as a content creator in its own right, and simply ask it to produce a novel or play for you, perhaps on a particular theme.
Would either of these ways of interacting with ChatGPT as a content creator vindicate Cowen’s suggestion that we’ll soon spend less time consuming the work of human content creators? I don’t see it.
Consider the case in which you’re using ChatGPT to create pastiches of a favorite author’s work. Here again there are two possibilities. Either those pastiches are inferior to that author’s work, or they’re equal (or superior) to it. In the first case, it seems pretty clear to me that there’s no reason to prefer ChatGPT over the original. The real challenge is posed by the second possibility: that ChatGPT could create work equal or superior to an author’s extant body of work.
Let’s grant, for the sake of argument, both that it will eventually be possible for ChatGPT — or successor AIs — to create work that equals or surpasses that of its human models, giving us new “Borges” short stories or “Bach” cantatas, and that we have a clear understanding of what it would mean for any new work to equal or surpass an existing masterpiece. Even granting these points, does it then follow that Cowen is correct in thinking that the rise of ChatGPT will bring with it the fall of human content creators? I still don’t think so.
To see this, consider first the related case of forgeries in the art world. It doesn’t seem impossible, in principle, that forgers could create pieces surpassing the artistry of the Old Masters. Yet when people flock to art museums, they want reassurance that they’re viewing genuine works rather than forgeries.
The broader lesson here is twofold. First, part of what we appreciate in a great work of art is the genius of the *human* who created it. Just as there would be little interest in watching a World Cup of robot soccer, there might be less interest in appreciating a work of art, however technically accomplished, if it were not a human achievement.
Second, artworks are not simply self-contained; they exist within a historical context. Joyce’s Ulysses is embedded in a particular conversation with the classical tradition and with the Irish political context that molded Joyce. Furthermore, much subsequent work by later novelists (Gaddis, Pynchon, Foster Wallace) is a reaction to Ulysses. A new “James Joyce” work would be devoid of this context and, for that reason, much less interesting.
Both of these points speak equally well against the notion that using ChatGPT to create original content, rather than pastiches, would pose a significant danger to human content creators.
If anything, these points apply even more forcefully in the case of nonfiction. I want to be stimulated by the views of Cowen or Cobb or Charen, not “Cowen” or “Cobb” or “Charen”. And even when future iterations of ChatGPT or other AIs fix their current penchant for bullshitting, they will still at best present a consensus view of expert opinion, like a good textbook or encyclopedia. Textbooks and encyclopedias, however, don’t pose a threat to specialist works!
Cowen’s point about the disruptiveness of ChatGPT seems much stronger when we consider ChatGPT as a facilitator of content creation. I agree that those who find ways to take advantage of the AI’s strengths will have an edge in creating new and better content. I also agree that the appropriate response on the part of educators is to familiarize themselves with these new AI tools and to help their students use them in ways that will improve their lives, rather than to attempt to bar students from using AIs.
Those who fear what AIs like ChatGPT mean for writing education make the same mistake, it seems to me, that educators of a previous generation made in fearing the use of calculators by math students.
The point of learning to write is that learning to write is learning to think. It is impossible — at least for most people — to develop a complex, multi-part argument without writing. Not only would it not be possible to keep track of the steps of the argument without writing, but writing is also essential in actually working out the steps of the argument in the first place.
For this reason, it doesn’t matter how well ChatGPT can formulate an argument. Students will still need to formulate arguments themselves, as a way of learning how to think. The analogy to calculators should be obvious. My sixth-grade son’s graphing calculator can do matrix arithmetic, graph equations, and solve integrals, but he will still have to learn to do all of those tasks himself if he wants to achieve a deeper understanding of math.
The considerations speaking to the value of ChatGPT as an interlocutor initially seem similar to those speaking to its strengths as a facilitator of content creation. However, two considerations push me to be less sanguine about the value of ChatGPT as an interlocutor. The first is that much of what I want from interacting with others is human connection, quite apart from any particular intellectual stimulation.
The second consideration is that, even when you consider the intellectual stimulation to be gained from interacting with others, it’s unclear that ChatGPT has an advantage. Here I may be revealing my age as one of Cowen’s “older people who still care what [well-known, established writers] think, as named individuals.” However, if I’m going to choose a non-living interlocutor, why would I choose a large language model, essentially a sophisticated statistical model of the patterns in an astronomically large corpus of text, when I could choose one of the great thinkers of the past?
I love the passage from Chapter VI of W.E.B. DuBois’s The Souls of Black Folk, in which DuBois celebrates the freedom that books give him to commune with the pinnacles of human thought:
I sit with Shakespeare and he winces not. Across the color line I move arm in arm with Balzac and Dumas, where smiling men and welcoming women glide in gilded halls. From out the caves of evening that swing between the strong-limbed earth and the tracery of the stars, I summon Aristotle and Aurelius and what soul I will, and they come all graciously with no scorn nor condescension. So, wed with Truth, I dwell above the Veil.
I’ve been treating Cowen’s observations about the effects of ChatGPT on public intellectuals like himself as normative, assessing whether you ought to spend ever more time with ChatGPT or its successors rather than with the work of human content creators. It’s possible, however, that as a purely predictive claim Cowen is correct, and that people will – simply as a matter of fact – spend more time with AI-produced content.
Perhaps they will. If so, however, then while you’re chatting with an AI, I’ll be whiling away my time “where smiling men and welcoming women glide in gilded halls,” summoning my interlocutors, DuBois himself among them, “from out the caves of evening that swing between the strong-limbed earth and the tracery of the stars.” I’m quite confident about which of us will be engaged in the more worthwhile pursuit.