The Large Language Turn: LLMs As A Philosophical Tool

by Jochen Szangolies

The schematic architecture of OpenAI’s GPT models. Image credit: Marxav, CC0, via Wikimedia Commons

There is a widespread feeling that the introduction of the transformer, the technology at the heart of Large Language Models (LLMs) like OpenAI's various GPT models, Meta's LLaMA, or Google's Gemini, will have a revolutionary impact on our lives not seen since the introduction of the World Wide Web. Transformers may change the way we work (and the kind of work we do), create, and even interact with one another, with each of these prospects attended by visions ranging from the utopian to the apocalyptic.

On the one hand, we might soon outsource large swaths of boring, routine work: summarizing long, dry technical documents, writing and checking boilerplate code. On the other, we might find ourselves out of a job altogether, particularly if that job mainly consists in text production. Image-creation engines allow the instantaneous production of increasingly high-quality illustrations from a simple description, but they plagiarize and threaten the livelihood of artists, designers, and illustrators. Routine interpersonal tasks, such as making appointments or booking travel, might be delegated to virtual assistants, while human interaction gets lost in a mire of unhelpful service chatbots, fake online accounts, and manufactured stories and images.

But beyond their social impact, LLMs also represent a unique development that makes them highly interesting from a philosophical point of view: for the first time, we have a technology capable of reproducing many feats usually linked to human mental capacities, such as text production at near-human level, the creation of images or pieces of music, and even, to a certain extent, logical and mathematical reasoning. However, so far, LLMs have mainly served as objects of philosophical inquiry, most notably along the lines of 'Are they sentient?' (I don't think so) and 'Will they kill us all?'. Here, I want to explore whether, besides being the object of philosophical questions, they might also be able to supply, or at least suggest, some answers: whether philosophers could use LLMs to elucidate their own field of study.

LLMs are, in many of their uses, what the airplane is to the bird: the plane achieves the same end, flight, but by different means. Hence, it provides a testbed for certain assumptions about flight, perhaps bearing them out, perhaps refuting them by example.