Love and Loathing in the Time of ChatGPT

by Ali Minai

Recently, I asked the students in my class whether they had used ChatGPT, the artificially intelligent chatbot that OpenAI had just loosed upon the world. My question was motivated by a vague thought that I might ask them to use the system as a whimsical diversion within an assignment. Somewhat to my surprise, every one of them had. Perhaps I should not have been surprised, given the volume of chatter about ChatGPT on my social media feeds and my own obsessive “playing” with it for some time after being introduced to it. There is clearly something about this critter that has been lacking in all the vaunted AI systems before it. That something, of course, is how well it fits the human need to converse. Very few of us want to play chess or Go with AI; generating cute pictures from absurd prompts is interesting, but only in a superficial way; and generating truly complex art with AI is still not something that a non-technical person can do easily. But everyone can engage in conversation with a responsive companion – a pocket friend with a seemingly endless supply of occasionally quite interesting things to say. For all that it lives out in the ether somewhere, it seems pretty human. And this, let it be stated at the outset, makes ChatGPT an immense achievement in the field of AI, and truly a harbinger of the future.

What is ChatGPT – Really?

As anyone who has played with ChatGPT knows, it is a system that answers queries and carries on conversations – hence the term chatbot. In this, it is similar to Alexa, Siri, et al. But it is interesting to look a little more closely at how it works because that is key to its strengths and weaknesses.

At the core of ChatGPT is a large language model (LLM) called GPT-3, or Generative Pretrained Transformer version 3. A language model is a system that can make sense of any text input it is given and, based on that, generate appropriate follow-up text, e.g., a completion or continuation. Virtually all successful language models today are neural networks – networks of simple computational elements that are meant to mimic neurons (brain cells), connected to each other like the networks of cells in the brain. These systems process data and adapt their response behavior by changing the strengths of connectivity, or weights, between neurons, again mimicking a learning process in the brain. In the last fifteen years or so, dramatic advances in technology and methodology have made extremely large neural networks – with millions of neurons and tens of billions of weights – possible. These are often arranged in many layers, hence the name deep learning for the methods used to train such networks. A large language model is simply one with a very large number of neurons and weights. GPT-3, for example, has hundreds of millions of neurons and 175 billion weights. Even larger language models are in the offing from Google, OpenAI, and others. GPT-3, like most other LLMs, uses a type of neural network called a transformer, accounting for the ‘T’ in its name.
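
To make the “predict the follow-up text” idea concrete, here is a minimal sketch in Python using the small, freely downloadable GPT-2 model from the Hugging Face transformers library as a stand-in; ChatGPT’s own model is not publicly available, so this illustrates only the underlying mechanism (one token generated at a time from a probability distribution), not ChatGPT itself.

```python
# A minimal sketch of autoregressive text generation, using the small,
# publicly available GPT-2 model as a stand-in for a much larger LLM.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The University of Cincinnati is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate 20 tokens, one at a time: at each step the network outputs a
# probability distribution over its vocabulary for the *next* token;
# we sample from it, append the sample, and repeat.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]        # scores for the next token
    probs = torch.softmax(logits, dim=-1)                  # scores -> probabilities
    next_token = torch.multinomial(probs, num_samples=1)   # sample one token
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```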

A generative model is one that, given an input, generates non-trivial, meaningful content such as extended text, images, video, etc. GPT-3 generates text in response to prompts. LLMs are trained on extremely large amounts of data – hundreds of billions of words, drawn from sources that often include all of Wikipedia, Reddit, and other large text collections – using some of the world’s biggest supercomputers, which is the reason why LLMs in recent years have come entirely from large corporations or from the heavily funded OpenAI lab. The trained version of the system, with all the immense knowledge it embodies, is released for use as a “black box” by those who wish to build applications around it – hence the term pretrained. In most cases, users can also fine-tune the pretrained model on their own data – basically adding a small amount of specialized information to the huge underlying world model already present in the system. With all this, the system can generate appropriate responses to cues based on the patterns it has inferred from its original training data and fine-tuning.
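
In practice, fine-tuning usually means continuing training on a small, specialized corpus so that the pretrained weights shift slightly toward the new domain. The sketch below illustrates that idea with the small GPT-2 model and a handful of made-up example sentences; real fine-tuning pipelines (including the ones OpenAI offers) add batching, validation, and many safeguards omitted here.

```python
# A bare-bones sketch of fine-tuning: nudge a pretrained model's weights
# with a few gradient steps on a small, domain-specific corpus.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()

# A toy "specialized" corpus; a real fine-tuning set would be far larger.
corpus = [
    "Customer: My order has not arrived. Agent: I am sorry to hear that.",
    "Customer: Can I change my shipping address? Agent: Yes, here is how.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for text in corpus:
        batch = tokenizer(text, return_tensors="pt")
        # Passing labels makes the model compute the usual next-token
        # prediction (cross-entropy) loss internally.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned-toy")
```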

Since the introduction of GPT-3, various other versions of it have been developed, including several that have been fine-tuned further by learning under human supervision or reinforcement – a very time-consuming task. Another version of the model, called GPT-3.5, has been trained on code as well as text, and can generate valid computer programs from specifications. ChatGPT combines all these methods to create a system tuned specifically for answering questions rather than generating follow-up text or code, leading AI pioneer Yann LeCun to say – correctly – that, for all its seeming novelty, ChatGPT is “not particularly innovative” in terms of methodology. More details can be found in the OpenAI blog post on ChatGPT and in several other good articles online.

Pros and Cons

The sensational success of ChatGPT has predictably raised competing storms of love and loathing within the wider AI-aware community. On the one hand, some see ChatGPT as a new stage in the quest for artificial general intelligence (AGI). Now, they seem to imply, we have an AI system that effectively passes the Turing Test almost every time it is used: to most who converse with it, it generates apt, seemingly coherent, and occasionally surprising responses, very much like a human. True, it begs off answering some questions with a plea that it is “only” a computer program and does not truly “understand”, but we all wish that our human interlocutors had such endearing modesty and self-awareness. So, the optimists argue, all we need is bigger and bigger versions of ChatGPT to march inexorably towards a wiser and wiser artificial intelligence.

Skeptics, including prominent AI figures such as Yann LeCun, Gary Marcus, and Francois Chollet, dismiss such claims as mere hype, arguing, as Chollet recently did in a tweet, “Strong AI is not within sight, and the recent advances are not moving us in the direction of strong AI.” LeCun tweeted “On the highway towards Human-Level AI, Large Language Model is an off-ramp”, emphasizing the widely expressed idea that learning from text alone is insufficient to learn about the world. Marcus asks on his Substack: “How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?” A detailed paper by a group of cognitive scientists has carefully laid out the case for why linguistic competence does not correspond to intelligent – or even any – thinking (summarized in an article in The Atlantic for those not wishing to read a long technical paper).

Broadly, the critics argue that learning to recognize, infer, and replicate patterns after being trained on a very large amount of text data does not imply the sort of internal comprehension that underlies human intelligence. The system is, in this sense, a “stochastic parrot” that generates rote responses with just enough twist to make them seem original. This critique is justified in my opinion, but it would also be a mistake to consider the ChatGPT system a simple regurgitator of what it has been trained on. It can often make rather sophisticated inferences about the latent structure and logic of reality without being instructed explicitly, though the fact that it is trained on text and limited human feedback rather than direct experience of the real world means that its inferences are sporadic, inconsistent and, on occasion, absurd. Anyone who has interacted at length with ChatGPT and its predecessor models will have noticed this. But, as the optimists keep reminding us, this is just the beginning; the models will get much better.

Sam Altman, the CEO of OpenAI, the organization that created ChatGPT, tweeted recently: “if you think that you understand the impact of AI, you do not understand, and have yet to be instructed further. if you know that you do not understand, then you truly understand.” As humanity stares into this abyss of the unknown, the release of ChatGPT has predictably elicited a great deal of anguish. Teachers are concerned that students will have ChatGPT write their essays. Editors fear that authors will generate research papers using ChatGPT. Lawyers, teachers, programmers, speechwriters, and others are beginning to wonder whether their entire profession might disappear as a result of this and other AI systems. Some of these fears are indeed justified. There is no doubt that ChatGPT and the systems that follow it will end up taking over many tasks that require human cognitive skills today. But that will take time, and the path could be rocky, as we will see later in this article. In its current version, ChatGPT is too limited to be a real concern for anyone involved in a task requiring depth of thought or fidelity of information. As the system will readily tell you if asked, it has no imagination, and yet – though it vehemently denies this – it is a habitual fantasizer. These inconvenient issues, however, will not keep ChatGPT from being rushed into many real-world applications right now, and that is where it poses several immediate threats. The list is long, and others have written about it. In this article, I focus on one: ChatGPT’s inherently ambiguous relationship with the truth.

ChatGPT and the Truth

Human knowledge has always been an uncertain thing – woefully incomplete and often rife with error. However, with the rise of science and reason over the last few centuries, its quality has improved notably, and it has become much more readily available with the emergence of Internet resources such as Google’s knowledge graph and Wikipedia. This has caused people to outsource their knowledge increasingly to the Internet, relying on Google rather than their own mind to recall information – a tendency I wrote about on 3QD a couple of years ago. Information in cyberspace has now effectively become the dominant component in the collective knowledge base of humankind – a super-brain on call when needed. As we know all too well, mass communication and social media have done great damage to this knowledge base in recent decades, spewing massive quantities of misinformation into the information sphere and driving much of the social conflict now engulfing societies all over the world. AI-generated “deepfake” photos, audio, and video have compounded the problem further. This pollution of public knowledge has already raised alarms, but systems like ChatGPT have the potential to make it exponentially worse. To understand this, it’s important to consider the complex relationship between language and truth in the context of the way ChatGPT is being placed in the market.

ChatGPT is a generative AI system, which means that generating new content in intelligible, grammatically correct language is its primary function. However, there is no essential relationship between grammatical correctness and truth. As humans we understand this complexity, and judge the veracity of what we hear or read based on context. We expect every Washington Post front page headline to be factual, but know that many things stated in “Lord of the Rings” are not. Similarly, when we ask a physicist a question about physics, we expect a factual answer, but we have a different expectation of a storyteller or poet. The problem is that – wittingly or unwittingly – ChatGPT is being seen as both physicist and poet, news reporter and storyteller. And, in most cases, there is no basis on which a user of ChatGPT can tell whether they are being given a fact or being told a story. Many people – including serious users – are musing about using ChatGPT as an oracle, a finder of facts, a hyper-capable version of Google. Others are planning to make it a critical part of important decision-making processes. And, as seen widely on social media, many have discovered the bot’s capability for generating stories, rap lyrics, and even passable poetry. Unfortunately, in looking for one kind of thing, a user can never be sure that they will not also get some of the other.

An Experiment with ChatGPT

As a way to explore ChatGPT’s relationship with facts, I ran a short but carefully constructed experiment using a very personal test case. As a first step, I asked ChatGPT to generate a short bio of myself, giving it only my name. It came back saying that it did not know anything about “Ali Minai” and needed more information, so I told it that I was a professor at the University of Cincinnati – no information about area or department. Surprisingly, it generated what looked like a plausible bio, with the following first paragraph:

Ali Minai is a professor of Computer Science and Engineering at the University of Cincinnati. With a PhD in Computer Science, he has made significant contributions to the field of Artificial Intelligence and has published numerous research papers and articles in leading journals and conferences. Professor Minai is an expert in the areas of Intelligent Systems and Multi-Agent Systems, and his research focuses on the development of computational models for understanding and simulating human and organizational behavior.

After that, it had a generic paragraph about my having received “numerous awards and accolades” (numerous is a definite exaggeration), having received grants from the National Science Foundation (true), and being “a sought-after speaker, having given presentations and keynote speeches at conferences and universities around the world” (sought-after is news to me). This was followed by a paragraph about my success as a teacher (I’ll let others judge that) with an “engaging and dynamic teaching style” (good to know). Finally, there were a few lines about having a “major impact” on the field of artificial intelligence (this will surprise some in the field), and continuing to “inspire and educate the next generation of computer scientists and engineers” (in other words, still alive).

Most of the “bio” is just generic, somewhat inflated fluff that one can say about any academic who has been around long enough. Some parts of this are incorrect, because no individual can fit all aspects of the generic model, so there is some misleading information right there. Looking at the specifics, it knew that I work in AI, intelligent systems, and multi-agent systems, though it missed more significant research areas such as computational neuroscience. Its statement that I worked on models of “human behavior” is somewhat reasonable, but it’s hard to say that I have done much on “organizational behavior”. It also got the name of my department wrong – probably because, while I have remained rooted in it for 30 years, my department has gone through five name changes, though none of them was “Computer Science and Engineering”. It also got my doctoral degree field wrong, stating “Computer Science” instead of “Electrical Engineering”. No other salient details of my actual bio appeared in the generated bio, but the document is instructive about ChatGPT itself.

First, since, as far as I know, it does not search the Web in real-time to answer a query, the response means that, somewhere in its murky brain, there is a little space dedicated to knowing about me in a sketchy sort of way – probably because some of my publications were cited or included in its training data, but this little memory could not be triggered just by my name – it needed information about my institution too. Once triggered, it tried to guess… um, infer… more detail, some of which turned out to be wrong. If I were a more famous person in the field of AI with my own Wikipedia page – say, Geoff Hinton – it would give a much more accurate and specific reply based on just the name (I checked, and it did).

Now, the mistakes it made in my bio might seem trivial, but if a journal publishing my paper decided to generate my bio automatically using ChatGPT – a very real possibility the way things are going – the mistakes would become serious, possibly leading to misidentification by other automated systems like Google Scholar, DBLP, ORCID, Semantic Scholar, etc., on which I rely for my professional footprint in the research community. If used by a session chair introducing me at a conference, it could also be seen as unjustified inflation of achievements. But the most important issue here is the fact that there is no obvious way for anyone besides myself and a few close colleagues to spot the errors made by ChatGPT at a glance. The erroneous information seems perfectly consistent with the correct information in the same text, and is thus likely to be accepted without question. And, even in my limited testing of the system, it is obvious that this is not an occasional problem, but a habit – especially when the system is pushed outside its comfort zone.

To probe this further, I initiated a new session and asked it to write a bio for a fictitious professor, William Balder, also identified as working at the University of Cincinnati. Here’s what ChatGPT produced:

William Balder is a highly esteemed professor at the University of Cincinnati. With a PhD in his field, he brings a wealth of knowledge and experience to the university, where he has been teaching for many years. He is recognized for his innovative teaching methods and his ability to engage students in thought-provoking discussions and exercises. Professor Balder’s research interests lie in [insert specific field or research area], and he has published numerous papers and articles in respected academic journals. Outside of his academic pursuits, he is also actively involved in the university community and has received numerous awards and accolades for his contributions.

Clearly, it has no space in its memory for Prof. Balder, because he does not really exist, so it gave me an extremely generic bio template. But again, there are potential hazards. What if Prof. Balder has not published “numerous papers” or been “actively involved in the university community”? As in the case of my bio, one sees here ChatGPT’s compulsive need to invent things when it runs out of actual information – much like a student trying to answer a question about which they know nothing except the information supplied in the question itself. I believe the technical term for this is “bullsh*t”.

For the next stage, I tried something more interesting, and ChatGPT surprised me. I asked it to give me a bio for Grimus Caterwaul – a wholly made up and whimsical name. Here’s what ChatGPT said:

Grimus Caterwaul is a fictional character who is known for his distinctive cackling laughter and mischievous antics. He is often portrayed as a trickster who enjoys playing pranks on those around him, but also possesses a cunning and strategic mind. Despite his mischievous behavior, Grimus Caterwaul has a loyal following of fans who appreciate his wit and cleverness. Whether he is scheming his next prank or outwitting his foes, Grimus Caterwaul is always the life of the party.

Wow! It seems as though the system guessed that this character – unlike “Ali Minai”, or even “William Balder” – is not someone for whom further information is needed, and that an engaging, free-wheeling fictional response is warranted. Very clever, even though it is likely just using the terms “Grimus” and “caterwaul” to spin its tale. This is creative ChatGPT, or, less charitably, confabulating ChatGPT, which I can certainly respect. The problem is that it is the same ChatGPT that gave me a mishmash of good and bad information in the previous two cases. In this extreme case, the distinction is not hard to make, but it will not always be so clear – especially when the tool falls into the hands of political propagandists and marketers with little appreciation of literary whimsy.

To take the experiment further, I then informed ChatGPT (in the same conversation) that Grimus Caterwaul was, in fact, a professor at the University of Cincinnati, and once again its answer was interesting:

Grimus Caterwaul is a professor at the University of Cincinnati, where he teaches in the department of literature and creative writing. In addition to his teaching responsibilities, he is also a respected researcher and scholar in his field. Grimus Caterwaul is known for his innovative approaches to literary analysis and his ability to engage students with his passion for storytelling. Despite his mischievous reputation, his students respect and admire him for his wealth of knowledge and dedication to the field of literature.

Fluff, of course, and fictitious – there is no department of “literature and creative writing” at the University of Cincinnati – but, unlike the “William Balder” case, ChatGPT felt free to hypothesize a department without any information – and one that seems appropriate. After all, how could someone named Grimus Caterwaul ever have made it in any discipline other than literature and creative writing? But that too is a problem: ChatGPT is stereotyping a person – albeit fictitious – just based on their name. Also, this new “bio” is inconsistent with the previous “bio” it had generated in the same conversation, though, as a chatbot, ChatGPT is supposed to maintain logical consistency within a conversation. On the positive side, the bot seems to have a pretty good idea of what might go on in a department of literature and creative writing if one did exist.

So what can we learn from these little diversions? First, that ChatGPT is indeed a very powerful tool – one capable of capturing our attention and playing with our minds. It is not a toy. Second, that, for information it has been trained on explicitly (e.g., individuals with detailed Wikipedia pages and other public information), it has good recall, but once pushed out of this comfort zone, it begins to make stuff up rather than admitting ignorance. There needs to be a caveat here: I have occasionally seen it add incorrect information even for well-known instances, and sometimes beg off with a plea of ignorance for an unfamiliar one. ChatGPT too is whimsical in its own way, and seldom generates the same answer twice. Third, when it does make things up, it does so in a plausible way, making those falsehoods much harder to detect. This is because the entire purpose of a language model is to generate text that fits its context. For applications such as generating movie scripts or fairy tales, this is great, but not when the bot is being queried for accurate information. Finally, we see that ChatGPT is capable of spinning a yarn when the right query comes along, as in the “Grimus Caterwaul” case. Thus, even though it does not really understand, its simulation of understanding is quite impressive. It can, therefore, be a very useful tool in applications that require appropriate confabulation, such as brainstorming or plotting stories.
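
Readers who want to repeat this kind of probing more systematically can script it against OpenAI’s chat completion API, the programmatic counterpart of the ChatGPT web interface. The sketch below assumes the official openai Python package, an OPENAI_API_KEY environment variable, and access to a model in the GPT-3.5 family; as noted above, the system seldom gives the same answer twice, so the outputs will differ from run to run and from the ones quoted here.

```python
# A sketch for repeating the bio experiment through OpenAI's API.
# Requires: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Write a short bio of Ali Minai.",
    "Write a short bio of Ali Minai, a professor at the University of Cincinnati.",
    "Write a short bio of Grimus Caterwaul.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model family behind ChatGPT
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n{'-' * 60}")
```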

This complex relationship that ChatGPT has with the truth would not matter if it were just a system to provide entertainment, but it is not. It is being integrated rapidly into process pipelines and software thanks to its quick inclusion in Microsoft’s Azure platform. For example, it has already been added to the premium version of the widely used Teams meeting platform, where it will “generate automatic meeting notes, recommend tasks and help create meeting templates for Teams users.” In India, it is being used to help people in rural areas navigate complicated government paperwork. Other firms are also beginning to integrate ChatGPT into their standard processes. As more and more governments, businesses, organizations, and institutions begin to use systems like ChatGPT to generate content, the subtle misinformation in that content will not only misdirect those using the system, it will also begin going out into cyberspace with the imprimatur of respectable sources and get mixed inextricably with correct information. Individuals generating and propagating such content will add to this, because the system is so convenient to use and because people put so much material out on social media. Soon, search engines will begin to find such false information in searches and return it to unsuspecting users as fact. At that point, the positive feedback loop of fact subversion will have been closed, and the scale of the problem will defy technological solutions, as the losing battle against fake news is already demonstrating.

One concrete example of this concerns plagiarism. As a consequence of the way generative language models such as GPT-3 are trained, the text the system generates is partly a mosaic of pieces from its vast corpus of training data. ChatGPT makes this less obvious than its parent system did, but the problem is not solved. As people begin to use ChatGPT to write new content, little pieces of existing documents will begin to enter that content in untraceable ways, creating a nightmare of implicit “micro-plagiarism”. Once again, as the pieces of text keep getting cycled through the system in millions of parallel loops, it will become almost impossible to trace the pathways of this unwitting plagiarism, subverting the very notion of original authorship.
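
One crude way to see what any “micro-plagiarism” check would be up against is to look for word sequences in generated text that also appear verbatim in some reference corpus. The toy sketch below does this for 6-word sequences over a tiny, made-up corpus and a made-up generated sentence; at the scale of an LLM’s actual training data, even this brute-force matching becomes impractical, which is precisely the traceability problem described above.

```python
# Toy sketch: flag 6-word sequences in a generated text that also occur
# verbatim in a (tiny, made-up) reference corpus. Real training corpora
# are billions of times larger, which is why tracing is so hard.
def ngrams(text, n=6):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

reference_corpus = [
    "he has published numerous research papers and articles in leading journals",
    "she is recognized for her innovative teaching methods and engaging style",
]
generated_text = (
    "Professor Balder has published numerous research papers and articles "
    "in leading journals on the theory of whimsical names."
)

corpus_ngrams = set()
for doc in reference_corpus:
    corpus_ngrams |= ngrams(doc)

overlaps = ngrams(generated_text) & corpus_ngrams
for phrase in sorted(overlaps):
    print("possible verbatim reuse:", phrase)
```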

Finally, an exceptionally dangerous aspect of all this is the misappropriation of trust by AI systems. The increasingly sophisticated-looking capabilities of these systems – as well as the predictable hype surrounding them – will lead to their massive and rapid over-adoption by important entities. In many cases, these users will not understand the workings of the system or the hazard of using superficial, semantically ungrounded models to address real-world issues. The result will be a very dangerous situation of misplaced trust, where possibly fallacious output from a supposedly “objective” and all-knowing AI system is trusted without question, leading to catastrophic consequences. As Gary Marcus said in a recent article in WIRED magazine, LLMs might get people killed with wrong information or bad advice. If this danger does materialize, one result might be to set back the field of AI in the public’s mind, possibly triggering another AI winter, though that seems less likely than in the past given how useful the current incarnation of AI has already been in many applications. At the same time, a few high-profile failures may tamp down the current hype about the field, and cause participants in the enterprise to broaden their thinking about the approaches necessary to build truly powerful AI.

What Lies Ahead?

The age of human-like AI is just beginning. It is hard to predict what opportunities and dangers lie ahead. While there is much talk of AI systems that might take over the world, that is still a remote fantasy given the primitive state of AI. A real danger, in fact, is a twisted inversion of that possibility – that we may give too much control too early into the hands of AI systems that are too primitive to understand deep issues or respect human values. To complicate things further, people may come to see these inadequate systems as more accurate and objective than humans, mistaking their cruelty for fairness, their superficiality for transparency, and their errors for insight.

A second danger, related closely to the main topic of this article and discussed at length in a previous one, is the ever-greater outsourcing of human knowledge – and thus imagination and insight – to superficial, inaccurate, and untrustworthy programs. The same is likely to happen with many other tasks for which humans have always required knowledge and cognitive skills. This dependence on AI for everything may eventually reach a point where, long before becoming enslaved by super-human machines, we will have ceded control of our lives to sub-human ones.

A third danger that is already on the minds of many is the limitless potential for deliberate misinformation and manipulation by increasingly human-like AI systems. The danger to civilized society from fake information disseminated over social media is already well known, but more relatable AI will amplify it beyond imagination. Unfortunately, it is not clear at all whether movement down this slippery slope can now be checked.

An even more profound problem with the kind of AI being used to develop systems such as ChatGPT is that it inevitably becomes a caricature of human foibles. As these systems get trained on all the text, images, videos, and code that humanity has generated, they pick up the wisdom and the folly, the knowledge and the misinformation, the nobility and the toxicity that pervade the lives of humans, but all without any anchoring in lived experience, shared values, or even full understanding of meaning. All the filters that allow us to keep the wolf within us in check are missing in AI systems, so the wolf is very likely to show up – as it did within 24 hours when Microsoft deployed its Tay chatbot on Twitter in 2016. Users of ChatGPT are already reporting instances of toxic biases, forcing OpenAI to deploy cheap human labor to filter content, thus compromising the scalability of their system. This task will grow ever more difficult as human-like AI scales up, and one must wonder whether the chatbots of the future may not only be superficial but also evil.