Open Letter Season: Large Language Models and the Perils of AI

by Fabio Tollon and Ann-Katrien Oimann

DALL·E 2 generated image

Getting a handle on the impacts of Large Language Models (LLMs) such as GPT-4 is difficult. These LLMs have raised a variety of ethical and regulatory concerns: bias in the training data, privacy concerns about the data trawled to create and train the models in the first place, the resources consumed in training, and so on. These are well-worn issues, and they have been discussed at great length, both by critics of these models and by those developing them.

What makes the task of figuring out the impacts of these systems even more difficult is the hype that surrounds them. It is often difficult to sort fact from fiction, and if we don’t have a good idea of what these systems can and can’t do, then it becomes almost impossible to figure out how to use them responsibly. Importantly, in order to craft proper legislation at both national and international levels, we need to be clear about the future harms these systems might cause and ground those harms in the systems’ actual potential.

In the last few days this discourse has taken an interesting turn. The Future of Life Institute (FLI) published an open letter (which has been signed by thousands of people, including eminent AI researchers) calling for a 6-month moratorium on “Giant AI Experiments”. Specifically, the letter calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. Quite the suggestion, given the rapid progress of these systems.

A few days after the FLI letter, another open letter was published, this time by researchers in Belgium (Nathalie A. Smuha, Mieke De Ketelaere, Mark Coeckelbergh, Pierre Dewitte and Yves Poullet). In the Belgian letter, the authors call for greater attention to the risk of emotional manipulation that chatbots such as GPT-4 present (here they reference the tragic suicide of a Belgian man, incited by a chatbot). The authors outline some specific harms these systems bring about and advocate for more educational initiatives (including awareness campaigns to better inform people of the risks), a broader public debate, and urgent, stronger legislative action.

The main point of both letters is that AI systems pose significant risks. However, the ways in which these risks are presented could not be more different. The FLI letter indulges in unwarranted speculation and glosses over already existing governance mechanisms for reducing AI-caused harm. The Belgian letter, by contrast, is far more nuanced and grounds AI risk in far more tractable terms.

Let’s take a look at what these risks are, according to the FLI:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The questions they raise are no-brainers: Should we let machines flood our information channels with propaganda and untruth? No. Should we automate away all the jobs, including the fulfilling ones? No. Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? No. Should we risk loss of control of our civilization? No.

These are easy questions to answer, and reflecting on them does not necessarily get us any closer to addressing the impacts that these systems have. Moreover, the provocative nature of the claims means that we lose a degree of nuance in teasing out how currently existing technologies might get us to these improbable future scenarios, and what we might be able to do to avoid them.

In what follows we will go through two of these risks, namely, misinformation and labour market disruption. The reason for choosing these two is that we think they are the most plausible risks, at least in theory.

Essentially, our argument is that the FLI letter, through its hyperbolic claims, would encourage a greater centralization of power. In the end we reflect on what the two letters suggest we do in light of these risks: here again, we see the Belgian letter contextualizing the problem within existing regulatory mechanisms, while the FLI letter strips away important context.

Let us turn to disinformation: “should we let machines flood our information channels with propaganda and untruth” seems to assume that the creation and distribution of disinformation go together. However, creating disinformation is different from spreading it, and it is the distribution that is the really difficult part. LLMs have been around for a while, and we have yet to see a case where these systems are used to spread disinformation.

In contrast, the real risk here is our overreliance on these systems: LLMs are not created to generate truth, yet humans still rely on them in cases where factual accuracy matters. And this is where the Belgian letter comes out on top: its authors go through the psychological mechanisms at play, and how and why people might come to trust these systems. They state:

“Most users realize rationally that the bot they are chatting with is not a person and has no real understanding, but that it is merely an algorithm that predicts the most plausible combination of words based on sophisticated data analysis. It is, however, in our human nature to react emotionally to realistic interactions, even without wanting it.”

This gets at what is really worrying in the case of disinformation: our propensity to trust these systems inappropriately. We often attribute human characteristics to objects, and our brains seem to have difficulty grasping that they are not dealing with a human being.

The creators of (social and interactive) AI systems must be careful with respect to our tendency to ascribe human characteristics to non-human entities. Interacting with such AI can offer benefits (for example, helping children with autism spectrum disorder improve their social skills) but can also be dangerous, as the tragic case in Belgium demonstrates. In this sense, the call for careful research regarding the effects of human-AI interaction is important, as is educating citizens to become aware of such tendencies. For example, there are calls for chatbots to be prohibited from using emojis because of the manipulative effects this may have.

What about the labour market? Here there are genuine concerns that people will lose their livelihood due to AI-systems. This concern might be justified, and we ought to be very careful about when and how we go about introducing these systems. However, the letter glosses over what has been arguably the greatest impact of these systems to date: the further centralization of power in the hands of a small tech elite.

While new technologies have always stoked worries about job disruption, and there might indeed be a very real threat from LLMs, it seems hyperbolic to suggest that these systems will “automate away all the jobs, including the fulfilling ones.” More likely, there will be a ‘shift’ in how certain jobs are performed, but this does not necessarily mean that all jobs will disappear. For example, it is indeed likely that the job of a lawyer or a journalist will drastically change, but that is not necessarily a bad thing (these occupations have changed over time in any case).

What is also worrisome is that the letter talks about “fulfilling jobs” that will disappear without giving an example, while completely leaving out the fact that many DDD (dull, dirty, and dangerous) jobs are also disappearing, and would continue to. Moreover, OpenAI was found to be using data labelers in Kenya and paying them less than $2 per hour. Such outsourcing further centralizes the power that these companies have, as they can extract valuable labour for next to nothing.

From here, we can go into what suggestions both letters make with respect to these AI-risks that they have identified.

The FLI suggests that:

“AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

These policy suggestions seem quite reasonable. Yes, we need regulation, liability, and robust public funding. However, the framing in the letter seems to suggest that people have not been thinking (and warning!) about these issues for years already. For example, there is the now-famous “Stochastic Parrots” paper from 2021, co-authored by Margaret Mitchell and Timnit Gebru, which warns against precisely what companies such as OpenAI have been engaging in from the beginning: always aiming for larger and larger AI models. So there are already people, such as Mitchell and Gebru, who have thought about these issues. Crucially, in that paper the authors explicitly state that their concern is not ever more powerful AI tools, but rather the concentration of power, the reproduction of systems of oppression, and damage to both our information and natural ecosystems.

Moreover, the letter makes no reference to governance proposals already on the table, such as the EU AI Act. There is even an entire Oxford Handbook of AI Governance. By stripping this context from the debates surrounding AI and safety, the letter plays into the hype surrounding AI, which is incredibly harmful and, again, distracts us from the impact these systems are having on our societies right now. The FLI letter fosters unhelpful alarm around hypothetical dangers. Instead, we should focus on actual, real-world concerns, such as bias and misinformation.

In comparison, the Belgian letter contextualizes the current discussion surrounding AI governance and urges that existing proposals for future legislation be strengthened further:

“We can only hope that both member states and parliamentarians will strengthen the AI Act’s text during the ongoing negotiations and provide better protection against AI’s risks. A strong legislative framework will not stand in the way of innovation but can actually encourage AI developers to innovate within a framework of values that we cherish in democratic societies.”

Moreover, the researchers highlight the need for awareness campaigns to educate the general public about the risks of these systems, which would aid more democratic decision-making with respect to how these systems are developed, deployed, and used.

The Belgian letter, to our minds, does a better job of articulating this concern, and gets at what is worth worrying about with respect to these systems.

Here, therefore, we echo the recommendations of Professor Emily Bender:

“Don’t waste your time on the fantasies of the techbros saying ‘Oh noes, we’re building something TOO powerful.’ Listen instead to those who are studying how corporations (and governments) are using technology (and the narratives of ‘AI’) to concentrate and wield power.”