Open Letter Season: Large Language Models and the Perils of AI

by Fabio Tollon and Ann-Katrien Oimann


Getting a handle on the impacts of Large Language Models (LLMs) such as GPT-4 is difficult. These LLMs have raised a variety of ethical and regulatory concerns: problems of bias in the data set, privacy concerns about the data trawled to create and train the model in the first place, the resources used to train the models, etc. These are well-worn issues, and have been discussed at great length, both by critics of these models and by those who have been developing them.

What makes the task of figuring out the impacts of these systems even more difficult is the hype that surrounds them. It is often difficult to sort fact from fiction, and if we don’t have a good idea of what these systems can and can’t do, then it becomes almost impossible to figure out how to use them responsibly. Importantly, in order to craft proper legislation at both national and international levels we need to be clear about the future harm these systems might cause and ground these harms in the actual potential that these systems have.

In the last few days this discourse has taken an interesting turn. The Future of Life Institute (FLI) published an open letter (which has been signed by thousands of people, including eminent AI researchers) calling for a 6-month moratorium on “Giant AI Experiments”. Specifically, the letter calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. Quite the suggestion, given the rapid progress of these systems.

A few days after the FLI letter, another Open Letter was published, this time by researchers in Belgium (Nathalie A. Smuha, Mieke De Ketelaere, Mark Coeckelbergh, Pierre Dewitte and Yves Poullet). In the Belgian letter, the authors call for greater attention to the risk of emotional manipulation that chatbots, such as GPT-4, present (here they reference the tragic chatbot-incited suicide of a Belgian man). In the letter the authors outline some specific harms these systems bring about, advocate for more educational initiatives (including awareness campaigns to better inform people of the risks), call for a broader public debate, and urge stronger legislative action. Read more »

Akim Reinhardt’s Plagiarized 3QD Essay

by Akim Reinhardt and GPTChatbot

A couple months back, I wrote an open letter to fellow professors. Don’t panic, I said. Yes, new crops of student plagiarism are bound to sprout in the fertile fields of free, online AI software. But it’ll be alright. Here’s how to adjust.

Then it occurred to me: Why should students have all the fun? I can now plagiarize just as easily as they can. So I decided to have the GPT Chatbot write my monthly 3QD column and see what we’d get.

First I established that the chatbot could identify 3QD, and me as one of its contributors. It thinks I’m a writer (yes), journalist (eh), and editor (not really). No mention of me being a historian, a professor, or a man of dubious character. It claims I’ve written for the Washington Post and the Guardian; the truth is I’ve been interviewed by those news outlets, but never written for them.

Then I asked it to write a thousand-word essay. Not counting its disclaimer, the GPT Chatbot pumped out just over half that (590 words). That essay follows. I graded it and have marked it up [my comments are red and in square brackets, like so]. Let’s see how we did.

Prompt: Write a 1000 word essay in the style of 3 Quarks Daily writer Akim Reinhardt

Response: Note: As an AI language model, I can generate text in the style of Akim Reinhardt, but the content and opinions expressed in the essay are purely generated by AI and do not reflect the views of any human writer.

The Uncertain Future of Democracy in the United States

The United States of America has long been regarded as a beacon of democracy in the world, a shining example of a free and open society that values individual rights and liberties [By some; others have long criticized it for not living up to its vaunted values. Also, a punchier lede would help]. However, recent years have seen a troubling erosion of democratic norms and institutions in the country, raising questions about the future of democracy in the US. [Does it represent an absolute descent or a regression to the norm? In retrospect, 1968–2000 may have been a high point for U.S. democracy, preceded and now succeeded by numerous attempts, both legal and corrupt, to limit the franchise and thwart the will of the majority.] Read more »