AI Part 2: Fakery, Friction, and Flaws

by Claire Chambers

My previous 3QD column, ‘AI Part 1: What the Story-Writing Machines Are Doing to Us’, was an edited version of a talk on AI I gave at an interdisciplinary webinar for the publisher Taylor & Francis. At this event I came across as a Luddite, because the STEM colleagues attending from around the globe were all singing from the same evangelical hymn sheet about the new technologies.

Their excitement is understandable, given that AI is accelerating discovery, automating scientific drudge work, and expanding the scale of research questions. I felt out of step even though I am far from being a strident techno-pessimist and, as I’ll show, know my way round LLMs. Yet the picture in the arts and humanities is a lot darker than the rosy glow in which these technologies tend to be bathed for scientists.

I am sure many students, using fine-grained prompts and taking a long time over their essay-based assessments, are achieving higher marks with almost as little originality as the two Treasure Island AI plagiarists I discussed in my previous column. Universities are scrambling to determine how to respond to AI use in assignments. If LLMs are sometimes levelling the playing field, giving non-native speakers greater access to arts and humanities institutions, they are also having a more pernicious kind of levelling effect. This one doesn’t just impact the admissions process but attainment and feelings of accomplishment further down the line. People might be getting in, but they’re not getting on. Exams are coming back, and viva voce defences at the end of a doctoral degree would be well-nigh impossible for ChatGPT, although never say never.

At one end of the grading spectrum, machine learning makes failure less likely, mitigating struggling candidates’ despair. At the other end, though, the secret knowledge that AI was heavily leant on to write arts and humanities papers must dull a student’s thrill (as well as their neural activity) at receiving a high mark. Cheaters don’t get the dopamine hit of cracking a difficult problem on their own. And their activities are unfair to classmates who are following academic integrity rules but struggling with grades. No wonder some Gen Z students are exhibiting flat affect. It was predictable, too, that grades would start bunching into a narrow band. Even the most brilliant writing is now sometimes viewed askance by overworked lecturers primed to see AI in every sentence, so it may not receive the stellar percentage such work would once have garnered.

Information overload, social media reels, and over-reliance on AI are damaging attention spans as well as emotions, accelerating a growing problem with concentration. Entering the kind of ‘flow state’ of being ‘in the zone’ or ‘locked in’ that young people crave becomes harder still. AI streamlines tasks like drafting, recollection, and synthesis that once required sustained mental effort. Such reliance encourages a habit of cognitive offloading in which persistence and slow reasoning are displaced by speed and convenience. Over time, this shift risks diminishing the deep attention, imaginative independence, and tolerance for complexity (and for boredom) on which learning and critical thought depend. Diamonds are produced under pressure, so it’s unsurprising to hear calls for a return to friction.

In higher education, we need a vision that doesn’t just detect and adjudicate AI use. It should first identify and then champion what knowledge, skills, and creativity entail and what we most value in them. As a literary specialist, I think of Julia Kristeva’s notion of intertextuality or Mohsin Hamid’s ideas about co-creation. Adapting T. S. Eliot, Hari Kunzru says: ‘writing borrows, but good writing steals’. Literary theory has long shown that there’s nothing new under the sun and that the very idea of originality is suspect. Poststructuralist deconstruction, for example, claims that texts speak in and through other texts, and that writing has always been a multiplayer game. Authorship is collaborative, involving a tangled web of writers, the literary forebears who influenced them, copyeditors, production editors, and marketers. In a chapter from their forthcoming book Global Literature and the Digital, Tegan Schetrumpf, Aleks Wansbrough, and Om Prakash Dwivedi explore how AI is transforming, and will continue to reshape, global literature. Seen from this perspective, AI is just a new node in a vast and longstanding intertextual network. The 180-degree turns from some theorists have been dizzying: in the 1990s these arcane thinkers lambasted grand narratives; now they clutch their pearls about AI producing alternative facts and scraping their work.

On this issue of originality, in my teaching unit we have a strange two-tier system. Within certain parameters, we allow PhD students who can afford it to pay copyeditors in so-called ‘meat space’ (the offline world), to ensure we aren’t just rewarding native speakers at doctoral level. They are also permitted to use generative AI, so long as they ‘make sure that [they] use it appropriately’. And yet the guidance says undergraduates aren’t supposed to use AI technologies at all for their assignments. Despite such apparently zero-tolerance policies, a year ago the Guardian reported that 92% of students in Britain use AI. Digital plagiarism is nearly impossible to prove unless an essay explicitly states, ‘Created by an AI language model’, fake sources crop up, or a student confesses. How do we manage hierarchical systems (BA versus PhD) when the tools are the same and many students are using them for assignments? We all exist in the same confusing new frontier and should want to find ways out – or at least to make it habitable and inclusive.

There is a certain ‘fogeyism’ among many in academia. This does not necessarily stem from older professors; it can come just as easily from the 20-year-old undergraduate who hates AI or the early career researcher who didn’t write their own thesis that way. Rejectionism manifests as a refusal even to try the tools, or a complete withdrawal after experimenting with them. University people turning away doesn’t help the situation. Like the arrival of the internet, or the scientific calculator before that, AI is here to stay. The genie is out of its bottle and Pandora has opened her box. I am old enough to remember when the World Wide Web was considered by many to mark the end of days, or at least the end of research, and internet sources were discouraged in bibliographies. Then as now, many younger people are excited by the new technologies and, broadly speaking, lead their use. As they become the researchers of the future, AI’s presence will soon be the new normal. The refuseniks have some good points but, in my view, they need to see what the fuss is about.

As I discussed in the previous post’s ‘know-it-all’ Eton boy analogy, AI can be a supportive point of departure for a writer, but unless serious debugging occurs it cannot do arts and humanities research on its own. Students shouldn’t doubt their abilities and use ChatGPT for everything unquestioningly, because they are still more capable than the machine. Perhaps we should regard AI similarly to Wikipedia. Like the wiki rabbit hole, AI has a strong aspect of play as well as pedagogy, and this ludic quality helps to explain both applications’ meteoric popularity and their potential as a route to learning. The online encyclopaedia is frowned upon as a source, but if academics are honest such sites, and the non-linear hypertextual journey they encourage, are often a great place to start an investigation. AI, too, has utility for inventorying typographical, grammatical, or factual errors in our own work. It can check citation lists for missing or extraneous references – as the bibliographic management tool EndNote has been doing since 1988, though in clunkier ways. Those who find argumentative transitions tricky could ask ChatGPT to write a non-repetitive, forward-pointing sentence to bridge two paragraphs that previously read like a handbrake turn. The bland sentence it spits out would then need hard editing or a complete rewrite, but its mere existence might help an author to sharpen the angle of their throughline. Finally, while I don’t think asking AI to sketch a skeleton structure for an essay is diabolical, I do wish candidates had the confidence to come up with their own scaffolding. Structuring advice can cause dumbing down, and the machine’s imagination still falls short of the human’s.

In all these examples, the informative, proofreading, bibliographic, bridging, or architectonic work needs to be triple-checked and taken with a handful of salt. We all know AI makes shit up. It also sycophantically mirrors what you want to hear. Its writing, while usually logical and pleasant enough, is sanitized and hyper-legible. This reads at best as unmemorable and at worst as insufferably formulaic and grating. While the use of specific words like ‘intricacies’, ‘delve’, and ‘capture’ might raise a marker’s suspicions, diction tells us little in isolation. Combined with an uncanny tone, generalizations, and overly broad topics, however, such diction starts to signal plagiarism. Clusters of particular words and feedback loops of similar ideas repeated ad infinitum are dead giveaways. Unmediated AI writing lacks the detail, wit, and passion of the finest scholarship. As Jamie Bartlett commands in the conclusion to his BBC Radio 4 podcast ‘Everything Is Fake and Nobody Cares’: ‘Stop mistaking coherence for truth!’ AI can produce prose that sounds fluid, credible, and well-argued while lacking the lived experience, expertise, and personality that make nonfiction absorbing.

At this juncture, it occurs to me that the only chatbot I have mentioned in these linked blog posts is the most famous one, OpenAI’s ChatGPT. Other robot overlords are available, including mainstream ones like Google’s Gemini and Anthropic’s Claude. An engineer told me that he and his colleagues experience intrusive existential anxieties about Claude. They see Anthropic’s AI as the most serious contender, while regarding ChatGPT as over-hyped marketing by this stage of the tech’s development. The Pakistani literature academic Mushtaq Bilal has made a career out of recommending LLMs like Perplexity Pro for research, referencing, and academic writing. I remain sceptical of much of this. I have read a few worst-case-scenario books like If Anyone Builds It, Everyone Dies. I’m aware that agentic forms of AI made by other AIs rather than humans are a historic first and are causing terror about where this will take us.

I still think we don’t need AI to destroy civilization. We’re doing a great job of that ourselves.

I’m not a neo-Luddite. We have two sons – a nineteen-year-old chemical engineering student and a budding physicist in his early twenties – and they have kept me on my toes with tech, dragging me reluctantly into the 2020s. I’ve done more than dabble with AI; I was an early adopter who first messed around with it in December 2022, for once being the person to introduce a new application to our family.

As something of a self-styled expert, then, let me provide a sidebar on two distinctive applications that are a little less well known than some of the others. The first is Google NotebookLM, to which you upload a PDF. Out of this it creates two-‘person’ podcasts, featuring the voices of a cheerful pair of presenters, one bot styled as male and the other female-coded. Although flawed, because it occasionally produces recursive loops of the information or can even be hallucinatory, NotebookLM is engaging and often proves helpful for unlocking your own ideas. Use it sparingly, as I’m sure it’s thirstier than a Kardashian-Jenner. (I’ll circle back to the urgent topic of AI’s water consumption shortly.) The second, ElevenReader from ElevenLabs, also encourages you to feed it PDFs, out of which come extraordinarily human-sounding audiobooks. This application currently has a decent cost-free version and is ideal for listening to and proofreading your own work. (Full disclosure: I turned my paired blog posts into a single voice rendering and listened to it a few times while out and about. This was brilliant for editing and for spotting holes in the argument.) Both applications are likely to undergo enshittification in the future, just as (in my opinion) the comparable app NaturalReader has already become unusable. For now, though, ElevenReader is so good that publications like the New York Times and the Atlantic convert their articles into audio narration under its auspices. (Let’s not debate concerns about robots taking the jobs of voice actors here.)

These aural platforms implicitly raise the important issue of differently abled learners. One of the reasons I resist unfettered AI Doomsdayism is that it often comes from a position of able-bodied privilege. True, some detractors are battling US academia’s toxic corporate environment and swingeing cuts, where AI is being imposed on them with no consultation in a way that makes knee-jerk opposition understandable. But the naysayers fail to consider the empowering aspects of AI for blind students, just to take a single disability as an example. For a partially sighted learner, podcast and audiobook generators can offer a lifeline. Theories of narrative prosthesis reinterpret dependence on external supports as universal rather than exceptional, from glasses to language itself. Normative embodiment is only ever temporary – or an illusion.

None of these problems is the fault of AI as such. If the technology seems to have independent power, remember that it is humans who manipulate it, in the shape of tech companies, platforms, advertisers, politicians, interest groups, and unaccountable, unhinged tech-bro billionaires. I don’t want to be misconstrued as levelling neoliberal blame at the little people, in the way organizations like the National Rifle Association of America (NRA) point the finger at individual gun owners. We shouldn’t vilify students for using AI, although hallucinations or – more accurately – ‘confabulations’ in written work are unacceptable, so we may need to chide them for using it ineptly.

True, in digital contexts anonymity plus virality makes the spread of conspiracies and disinformation among ordinary users much too well-oiled. But responsibility for this lies with profit-driven designers, executives, and policymakers who chase riveted eyeballs over safety. What is needed is lobbying and receptive leadership to bring in enforceability. Into the current digital Wild West would ride protective third-party audits, clear platform liability, stronger data and copyright protections, and funding for moderation and civil-society monitors. Everyone has been sounding warnings about the new AI applications, but where are the guardrails? The many of us who have serious concerns need to push for rules, laws, and public oversight. These would create carrots and sticks to secure the rights of society’s most vulnerable members.

Till now I haven’t explored a major concern that torments many doubters. As someone focused on South Asia who has recently written a book with climate change as a central strand, I can’t ignore the environmental cost. We talk about the ‘cloud’, a reassuringly natural image. But that cloud is made of massive data centres that allegedly consume billions of gallons of water to cool their server farms – though the amounts are contested and the information obscured. This is water that is increasingly scarce in the very regions I study. It matters most for the country where my heart is: Pakistan has become ‘one of the most water-stressed countries in the world’ this century. The nation’s predicament inspired two works in 2021 alone: Daanish Mustafa’s book Contested Waters: Subnational Scale Water and Conflict in Pakistan and Into Dust, a film about Karachi’s water crisis. The techno-idealist dream of an ‘inclusive tool’ risks being turned into a dark future by the sheer environmental burden AI exerts. It is another form of extraction: even if AI helps a Pakistani student get into a PhD programme, first the data from any thesis they eventually write will be grabbed, and then the water the nation relies on for survival.

What’s more, I wrote the first draft of this blog post in the week that gender and AI were thrust into the spotlight with all the discussion of image abuse and the debased use of Grok to digitally strip women’s bodies without their consent. There is a dystopian side to all this that is hard to unsee. The tech bros shun accountability and shrug off transparency. Once again in history, the future is being built on women’s humiliation while its architects call it innovation.

AI isn’t just about tech. At its core, and unavoidably, it’s a social phenomenon. This is most obvious when it comes to affective AI like care robots, grief bots, and AI girlfriends (issues I have written about in an article elsewhere). It also applies to more straightforwardly generative AI. Whether we approach it through psychology, politics, language, or literature, it is the humanities and social sciences that help us interpret what AI is doing to us, and to carve out new spaces beyond it. I hope these blog posts demonstrate the humanities’ catalytic role within the interdisciplinary study of AI. Far from serving as a decorative or instrumental plug-in, this field treats cultural and literary expertise as indispensable – placing it at the centre of a more expansive, integrated response to technological change.

I want to emphasize literature’s resistance to easy slogans and tidy solutions when it comes to AI. In the arts and the tellingly named humanities, we need human flaws. The beauty of a voice – whether a singer’s, like Bob Dylan’s or Atif Aslam’s, or a literary voice like Hanif Kureishi’s or Chimamanda Ngozi Adichie’s – comes from its imperfections. Just look at how imperfect my previous sentence was! If we move toward a realm of pervasive AI-generated research, we lose that grit, scratchiness, howl, and honesty.

Art creates a place for emotions and ideas to live side by side. It doesn’t fix or erase but instead listens. It makes space for some messy facts of being alive like whimsy, eccentricity, grief, and treats them as stories to be heard rather than problems to be solved. In art, beauty and elegance don’t inhere in the finished object alone but arise from the act of making itself and from the creator with all their imperfections. As W. B. Yeats famously writes in ‘Among School Children’, ‘How can we know the dancer from the dance?’ This poetic line dissolves the membrane separating maker and made, suggesting that creation and creator are indistinguishable and embodied in a way even the most advanced AI could not emulate.

Literature, especially my first love, the novel, takes private pain or joy and spins it into new structures people can recognize. In the most captivating novels, readers feel they are there; they become the characters. Fiction traces the strangely familiar contours of a feeling, so the reader realizes, ‘It’s not just me’. In that recognition, there is relief. This doesn’t mean easy answers, but the scabrous comfort of company. To read fiction that has paid close attention to a thought, a mood, or what it is like to inhabit a body is to feel seen and understood. As the actor Andrew Garfield remarks on the power of storytelling and an essay that moved him to tears: ‘It’s mysterious because art can get us to places we can’t get to any other way’.

In the sciences at least, AI is facilitating fresh developments and new heights in research. In the arts and humanities, it unblocks ideas. Within the creative industries, it can carry water for the writer and urge them on. It has produced some convincing mimicry and made some decent stabs at original work. The story-writing machine has well and truly arrived. As this arrivant – some would say arriviste – continues bedding in, I’m advocating for a mixed-medium landscape. There, traditional skills and AI models would co-exist and take writing and reading to new heights, although the co-existence would be extremely wary at times. It would also be within clear limits. Because, above all, I strongly believe that scholars and writers still need to be in the driver’s seat, with AI at best as a satnav.