by Claire Chambers
Not long ago I wrote for 3 Quarks Daily about R. K. Narayan’s The Vendor of Sweets. In Narayan’s 1967 novel, a young man comes home to India from studies in the United States with an apparently preposterous ambition. Now back in South Asia, he wants to manufacture ‘story-writing machines’ like those that, in this text, already exist in the USA. His father Jagan is horrified; to him, authorship is sacred and human. Furthermore, to this ageing Gandhian, the writing machines sound like just another instance of American ‘Coca-colonization’. Jagan contrasts the machines’ fluent but bloodless writing with what he genders (and ages) as the ‘granny’ oral storyteller of Indian village life.
Yet here we are in 2026, and this machine isn’t just a twentieth-century satire – it is fast becoming the very air that we breathe in our research and our imaginative endeavours. Being disembodied, it may not be able to replace oral storytellers easily, but many other kinds of writers, artists, and creatives are worried about robots taking over their jobs. Generative artificial intelligence raises questions about tradition, modernity, global north and global south, colonialism and gender similar to those Narayan was pondering nearly sixty years ago. I will consider all of these questions in this blog post and the sequel(s) I envision, though I won’t head in a linear trajectory but will instead zigzag. From the outset it needs to be said that I am a conflicted doubter in all this, and I haven’t been able to resolve that conflict. What single individual could?
Anyway, I’ve been following the heated debate on generative AI, and I find myself firmly in two minds – a for and against I’ll try to sketch in these linked posts. I am neither sold on the hype nor rejecting the technology out of hand. But I’m fascinated and alarmed by what AI is doing to us. By ‘us’, I mean people who are creative writers, arts and humanities professionals, or simply trying to survive on a combative, burning planet. In other words, what is the impact of the story-writing machines, and how can human beings adapt or resist?
In his essay about conscious AI, Anil Seth declares: ‘The language that we […] use matters’. On the one hand, a colleague recently told me confidently that ‘all LLM AI is theft’. He isn’t wrong, though I would dispute that word ‘all’. These models scrape – a visceral, highly evocative word – our books, our archives, and our labour. Others soften this with an agricultural, euphemistic image: the ‘harvesting’ of intellectual property (IP). Several major class-action lawsuits are underway in which large groups of authors are contesting the use of their work without consent.
One of the most egregious examples of this creative and intellectual colonialism – one that makes me understand my colleague’s opposition – is the scraping of PhD theses (and other outputs) held in institutional repositories. In many countries, doctoral work must go online unless a digital embargo is granted, and embargoes rarely last more than a few years. On paper, this looks like a positive sharing of findings as part of the open research movement. Perhaps for that reason, universities have been slow to protect against the predatory behaviour of large language models (LLMs). Requirements that theses be made openly accessible after minimal or non-existent embargoes risk turning years of precarious, unremunerated early-career labour into freely mined training data. Licensing frameworks gesture towards protection, but in practice such limits are easily bypassed. This leaves the work of recent graduates even more exposed within a technological and pedagogic economy already heavily weighted against them.
But then I read my friend the Pakistani novelist Bina Shah’s ‘AI and Educational Elitism’, and I want to be as even-handed as she is in that essay. For my students from South Asia, and/or those for whom English is an additional language, AI often functions as a leveller. It can act as a supercharged dictionary, thesaurus, or editorial assistant – aids that have long existed in paper or human form, available to those with the money to pay for them. So where should we as educators draw the line on usage? This millennium, with the rise of applications such as Google Translate and Grammarly, or functions like predictive text in SMS and emails, the border between human and machine has become even fuzzier. A shouting match about tradition and progress is raging between the zero-tolerance naysayers and the techno-utopians, and it often drowns out less strident voices that can simultaneously speak up for and criticize both sides. I say: a plague on, and then a cure to, both your houses! I think of one of my favourite quotes from Salman Rushdie: ‘What is the opposite of faith? Not disbelief. Too final, certain, closed. Itself a kind of belief. Doubt’. Doubt is the right territory to occupy on this new frontline.
Then there is the hope, which Bina quotes Stephen Lyon as articulating, that AI is giving candidates a confidence that the traditional academy has often crushed or gatekept. Such a levelling of the pedagogical playing field is promising and important. In an AI-driven future, people with excellent ideas wouldn’t fall foul of the grammar police or citational pedants (in both of which divisions I’m a reserve member). Yet teachers still have to develop differentiation strategies for those language-learning students who stand to benefit most from AI. Young people need to know there should always be a human in charge.
My analogy for the chatbot is that you’ve been assigned a cocksure young male employee (definitely a man, I don’t make the rules). This guy is in his twenties, and he’s had the benefit of a world-class education: schooling at Eton, then a degree at Oxford, before finally going off to Cambridge for a master’s – or insert your country’s equivalents here. He speaks all the languages and has equal knowledge of and skills in the arts, humanities, social sciences, and STEM. Frankly, you hate him. He’s young Boris Johnson (but more talented), so why would you not? He has zero imposter syndrome and is quite lazy, so although he’s smart and impeccably trained, he takes awful risks. Additionally, he’ll never admit he doesn’t know something, he’s very good at making stuff up, and he won’t put in the hours. He tells lies to get himself out of trouble, and he has a monumental but unearned arrogance, even though underneath it all his command of any facts that get a little opaque is shaky. So, you need to line-manage the hell out of him. He’s a useful person to have on the team, to bounce ideas off, request advice from, and give tasks to. The more specific and checking-oriented the jobs you give him, and the less proactive and discursive you ask him to be, the more swimmingly everything will go. For example, ask the intern to point out your errors in the form of a list, and don’t be afraid to overrule anything you disagree with. Above all, for goodness’ sake – I use that phrase advisedly – don’t put him in charge. Check everything, trust nothing, and edit, edit, edit. Just remember Johnson’s cack-handed midwifing of Brexit: general disaster will ensue if the bot is the boss. (BoJo LOVES ChatGPT, by the way, mostly because it flatters him and allows him to be lazier still.) Finally, remember this is just a metaphor; a machine is not human, and it doesn’t possess even the low level of sentience and consciousness I have projected onto it here.
AI offers real opportunities. It can help surmount language barriers and give writers working in a second or additional language an authorial self-belief the academy has too often denied them. It also speeds up menial tasks. For instance, AI can take the pain out of form-filling, freeing people up for more inventive work than bureaucracy. (That said, I want to festoon this tip with the caveats outlined in the previous paragraph. Plus, it isn’t suited to all administrative work, and make sure you anonymize everything and everyone.) More existentially, LLMs can help with writer’s block, making users feel less alone when the world seems as if it’s narrowing and becoming ever more viciously self-serving.
We mustn’t be oblivious to the geopolitical realities shaping who benefits from AI. What often gets called the global south (an imperfect label) struggles with uneven access, limited participation, and the concentration elsewhere of the political power and profit that derive from hosting data centres and servers. With shrinking pools of international funding for development and education, there is the danger of a two-tier system emerging. To simplify a complicated picture, much of the world’s data labelling, linguistic labour, content moderation, and low-paid digital piecework comes from less-resourced regions, while the infrastructure, capital, and platform ownership are concentrated in a small number of wealthy states and corporations. (There are exceptions, of course. India, for example, is poised both to capitalize on and be exploited by the AI revolution, just as it was a generation ago during the dot-com and call-centre era.) These asymmetries reinforce existing northern biases in peer review and institutional access.
In the generation between the dot-com revolution at the turn of the century and the public release of generative AI from November 2022 onwards came an explosion of popularity in, and addiction to, the social networks that became known as social media. That historical moment – which is ongoing – contains stark lessons for what is happening today. For instance, last year the New Zealand public policy expert and lawyer Sarah Wynn-Williams published her memoir Careless People. After the earthquakes in her native Christchurch, Wynn-Williams thought the new communication technologies were proving uplifting, empowering, and a new kind of public square, so in 2011 she accepted a position at Facebook. Yet, as her subtitle ‘A Cautionary Tale of Power, Greed, and Lost Idealism’ indicates, she was horrified by the work culture, toxic content, and censorship she saw in Mark Zuckerberg’s company, now called Meta.
Similarly, some years ago I wrote an article about social media and the Arab uprisings in Omar Robert Hamilton’s novel The City Always Wins, discussing how Facebook activism helped catalyse the 2011 protests at Cairo’s Tahrir Square. Though lately a byword for Boomer uncoolness, the platform’s symbolic power at that time was such that, in the revolutionary fervour of February 2011, one Egyptian family reportedly named their new daughter after it. Swiftly, though, those same platforms were turned into instruments of repression. Among other nefarious acts, security forces in Egypt used open profiles and pages on Facebook and Twitter to identify and arrest organizers.
And look at what social media has become since 2011. On the credit side is Nepal’s Gen Z uprising, in which the elite’s tone-deaf posting and the government’s banning of social media drove students onto the streets. The ban was widely read by young people as an attempt to quell dissent, and what began as anger over digital censorship quickly expanded into broader demonstrations against political control, corruption, and shrinking democratic space, in scenes that recalled the Egyptian Revolution. Similarly, in Bangladesh, youth-led unrest expanded beyond its initial trigger and was amplified by videos circulating online, though it centred more on inequality and state repression than on digital repression alone.
On the debit side is how rampantly consumerist and extractive these platforms have become. Built as they are on the monetization of attention and the harvesting of personal data, they now promote cruelty as readily as connection. The social media of the 2020s fuels harassment, spam, scams, clickbait, (self-)hatred, and despair. Worse, this coalesces into algorithmically led echo chambers that inch ever closer to realizing the misogynistic, racist, and authoritarian fantasies nurtured on X, formerly Twitter. Consequently, I have by now almost entirely noped out of the distracting and damaging environs of X and the various Meta platforms, and I may well do the same with AI in the future. That is my privilege, as a member of Generation X who is a digital immigrant in a position of power at a global north university. Others don’t necessarily have this choice.
Generative AI’s chatty timbre and semblance of loyal support, or even friendship, shouldn’t deceive us. If LLMs are to be a force for inclusion rather than a tech-savvy mode of exclusion, we must redesign the whole ecosystem of knowledge production – policies, funding, review practices, and infrastructure – not rearrange deckchairs on the Titanic. We need clearer ownership of words, images, and ideas: limits on how they are taken and reused, and credit and remuneration for their makers. Money matters. Funding mustn’t simply deluge the same powerful places; it ought to flow in support of scholars, students, and institutions who have long worked with fewer resources – trickle-down economics certainly won’t work. As for labour, the invisible work of tagging, translating, correcting, and moderating should be paid properly and attributed openly, not reduced to a ghost in the machine. Publishing, both academic and trade, must also change. Reviewers and editors should come from a greater number of places, and welcoming a kaleidoscope of languages and a range of perspectives from early-career academics and novelists would make for a more textured intellectual environment. Only then might AI become a shared, equal resource, rather than the next chapter in the long story of extractive colonialism.
Another article, by Ronald Purser, bears the doomsday headline ‘AI is Destroying the University and Learning Itself’. It focuses predominantly on the neoliberal university and on the ‘twisted ouroboros’ of marking or grading, an area where we are indeed living in a blurry reality. A little while ago, I felt as if I were losing my mind while annotating a postgraduate student’s writing submitted for a supervision meeting. I saw what I interpreted as a few glaring AI ‘tells’ – for instance, the overuse of ‘dynamics’, ‘delve’, and ‘intricacies’ – and I briefly suspected the work had been machine-generated. But in conversation it was clear she had no clue how to use AI; the work was genuinely hers. Even Seth’s prize-winning essay includes the line: ‘The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is’. That sentence would set my spidey-sense tingling if I encountered it in student work, but I’ve little reason to doubt Seth wrote it. We have always had verbal tics; humans, not only chatbots, favour certain turns of phrase. More troubling is that we are beginning to absorb the machine’s cadence even when we are not using it.
Then there’s the embarrassment of ‘hallucinations’. First, let’s consider the term itself since, as John Lanchester reminds us in a recent London Review of Books piece on what he calls machine learning, ‘only sentient beings can hallucinate. AIs aren’t sentient, and can’t hallucinate, any more than a fridge or a toaster can’. Seth, too, abhors this terminology: ‘When we say that AI systems “hallucinate,” we implicitly confer on them a capacity for experience’. Terminology aside, a second postgraduate handed in what appeared to be a brilliant first draft. Yet it refracted familiar works into distortion, as though seen in a funhouse mirror. When I started chasing up the references, I found many of them were wish-fulfilment statements, made up by the AI to please the candidate rather than written by the real academics to whom the quotations were attributed. In truth, it’s we markers who feel like we’re hallucinating! As Purser’s subheading has it, ‘Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education’.
In another instance from my teaching, I marked two first-year undergraduate commentaries on Robert Louis Stevenson’s The Ebb-Tide. Both were crisply, eerily fluent. But both contained a massive howler: they repeatedly claimed the passage was from Treasure Island. These first years hadn’t read either text. They had just prompted a machine. True to form, the indolent intern had hallucinated or, more truthfully, produced slop – another disgustingly apt word.
At times, AI feels like a benign performance-enhancing drug, checking and refining work; at others, it feels like we’re in the Matrix, being duped by empty, supposedly perfect prose that lacks the human error or idiosyncrasy that gives writing its spark of life. This not only produces the uncanny-valley tone of much AI writing but is also creating a crisis of trust. In especially fraught university contexts, it is leading to a state of red- and blue-pilled suspicion between faculty and students. Eroding the unwritten academic contract, AI often damages the rapport between staff and learners, dividing college communities into adversarial tribes of us and them.
I couldn’t pursue or prove AI use over the Stevenson faux pas by my two errant class members. The powers-that-be recommended I mark their essays as though they were genuine. Because artificial intelligence is, well, intelligent, I awarded them both the same low, third-class pass: despite misidentifying the set text, ChatGPT had done a competent close reading of the excerpt it was given. In their feedback, each tutee was advised to see me to discuss their mark. To nobody’s surprise, neither ‘author’ came to discuss ‘their’ work. Adoption of AI is accelerating faster than policymakers can respond, leaving universities and publishers without detection tools that can keep pace.
I’m aware that, as is often the case in academia, so far I have been identifying patterns and pointing out problems more than suggesting solutions. In the next post, I’ll consider the issue of originality further, within a framework drawn from cultural theory in my own discipline of literary studies. I will also identify some positive steps universities, regulators, and scholars could take. Without these, as I’ll show, the story-writing machines may hollow out the very human traits the arts and humanities nourish. What are these traits? You’ll have to wait until next time!
