Are You Hesitating Over AI? If So, You Are Not Alone

by David Beer

There was a prevailing idea, George Orwell wrote in a 1946 essay on the Common Toad, ‘that this is the age of machines and that to dislike the machine, or even to want to limit its domination, is backward-looking, reactionary and slightly ridiculous.’ It was only a few years before Nineteen Eighty-Four, his classic novel of the surveillance society.

The aside in Orwell’s short essay captured a sense of pressure to keep up with the technological changes of the time, a pressure not to fall behind and not to look outdated. We are feeling such pressures magnified again by the vast coverage and seemingly dramatic expansion of artificial intelligence. To not use AI, to dislike AI, to seek to limit AI, might, in Orwell’s terms, be seen as slightly ridiculous.

There is a pressure to turn to AI in order to be ever smarter, more predictive, anticipatory, ahead of the game, knowing, hyper-efficient and so on. There is, as Orit Halpern and Robert Mitchell have put it, a ‘smartness mandate’. We are expected to be integrating AI into how we think, work and do things. This is partly because appearing algorithm- and AI-savvy is equated with seeming switched-on.

A more AI-focused future can seem an inevitability: the technology is just too slick, and the possibilities too great, for it to be held back. There is an imagined future already set out for us, an imagined ‘silicon future’, as John Cheney-Lippold has recently argued, in which the future seems to be already planned out and so somehow precedes our present. At the same time, this AI future is not as predictable as it might seem once we factor in the unrest over training data access, the convoluted financial underpinnings of AI itself, and the profoundly uncertain economic and geopolitical circumstances.

It is tempting to invoke the idea of the human as a category in response to the rise of AI. Yet it is a long time since we could pretend to ourselves that humans are discrete entities, or that we somehow exist outside of our technological environments. We are too entangled with data infrastructures to be able to separate ourselves from them. We are, Katherine Hayles tells us, part of a ‘cognitive assemblage’ in which our thinking is distributed across systems and networks.

At the same time, wherever algorithms and AI are integrated, tensions nearly always arise, along with what Minna Ruckenstein has called ‘frictions’. The path to AI is not as smooth and slick as it might appear. In a recent project exploring the use of algorithms in risk assessment and access decisions in housing, funded by the Nuffield Foundation, on which I was part of the research team, one thing we found to be prevalent was a form of automation hesitancy.

Automation hesitancy occurs where there are confidence deficits in the algorithmic systems or AI being implemented. People are actually cautious and tentative in the face of AI. The presence of AI and other algorithmic systems can create profound uncertainties, with people unsure how to react, how to respond and how to use them. Questions arise about where the limits should be and what counts as appropriate AI use. Algorithmic decision-making and AI create questions for people whenever they encounter them, not least because they can challenge established practices, expertise, status and knowledge. The algorithms are in tension with established limits of knowledge and human input. The result is that AI is adopted unevenly, at varied pace and range. There are subtleties to how people integrate algorithms.

Despite the hesitations, we found a strong sense that there was an expectation, or even an inevitability, of more automation to come. This future was already being imagined and was shaping behaviours in the present, with people trying to work out how to fit into those imagined future scenes. That imagined future was itself in tension with the actualities of the present.

I’m sure that this automation hesitancy is more widespread than we might expect. In our project we were focusing on one sector, but the feeling is likely to appear in many other places. You may have found yourself hesitating. The reasons for this may not be simple, and may not even be something you can articulate. What is clear, though, is that if you are hesitant about AI, you are far from alone.

Orwell might have encouraged you to reflect on the pressures leading to the use of AI. Let me conclude by turning to another literary figure. Last year I was doing some research for a piece telling the story of the novelist J.G. Ballard commissioning computer-generated poetry for a literary magazine in the 1970s. I’d started that piece after stumbling on a brief mention of the generated texts in his autobiography. I dug out the poems from the archive to see what they were like. The task seemed a useful antidote to the heated coverage of present-day generative systems. Whilst working on that piece, I read a short story by Ballard from 1961 entitled ‘Studio 5, The Stars’. In Ballard’s tale, poetry had come to be written by machines. Poets simply input information using a set of control dials and the ‘VT’ machines would jump into action, producing endless poems. It was a luxurious world. As one colleague of the story’s imagined literary editor pointed out:

‘Fifty years ago a few people wrote poetry, but no one read it. Now no one writes it either. The VT set merely simplifies the whole process.’

Soaked in a cold irony, Ballard’s story seems to suggest, over 60 years ago, that we should be careful what we automate and cautious about the consequences automation might have for how we think and what we are capable of doing. When all of the VT machines are smashed in the latter part of the story, the poets soon realise that they can no longer actually create written works. It is not that we shouldn’t use AI, of course, but that we might also sometimes hesitate and question the desires and ideals underpinning our will to automate.
