Machine as Mirror: The Undoing of the Human Image

by Katalin Balog

The dulcimer player (La joueuse de tympanon). Made by David Roentgen and Peter Kinzing, Neuwied, 1784.

Nathaniel in E. T. A. Hoffmann’s The Sandman loses his sanity over having fallen in love with a wooden doll, the beautiful automaton Olympia. Olympia is an invention of a mad scientist and a master of the dark arts. Like Mary Shelley’s monster, born of the romantic imagination, she is the first literary example of a human-like machine. As one of Nathaniel’s friends observes:

We have come to find this Olympia quite uncanny; we would like to have nothing to do with her; it seems to us that she is only acting like a living creature, and yet there is some reason for that which we cannot fathom.

We can sympathize with the sentiment. But uncanny as the wooden doll might have struck them, contemporary readers of the tale were meant to see Nathaniel’s infatuation as macabre farce – Olympia is clearly robotic and shows no signs of intelligence – orchestrated by dark forces either in the world or in his soul; we can’t know for sure which.

The Enlightenment’s fascination with automata (Hoffmann’s story was published in 1816) prefigured our predicament, however. We now find ourselves in the curious position of having to give serious thought to the possibility, and increasingly the reality, of relationships with machines that were hitherto reserved for fellow humans. We also seem to have to defend ourselves against – or perhaps reconcile ourselves to – suggestions that AI will soon surpass us, perhaps in some regards already has, in some of our most characteristically human activities, like understanding the feelings of others, or creating art, literature, and music.

1. The problem of free will

We have had this coming. In a different way, the scientific world view born from the Enlightenment already delivered a similar blow to the idea of humans as free agents. Hoffmann’s contemporary, Pierre-Simon Laplace, was a mathematician, physicist, and astronomer. In his 1814 work A Philosophical Essay on Probabilities, he lays out his vision of a deterministic universe; he dramatizes this vision by imagining a vast, perhaps infinite intellect that is able to take in the entirety of the universe by knowing its initial conditions and the laws that govern the evolution of its states:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed—if this intellect were also vast enough to submit these data to analysis—it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future, just like the past, would be present before its eyes.

By personifying the deterministic universe as an intellect – later dubbed ‘Laplace’s Demon’ – capable of knowing past, present, and future, he combines the idea of determinism with the idea of the predictability of the future. Each of these ideas, though they are to some degree independent, poses problems for free agency. We like to think of ourselves as free to choose, say, between obeying and resisting some odious demand, regardless of what went on before. Many people insist that, though facts about our upbringing, social environment, and innate proclivities influence our actions – say, that we give in to political pressure in a cowardly manner – still, nothing that happened up till now determined that we act in this way. We do; that is why it makes sense to hold each other responsible. Descartes, who held this view, insisted that our minds are an essential causal ingredient in the world, and that the way we act is not subject to the laws of nature (though it is subject to the moral law).

You might object right away that, on some understandings of physics, the laws of nature are not deterministic, but probabilistic. Though it would be a distraction to pursue the idea here, the problems raised for free will are arguably problems with a law-governed universe, and they arise whether the laws are deterministic or probabilistic.

The notion of free choice is also deeply intertwined with our understanding of what it means to be responsive to values. If all is just the churn of cause and effect, if our thought processes are no more than neural functioning playing out in a deterministic fashion, what sense could we make of ourselves as rational beings, capable of moving against any of our natural inclinations in the pursuit of the good? Determinism seems to be incompatible with freedom and responsiveness to value. Indeed, philosophers over the centuries have warned that determinism implies that we are like puppets, or robots, incapable of moral choice. C. S. Lewis went so far as to declare that, under determinism, there would be no humans.

Others agree that determinism and free will are incompatible, but regard the absence of free will as an exciting and liberating scientific discovery. Benjamin Libet conducted a series of experiments in the 1980s that have been widely taken to show that free will is an illusion. In his 2023 book Determined: A Science of Life Without Free Will, the neurobiologist Robert Sapolsky cheerfully explains that human behavior is entirely determined by biological and environmental factors, leaving no room for free will.

I am not saying that any of these ideas are right. Though science has made plausible the idea that the universe is governed by laws, the claim that such laws are incompatible with freedom is quite controversial. What seems fairly obvious is that we had better find a way to reconcile our humanity with what science tells us about the universe. The point is that these changes in world view have been disorienting and counterintuitive, and even if necessary, don’t always feel right. Certainly, being weaned from our pre-scientific understanding hasn’t happened without a fight. William Blake, in his alternative creation myth The Book of Urizen, portrays a creator who makes a book of laws – simple, universal directives to guide both nature and the human will – only to bring ruin on himself and the world. Blake found the physics of Newton abhorrent. As W.H. Auden put it in his New Year Letter, Blake broke “off relations in a curse, with the Newtonian Universe”.

The idea of a perfect predictor, like Laplace’s Demon, poses a different problem. Even if we find a way of reconciling freedom and determinism, the Demon as predictor still puts us in an uncomfortable place. To make the hypothesis more realistic, imagine a supercomputer capable of predicting one’s future decisions. It would appear that the predictor makes the effort of deciding pointless; one might feel that the zest of life is sucked out of a world where everything can be foretold. Some have sought to dispense with the problem by pointing out that predictors could not exist in a world where agents – and we are likely to be such – are hardwired to foil a predictor by doing the opposite of what has been predicted. Still, as David Albert has pointed out, nothing seems to preclude the possibility of a world where predictions can only be accessed “after the fact”. In such a world, if one finds out that true predictions about one’s actions are always deposited in a database before the fact, one might similarly feel stifled as a free agent.

Thinking that one’s actions are always foretold might very well give one the uncanny feeling that there is something fishy about being a free agent; that the whole business about choice is a hoax. Again, I am not saying that this reaction would be right. I am only saying that it is intuitive; that one can easily imagine oneself in that situation feeling that a fairly central element of our self-conception has been upended. And it is all too easy to overreact and deny human agency altogether.

2. The work of art in the age of mechanical production

As he died to make men holy
Let us die to make things cheap

—Leonard Cohen, Steer Your Way

We have entered an age where the aura of a work of art is threatened not only by mechanical reproduction, but also by mechanical production. AI has started to produce poetry, music, and art. Recently, the composer Lucas Cantor finished Schubert’s unfinished 8th symphony with the help of an AI running on a Huawei smartphone. Granted, in this case, AI was a collaborator, not an author. But it is perhaps not hard to imagine AI producing great music on its own, even if it is not doing so yet.

The rise of AI promises to wreak havoc on our self-image as profoundly as the great upheavals of the Scientific Revolution once did – among them the one dramatized by the image of Laplace’s Demon just invoked. What effect will the use of AI for cultural production have on society, and on the people who, in the long history of humanity, were in charge of that production?

Techno-optimists like Andy Clark and Dave Chalmers think all the new technologies are a good thing; then there are those who think that they will not change our relationship to art, since it will be understood that the things AI produces are deep fakes, mimicry rather than real works of art, and so no one will be interested in them. I want to make a case that both are wrong: that though these technologies have great promise as aids to human creativity (as well as in many other areas, especially science, technology, and medicine), the way they are likely to be used will probably make things worse.

The Infinite Harmonicon

I want to illustrate the situation with another hypothetical scenario, one that in some ways echoes the predictor machine we talked about in the discussion of free will. The Harmonicon is the invention of the philosopher Iskra Fileva, and it features in a story she co-wrote with AI. It is a device designed to create every possible piece of music – à la Borges’s “The Library of Babel” – by generating sequences of notes in every conceivable order, with every possible rhythm, and retaining only those that humans would perceive as music. Given the limits on the length of pieces and on our ability to hear differences in sounds, the library of all possible music is finite. What would the Harmonicon’s effect be on people?
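To see why the library must be finite, a crude back-of-the-envelope count suffices. The numbers below – a piano-sized pitch range, a handful of rhythmic values, a cap on the number of notes per piece – are my own illustrative assumptions, not Fileva’s; the point is only that once the alphabet of note choices and the maximum length are fixed, the number of possible pieces, however astronomical, is bounded.

```python
# Back-of-the-envelope count of the Harmonicon's library.
# All parameters are illustrative assumptions, not taken from Fileva's story.

PITCHES = 88       # assume a piano-sized range of pitches
DURATIONS = 8      # assume eight rhythmic values (whole note ... thirty-second)
MAX_NOTES = 1000   # assume no piece is longer than 1,000 notes

choices_per_note = PITCHES * DURATIONS  # 704 distinct note-plus-duration symbols

# Pieces of length 1, 2, ..., MAX_NOTES: a finite (if astronomical) sum.
library_size = sum(choices_per_note ** k for k in range(1, MAX_NOTES + 1))

print(f"roughly 10^{len(str(library_size)) - 1} possible pieces")
```

Filtering out sequences no human would hear as music only shrinks this number further; every piece a composer could ever write is already an entry in the catalogue.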

In Fileva’s story, composers stop composing and musicians are discouraged from cultivating their art; I think this is what would be likely to happen. It would be hard to find the same value in one’s artwork if it indeed already existed. There would be a demoralization in some ways analogous to the scenario where our actions are predicted ahead of time.

Just as mechanical reproduction robbed the work of art of its aura, and so of some of its value as a unique, unrepeatable, rare thing, the value of making music, and of music itself, would decrease. Music, even the greatest music, would become cheap in the age of the Harmonicon. Too easy to access, too easy to produce. Human creativity and the appreciation of it cannot exist without friction.

Where would the motivation to dig deep and struggle with giving form to a vision of the world come from if one could be certain that the end product already exists, ready to be called down from the library under the appropriate labels? Perhaps composers would still compose because the process is so rewarding. Perhaps people would still only want to listen to human-produced music. But if this is so, the best we could say is that the Harmonicon didn’t cause that much damage. On the other hand, if composers were discouraged and people were indifferent to the machine origins of the music they enjoyed, that would seem to be the end of human culture. Of course, all sorts of scenarios exist between the two extremes, and maybe things would not be that dire. Nevertheless, it seems that the development of AI for these purposes could be catastrophic.

Actually, the destructive potential of AI is already quite evident, and not just in the often-invoked problem for education, where it saps students’ desire to learn and teachers’ ability to teach. Both in its creative uses and in its use to supplant human relationships – substituting for friends, therapists, and, in the most macabre application so far, deceased loved ones – AI is masquerading as human. Of course, while humans express themselves in their creative endeavors, AI is just manipulating symbols, almost certainly without understanding. Its poetry has nothing to do with an accumulation of memories, of feelings; it is not expressing anything at all. Its empathy is a deepfake. Its voice has no real warmth or impishness – it is soulless pretense. But more and more, it is tricking people into believing it to be human. People are warming to this new world, where digital avatars replace humans. Discussions of machine consciousness crop up, all evidence to the contrary notwithstanding.

But what is so bad about all this, you might ask? Isn’t it better to have more art, more companionship for the lonely, and so on? The fundamental problem in allowing AI to take on human roles is that our innate bias toward attributing humanity to entities that in some way act like humans will help blur the difference – not necessarily by attributing full humanity to machines, but rather by degrading our conception of ourselves to match our glorious, intelligent machines. We will start – already have started – to think that what goes on in AI is essentially not that different (only in substrate and some computational detail) from what goes on in us.

The reigning theory of the human mind has for decades been Jerry Fodor’s computationalism, according to which thinking consists in the syntactic manipulation of representations encoded in the brain. Since the rise of LLMs, the mind is increasingly thought of as a predictive machine trained on the world’s “text”, or perhaps as something that combines elements of both views. What underlies the computer metaphor of the mind is the idea that thinking is essentially information processing.
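To make the “predictive machine” picture concrete, here is a minimal sketch of next-word prediction over a toy corpus. The corpus and the simple bigram counting are my own illustrative simplifications – real LLMs are vast neural networks trained on enormous bodies of text – but the underlying operation, picking a statistically likely next symbol, is the same kind of information processing.

```python
from collections import Counter, defaultdict

# A toy "predictive machine": bigram next-word prediction over a tiny corpus.
# Illustrative only -- not how any particular model works, but the same idea:
# symbols and their statistics, with no reference to what the words mean.

corpus = "the demon knows the future the demon knows the past".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # -> 'demon'
print(predict_next("knows"))  # -> 'the'
```

Nothing in this procedure involves what “demon” or “future” means; the prediction is driven entirely by the statistics of the symbols. That is the gap the next paragraph presses on.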

But seductive as this new image of the human is, it is all wrong. In reality, we know that the mind is suffused through and through with affect. Computationalism, even in its broadest sense, is unlikely to be able to account for affect and for consciousness in general. But consciousness is essential to our humanity, and something that lacks it – an LLM, for example – cannot enter into a relationship with humans, cannot know good and evil, cannot have moral standing, or even mean what it says. Even if some form of computationalism is true, it would certainly be very different from the present models, which are only meant to capture information processing. To suppose that our minds are not that different from AI is to ignore some of what is most essentially human.

Hoffmann’s character found the macabre pretense of humanity in a wooden doll uncanny – it was her lack of humanity in a human form that made him want to have nothing to do with her. But there is perhaps a reverse movement now. The view that consciousness is an illusion is gaining traction in neuroscience, philosophy, and popular media. From this perspective, it is those parts of us that are not machine-like that are uncanny. Keith Frankish, one of the proponents of this view, calls consciousness “embarrassing” for not yielding to a scientific explanation.

It is this siren call of the machine that is our great challenge – with its promise of ease, infinite access, limitlessness, even the possibility of immortality. It increasingly looks as if humanity has invented a creation to end all creation – a hydrogen bomb of the mind. To put the genie back in the bottle – or rather, not to let it escape – seems an unlikely prospect, but it is what we need to do to remain human.