Putting the “Art” into Artificial Intelligence, or Banksy on Steroids

by Malcolm Murray

As I discussed AI with my cabdriver on a recent trip to Vienna, I was reminded of the fact that the German word for AI is Künstliche Intelligenz. This shares an etymological root with the word Kunst, German for Art. This made me realize that of course we have that in English too, the “Art” is right there in Artificial Intelligence, we have just overlooked it.

The words have diverged over time from their common Proto-Germanic root, but I think their commonality is an interesting perspective to consider. Given the breadth of AI’s likely impact on society, we need to analyze it through many different lenses and perspectives. Applying the “art lens” and being more cognizant of the “Art” in Artificial Intelligence could help make sense of some of the dissonant patterns AI presents us with.

First, we have the strange and somewhat discombobulating fact that AI is, as Ethan Mollick famously pointed out, neither software nor people. It is something else, a third category, with elements of both. We are used to people, and Kahneman and others have taught us the ways in which they are fallible and make mistakes. They are flexible, but have many biases. We are used to software, and we know its strengths and weaknesses. It delivers predictable performance, but within a limited range. What we have not yet encountered is fallible software. As Mollick says, this is a new category which will take time to get used to. This partly explains why enterprise adoption is still slow: that third category does not have a natural home in an organization. Processes will have to change drastically to allow this third category to come into its own. Seeing AI as more of an art than a science helps put this in the right perspective.

Second, we have the difficulties we are facing in designing risk management for AI. When we look to apply techniques from other high-risk industries, we increasingly recognize that they have to be altered, sometimes significantly, in order to apply to AI. This is often a surprising outcome. We have decades of experience from safety-critical industries such as nuclear power, aviation, maritime transport and chemicals. In HROs (high-reliability organizations), such as hospitals and air-traffic control, the environments have been refined to the point where error rates are vanishingly low. One would presume that there would be established routines, processes and metrics that could easily be transported over to AI. However, AI tends to require new and innovative solutions. This again makes more sense if we see it as having aspects of an art as well as a science. Science is something we (mostly) know how to regulate, but that is not the case for art, where regulation has not been necessary outside of autocratic societies.

Finally, one could actually see the introduction of AI into society over the past few years as something akin to art itself. I don’t mean here the ability of AI to create art, even though the recent Scott Alexander experiment showed that AI art already seems to be roughly on par with human-made art. Rather, I’m thinking about AI itself as art. Ken Liu recently argued in Noema that AI is a new artistic medium. He says we should think of AI as “a new kind of machine that captures something and plays back…the content of thought, or subjectivity”. Liu calls it a “Noematograph” (“analogous to the cinematograph…but for thought”). This is an interesting way to put it (much more interesting than the take from his normally more inventive fellow science-fiction writer Ted Chiang, who famously dismissed AI as a “blurry jpeg of the web”). Seen this way, the sudden burst onto the scene of ChatGPT (which, as you’ll recall, was just an experiment at OpenAI that they didn’t expect much of) three years ago can be seen as the start of a performance art installation.

Performance art (especially its guerrilla art subset – think Banksy) often takes the shape of an artist inserting something into society (such as adding images or sculptures to a sidewalk) to see what reactions or changes it leads to. The large-scale introduction of generative AI into society’s processes can similarly be seen as a piece of performance art, just on a vastly larger scale. This perspective could help make sense of the recent astronomical salaries for AI researchers (e.g. at Meta). Perhaps it is logical after all that AI researchers should be paid hundreds of millions, if we see them as high-performing artists in the same vein as music or movie stars.

A phrase used by many commentators in AI is that “AI is grown, not built”. Like the blind watchmaker of evolution, we are growing something whose inner workings we do not understand. This means unprecedented degrees of freedom in what can result from these efforts. Remembering the “Art” in Artificial Intelligence might be a useful perspective to remind us that it is natural that its adoption takes time, that we need to rethink regulation from the ground up, and that we are all just part of the audience for the OpenAI show.
