A Look in the Mirror

MORE LOOPY LOONIES BY ANDREA SCRIMA

For the past ten years, Andrea Scrima has been working on a group of drawings entitled LOOPY LOONIES. The result is a visual vocabulary of splats, speech bubbles, animated letters, and other anthropomorphized figures that take contemporary comic and cartoon images, and the violence embedded in them, as their point of departure. Against the backdrop of world political events of the past several years—war, pandemic, the ever-widening divisions in society—the drawings spell out words such as NO (an expression of dissent), EWWW (an expression of disgust), OWWW (an expression of pain), or EEEK (an expression of fear). The morally critical aspects of Scrima’s literary work take a new turn in her art and vice versa: a loss of words is countered first with visual and then with linguistic means. Out of this encounter, a series of texts has ensued that explores topics such as the abuse of language, the difference between compassion and empathy, and the nature of moral contempt and disgust.

Part I of this project can be seen and read HERE

Part II of this project can be seen and read HERE

Images from the exhibition LOOPY LOONIES at Kunsthaus Graz, Austria, can be seen HERE

 
Andrea Scrima, LOOPY LOONIES. Series of drawings, 35 x 35 each, graphite on paper; edition of postcards with text excerpts. Exhibition view: Kunsthaus Graz, Austria, June 2024.

7. EEEK

Michel de Montaigne’s famous statement—“The thing I fear most is fear”—remains, nearly five hundred years later, thoroughly modern. We think of fear as an illusion, a mental trap of some kind, and believe that conquering it is essential to our personal well-being. Yet in evolutionary terms, fear is an instinctive response grounded in empirical observation and experience. Like pain, its function is self-preservation: it alerts us to the threat of very real dangers, whether immediate or imminent.

Fear can also be experienced as an indistinct existential malaise, deriving from the knowledge that misfortune inevitably happens, that we will one day die, and that prior to our death we may enter a state so weak and vulnerable that we can no longer ward off pain and misery. We think of this more generalized fear as anxiety: we can’t shake the sense that bad things—the vagueness of which renders them all the more frightening—are about to befall us. The world is an inherently insecure and precarious place; according to Thomas Hobbes, “there is no such thing as perpetual Tranquillity of mind, while we live here; because life it selfe is but Motion, and can never be without Desire, nor without Fear” (Leviathan, VI). Day by day, we are confronted with circumstances that justify a response involving some degree of entirely realistic and reasonable dread and apprehension, yet anxiety is classified as a psychological disorder requiring professional therapeutic treatment. Read more »

Monday, November 4, 2024

The Line: AI And The Future Of Personhood

by Mark R. DeLong

Cover of The Line: AI and the Future of Personhood by James Boyle. The MIT Press and the Duke University TOME program have released the book using a Creative Commons CC BY-NC-SA license. The book is free to download and to reissue, augment, or alter following the license requirements. It can be downloaded here: https://doi.org/10.7551/mitpress/15408.001.0001.

Duke law professor James Boyle said an article on AI personhood gave him some trouble. When he circulated it over a decade ago, he recalled, “Most of the law professors and judges who read it were polite enough to say the arguments were thought provoking, but they clearly thought the topic was the purest kind of science fiction, idle speculation devoid of any practical implication in our lifetimes.” Written in 2010, the article, “Endowed by Their Creator?: The Future of Constitutional Personhood,” made its way online in March 2011 and appeared in print later that year. Now, thirteen years later, Boyle’s “science fiction” of personhood has shed enough fiction and fantasy to become worryingly plausible, and Boyle has refined and expanded the ideas of that 2011 article into a thoughtful and compelling new book.

In the garb of Large Language Models and Deep Learning, Artificial Intelligence has shocked us with its uncanny fluency, even though we “know” that under the hood the sentences come from clanky computerized mechanisms, a twenty-first-century version of the Mechanical Turk. ChatGPT’s language displays only the utterance of a “stochastic parrot,” to use Emily Bender’s label. Yet, despite knowing that there is no GPT’ed self or computerized consciousness, we can’t help but be amazed, or even a tad threatened, when an amiable ChatGPT, Gemini, or other chatbot responds to our “prompt” with (mostly) clear prose. We might even fantasize that there’s a person in there, somewhere.

Boyle’s new book, The Line: AI and the Future of Personhood (The MIT Press, 2024), forecasts contours of arguments, both legal and moral, that are likely to trace new boundaries of personhood. “There is a line,” he writes in his introduction. “It is a line that separates persons—entities with moral and legal rights—from nonpersons, things, animals, machines—stuff we can buy, sell, or destroy. In moral and legal terms, it is the line between subject and object.”

The line, Boyle claims, will be redrawn. Freshly, probably incessantly, argued. Messily plotted and retraced. Read more »

Monday, February 6, 2023

Technology: Instrumental, Determining, or Mediating?

by Fabio Tollon

DALL·E generated image with the prompt “Impressionist oil painting disruptive technology”

We take words quite seriously. We also take actions quite seriously. We don’t take things as seriously, but this is changing.

We live in a society where the value of a ‘thing’ is often linked to, or determined by, what it can do or what it can be used for. Underlying this is an assumption about the value of ‘things’: their only value consists in what they can do. Call this instrumentalism. Instrumentalism about technology more generally is an especially intuitive idea. Technological artifacts (‘things’) have no agency of their own and would not exist without humans; they are therefore simply tools that are there to be used by us. Their value lies in how we decide to use them, which opens up the possibility of radical improvement to our lives. On this view, technology is a neutral means with which we can achieve human goals, whether these be good or evil.

In contrast to this instrumentalist view, there is another view of technology, which claims that technology is not neutral at all but instead has a controlling or alienating influence on society. Call this view technological determinism. Such determinism regarding technology is often justified by, well, looking around. The determinist thinks that technological systems take us further away from an ‘authentic’ reality, or that those with power develop and deploy technologies in ways that increase their ability to control others.

So, the instrumentalist view sees some promise in technology, and the determinist not so much. However, there is in fact a third way to think about this issue: mediation theory. Dutch philosopher Peter-Paul Verbeek, drawing on the postphenomenological work of Don Ihde, has proposed a “thingy turn” in our thinking about the philosophy of technology. We can call this the mediation account of technology; it takes us away from both technological determinism and instrumentalism. Here’s how. Read more »

Monday, July 5, 2021

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility, we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future. Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try to apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems that discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is that it can uncover correlations that are in some cases impossible to see with human cognitive powers alone. The threat thus seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.
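To make the unpredictability claim concrete, here is a minimal sketch in Python with NumPy; the variable names and data are invented for illustration and are not drawn from the essay. The programmer fixes only a search procedure in advance; which correlation the system actually reports is settled entirely by data the programmer may never have seen.

```python
import numpy as np

def strongest_correlation(data: np.ndarray, names: list[str]) -> tuple[str, str, float]:
    """Return the most strongly correlated pair of distinct columns."""
    corr = np.corrcoef(data, rowvar=False)   # pairwise Pearson correlations between columns
    np.fill_diagonal(corr, 0.0)              # ignore each variable's correlation with itself
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    return names[i], names[j], float(corr[i, j])

# Hypothetical dataset: five measured variables, 1000 observations.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
data[:, 3] += 0.8 * data[:, 1]               # a latent link the code's author never specified
names = ["age", "income", "steps", "heart_rate", "sleep"]

# The procedure was fixed at programming time; the answer is a fact about the data.
print(strongest_correlation(data, names))
```

Scaled up from this toy correlation hunt to deep learning, the same structure arguably holds: the designer commits to a procedure, while the substantive output depends on inputs that arrive only after deployment.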

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »