The Line: AI and the Future of Personhood

by Mark R. DeLong

Cover of The Line: AI and the Future of Personhood by James Boyle. The MIT Press and the Duke University TOME program have released the book using a Creative Commons CC BY-NC-SA license. The book is free to download and to reissue, augment, or alter following the license requirements. It can be downloaded here: https://doi.org/10.7551/mitpress/15408.001.0001.

Duke law professor James Boyle said an article on AI personhood gave him some trouble. When he circulated it over a decade ago, he recalled, “Most of the law professors and judges who read it were polite enough to say the arguments were thought provoking, but they clearly thought the topic was the purest kind of science fiction, idle speculation devoid of any practical implication in our lifetimes.” Written in 2010, the article, “Endowed by Their Creator?: The Future of Constitutional Personhood,” made its way online in March 2011 and appeared in print later that year. Now, thirteen years later, Boyle’s “science fiction” of personhood has shed enough fiction and fantasy to become worryingly plausible, and he has refined and expanded the ideas of that 2011 article into a thoughtful and compelling new book.

In the garb of Large Language Models and Deep Learning, Artificial Intelligence has shocked us with its uncanny fluency, even though we “know” that under the hood the sentences come from clanky computerized mechanisms, a twenty-first-century version of the Mechanical Turk. ChatGPT’s language displays only the utterance of a “stochastic parrot,” to use Emily Bender’s label. Yet, despite knowing there is no GPT’ed self or computerized consciousness, we can’t help but be amazed, or even a tad threatened, when an amiable ChatGPT, Gemini, or other chatbot responds to our “prompt” with (mostly) clear prose. We might even fantasize that there’s a person in there, somewhere.

Boyle’s new book, The Line: AI and the Future of Personhood (The MIT Press, 2024), forecasts the contours of arguments, both legal and moral, that are likely to trace new boundaries of personhood. “There is a line,” he writes in his introduction. “It is a line that separates persons—entities with moral and legal rights—from nonpersons, things, animals, machines—stuff we can buy, sell, or destroy. In moral and legal terms, it is the line between subject and object.”

The line, Boyle claims, will be redrawn. Freshly, probably incessantly, argued. Messily plotted and retraced. Read more »



Monday, February 6, 2023

Technology: Instrumental, Determining, or Mediating?

by Fabio Tollon

DALL·E generated image with the prompt “Impressionist oil painting disruptive technology”

We take words quite seriously. We also take actions quite seriously. We don’t take things as seriously, but this is changing.

We live in a society where the value of a ‘thing’ is often linked to, or determined by, what it can do or what it can be used for. Underlying this is an assumption about the value of things: their only value consists in what they can do. Call this instrumentalism. Instrumentalism about technology more generally is an especially intuitive idea. Technological artifacts (‘things’) have no agency of their own and would not exist without humans; they are therefore simply tools to be used by us. Their value lies in how we decide to use them, which opens up the possibility of radically improving our lives. Technology is a neutral means with which we can achieve human goals, whether these be good or evil.

In contrast to this instrumentalist view stands another view of technology, which claims that technology is not neutral at all but instead exerts a controlling or alienating influence on society. Call this view technological determinism. Such determinism regarding technology is often justified by, well, looking around. The determinist thinks that technological systems take us further away from an ‘authentic’ reality, or that those with power develop and deploy technologies in ways that increase their ability to control others.

So, the instrumentalist view sees some promise in technology, and the determinist not so much. However, there is in fact a third way to think about this issue: mediation theory. Dutch philosopher Peter-Paul Verbeek, drawing on the postphenomenological work of Don Ihde, has proposed a “thingy turn” in our thinking about the philosophy of technology. Call this the mediation account of technology; it takes us away from both technological determinism and instrumentalism. Here’s how. Read more »

Monday, July 5, 2021

How Can We Be Responsible for the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future. Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (that is, us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try to apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider machine learning systems that discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out; the entire point of using the system is that it can uncover correlations that are, in some cases, impossible to see with human cognitive powers alone. The threat thus seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.
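
To make the point concrete, here is a minimal sketch (in Python, using scikit-learn on synthetic data; the feature names and the hidden pattern are hypothetical) of how a learned model can encode a correlation that appears nowhere in its source code:

    # The programmer writes only the training procedure; the learned rule
    # depends on the data, so it cannot be read off the code in advance.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))             # three unlabeled features
    y = (X[:, 0] * X[:, 2] > 0).astype(int)   # hidden interaction, never named in the code

    model = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(model, feature_names=["f0", "f1", "f2"]))

The printed splits encode a relationship (f0 and f2 agreeing in sign) that was discovered from the data rather than written by the programmer. Scaled up to models with millions of opaque parameters, this is the foreseeability problem at issue.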

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »