The Line: AI And The Future Of Personhood

by Mark R. DeLong

The cover of The Line: AI and the Future of Personhood by James Boyle shows a human head-shaped form in deep blue with a lattice of white lines connecting white dots, like a net or a network. A turquoise background with vertical white lines glows behind the featureless head. In the middle of the image, the title and the author's name are listed in horizontal yellow bars. The typeface is sans serif, with the title spelled in all capital letters.
Cover of The Line: AI and the Future of Personhood by James Boyle. The MIT Press and the Duke University TOME program have released the book using a Creative Commons CC BY-NC-SA license. The book is free to download and to reissue, augment, or alter following the license requirements. It can be downloaded here: https://doi.org/10.7551/mitpress/15408.001.0001.

Duke law professor James Boyle said an article on AI personhood gave him some trouble. When he circulated it over a decade ago, he recalled, “Most of the law professors and judges who read it were polite enough to say the arguments were thought provoking, but they clearly thought the topic was the purest kind of science fiction, idle speculation devoid of any practical implication in our lifetimes.” Written in 2010, the article, “Endowed by Their Creator?: The Future of Constitutional Personhood,” made its way online in March 2011 and appeared in print later that year. Now, thirteen years later, Boyle’s “science fiction” of personhood has shed enough fiction and fantasy to become worryingly plausible, and he has refined and expanded the ideas of that 2011 article into a thoughtful and compelling new book.

In the garb of Large Language Models and Deep Learning, Artificial Intelligence has shocked us with its uncanny fluency, even though we “know” that under the hood the sentences come from clanky computerized mechanisms, a twenty-first-century version of the Mechanical Turk. ChatGPT’s language displays only the utterance of a “stochastic parrot,” to use Emily Bender’s label. Yet, despite knowing there is no GPT’ed self or computerized consciousness, we can’t help but be amazed or even a tad threatened when an amiable ChatGPT, Gemini, or other chatbot responds to our “prompt” with (mostly) clear prose. We might even fantasize that there’s a person in there, somewhere.

Boyle’s new book, The Line: AI and the Future of Personhood (The MIT Press, 2024), forecasts contours of arguments, both legal and moral, that are likely to trace new boundaries of personhood. “There is a line,” he writes in his introduction. “It is a line that separates persons—entities with moral and legal rights—from nonpersons, things, animals, machines—stuff we can buy, sell, or destroy. In moral and legal terms, it is the line between subject and object.”

The line, Boyle claims, will be redrawn. Freshly, probably incessantly, argued. Messily plotted and retraced. Read more »



Monday, October 28, 2024

What Would An AI Treaty Between Countries Look Like?

by Ashutosh Jogalekar

A stamp commemorating the Atoms for Peace program inaugurated by President Dwight Eisenhower. An AI For Peace program awaits (Image credit: International Peace Institute)

The visionary physicist and statesman Niels Bohr once succinctly distilled the essence of science as “the gradual removal of prejudices”. Among these prejudices, few are more prominent than the belief that nation-states can strengthen their security by keeping critical, futuristic technology secret. This belief was dispelled quickly in the Cold War, as nine states with competent scientists and engineers and adequate resources acquired nuclear weapons, leading to the proliferation that Bohr, Robert Oppenheimer, Leo Szilard and other far-seeing scientists had warned political leaders would ensue if the United States and other countries insisted on security through secrecy. Secrecy, instead of keeping destructive nuclear technology confined, led to mutual distrust and an arms race that, octopus-like, enveloped the globe in a suicide belt of bombs which at its peak numbered almost sixty thousand.

But if not secrecy, then how would countries achieve the security they craved? The answer, as it counterintuitively turned out, was to make the world a more open place, by allowing inspections and crafting treaties that reduced the threat of nuclear war. Through hard-won wisdom and sustained action, politicians, military personnel and ordinary citizens and activists realized that the way to safety and security was through mutual conversation and cooperation. That international cooperation, most notably between the United States and the Soviet Union, achieved the extraordinary reduction of the global nuclear stockpile from tens of thousands to about twelve thousand, with the United States and Russia still accounting for more than ninety percent of the total.

A similar potential future of promise on one hand and destruction on the other awaits us with the recent development of another groundbreaking technology: artificial intelligence. Since 2022, AI has shown striking progress, especially through the development of large language models (LLMs), which have demonstrated the ability to distill large volumes of knowledge and reasoning and to interact in natural language. These and other AI models, reliant on mountains of computing power, are posing serious questions about the disruption of entire industries, from scientific research to the creative arts. More troubling is the breathless interest from governments across the world in harnessing AI for military applications, from smarter drone targeting to improved surveillance to better optimization of military hardware supply chains.

Commentators fear that massive interest in AI from the Chinese and American governments in particular, shored up by unprecedented defense budgets and geopolitical gamesmanship, could lead to a new AI arms race akin to the nuclear arms race. Like the nuclear arms race, the AI arms race would involve the steady escalation of each country’s AI capabilities for offense and defense until the world reaches an unstable quasi-equilibrium in which each country could erode or take out critical parts of its adversary’s infrastructure while risking its own. Read more »

Sunday, October 20, 2024

Forget Turing, it’s the Tolkien test for AI that matters

by John Hartley

With CAPTCHA the latest stronghold to be breached, following the heralded sacking of Turing’s temple, I propose a new standard for AI: the Tolkien test.

In this proposed schema, AI capability would be tested against what Andrew Pinsent terms ‘the puzzle of useless creation’. Pinsent, a leading authority on science and religion, asks, concerning Tolkien: “What is the justification for spending so much time creating an entire family of imaginary languages for imaginary peoples in an imaginary world?”

Tolkien’s view of sub-creation framed human creativity as an act of co-creation with God. Just as the divine imagination shaped the world, so too does human imagination—though on a lesser scale—shape its own worlds. This, for Tolkien, was not mere artistic play but a serious, borderline sacred act. Tolkien’s works, Middle-earth in particular, were not an escape from reality, but a way of penetrating reality in the most acute sense.

For Tolkien, fantasia illuminated reality insofar as it tapped into the metaphysical core of things. Artistic creation predicated on the creative imagination opened the individual to an alternate mode of knowledge, deeply intuitive and discursive in nature. Tolkien saw this creative act as deeply rational, not a fanciful indulgence. Echoing the Thomist tradition, he viewed fantasy as a way of refashioning the world that the divine had made, for only through the imagination is the human mind capable of reaching beyond itself.

The role of the creative imagination, then, is not to offer a mere replication of life but to transcend it. Here is the major test for AI, for in transcending replication, art accesses what Tolkien called the “real world”—the world beneath the surface of things. As faith seeks enchantment, so too does art seek a kind of conversion of the imagination, guiding it towards the consolation of eternal memory, what Plato termed ‘anamnesis’. Read more »

Monday, April 22, 2024

The Irises Are Blooming Early This Year

by William Benzon

I live in Hoboken, New Jersey, across the Hudson River from Midtown Manhattan. I have been photographing the irises in the Eleventh Street flower beds since 2011. So far I have uploaded 558 of those photos to Flickr.

I took most of those photos in May or June. But there is one from April 30, 2021, and three from April 29, 2022. I took the following photograph on Monday, April 15, 2024 at 4:54 PM (digital cameras can record the date and time an image was taken). Why so early in April? Random variation in the weather, I suppose.

Irises on the street in Hoboken.

That particular photo is an example of what I like to call the “urban pastoral,” a term I once heard applied to Hart Crane’s The Bridge.

Most of my iris photographs, however, do not include enough context to justify that label. They are just photographs of irises. I took this one on Friday, April 19, 2024 at 3:23 PM. Read more »

Monday, May 29, 2023

Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company that created the car that should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that, due to the car’s ability to act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.

Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps. Read more »

Monday, April 17, 2023

Building a Dyson sphere using ChatGPT

by Ashutosh Jogalekar

Artist’s rendering of a Dyson sphere (Image credit)

In 1960, physicist Freeman Dyson published a paper in the journal Science describing how a technologically advanced civilization would make its presence known. Dyson’s assumption was that whether an advanced civilization signals its intelligence or hides it from us, it would not be able to hide the one thing that’s essential for any civilization to grow – energy. Advanced civilizations would likely try to capture all the energy of their star to grow.

For doing this, borrowing an idea from Olaf Stapledon, Dyson imagined the civilization taking apart a number of the planets and other material in their solar system to build a shell of material that would fully enclose their star, thus capturing far more of its energy than they otherwise could. This energy-capturing sphere would radiate its enormous waste heat out in the infrared spectrum. So one way to detect alien civilizations would be to look for signatures of this infrared radiation in space. Since then these giant spheres – later sometimes imagined as distributed panels rather than single continuous shells – that can be constructed by advanced civilizations to capture their star’s energy have become known as Dyson spheres. They have been featured in science fiction books and TV shows including Star Trek.

I asked the AI engine ChatGPT to build me a hypothetical 2-meter-thick Dyson sphere at a distance of 2 AU (~300 million kilometers). I wanted to see how efficiently ChatGPT harnesses information from the internet to give me specifics and how well its large language model (LLM) understood what I was saying. Read more »
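For a sense of the scale ChatGPT was being asked to reason about, here is a rough back-of-the-envelope sketch in Python (my own illustration, not something ChatGPT produced) estimating how much material a 2-meter-thick shell at 2 AU would require:

    import math

    AU = 1.496e11                # one astronomical unit, in meters
    radius = 2 * AU              # shell radius of 2 AU
    thickness = 2.0              # shell thickness, in meters

    # For a thin shell (thickness << radius), volume is surface area x thickness.
    surface_area = 4 * math.pi * radius ** 2   # m^2
    shell_volume = surface_area * thickness    # m^3

    earth_volume = (4 / 3) * math.pi * 6.371e6 ** 3  # Earth's volume, m^3

    print(f"shell volume: {shell_volume:.2e} m^3")
    print(f"about {shell_volume / earth_volume:,.0f} Earth-volumes of material")

Run as written, the sketch puts the shell at roughly 2.25 × 10^24 cubic meters, on the order of two thousand Earths’ worth of material, which gives a feel for why the question is a stress test of an LLM’s grasp of physical specifics.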

Monday, November 14, 2022

Hyperintelligence: Art, AI, and the Limits of Cognition

by Jochen Szangolies

Deep Blue, at the Computer History Museum in California. Image Credit: James the photographer, CC BY 2.0, via Wikimedia Commons

On May 11, 1997, the chess computer Deep Blue dealt then-world chess champion Garry Kasparov a decisive defeat, marking the first time a computer system was able to defeat the top human chess player in a tournament setting. Shortly afterwards, AI chess superiority firmly established, humanity abandoned the game of chess as having now become pointless. Nowadays, with chess engines on regular home PCs easily outsmarting the best humans to ever play the game, chess has been relegated to a mere historical curiosity and obscure benchmark for computational supremacy over feeble human minds.

Except, of course, that’s not what happened. Human interest in chess has not appreciably waned, despite having had to cede the top spot to silicon-based number-crunchers (and the alleged introduction of novel backdoors to cheating). This echoes a pattern well visible throughout the history of technological development: faster modes of transportation—by car, or even on horseback—have not eliminated human competitive racing; great cranes effortlessly raising tonnes of weight does not keep us from competitively lifting mere hundreds of kilos; the invention of photography has not kept humans from drawing realistic likenesses.

Why, then, worry about AI art? What we value, it seems, is not performance as such, but specifically human performance. We are interested in humans racing or playing each other, even in the face of superior non-human agencies. Should we not expect the same pattern to continue: AI creates art equal to or exceeding that of its human progenitors, to nobody’s great interest? Read more »

Monday, August 1, 2022

Acting Machines

by Fabio Tollon

Fritzchens Fritz / Better Images of AI / GPU shot etched 1 / CC-BY 4.0

Machines can do lots of things. Robotic arms can help make our cars, autonomous cars can drive us around, and robotic vacuums can clean our floors. In all of these cases it seems natural to think that these machines are doing something. Of course, a ‘doing’ is a kind of happening: when something is done, usually something happens, namely, an event. Brushing my teeth, going for a walk, and turning on the light are all things that I do, and when I do them, something happens (events). We might think the same thing about robotic arms, autonomous vehicles, and robotic vacuum cleaners. All these systems seem to be doing something, which then leads to an event occurring. However, in the case of humans, we often think of what we do in terms of agency: when we perform an action, things are not just happening (in a passive sense). Rather, we are acting, we are exercising our agency, we are agents. Can machines be agents? Is there something like artificial agency? Well, as with most things in philosophy, it depends.

Agency, in its human form, is usually about our mental states. It therefore seems natural to think that in order for something or other to be an agent, it should at least in principle have something like mental states (in the form of, for example, beliefs and desires). More than this, in order for an action to be properly attributable to an agent we might insist that the action they perform be caused by their mental states. Thus, we might say that for an entity to be considered an agent it should be possible to explain their behaviour by referring to their mental states. Read more »

Monday, July 25, 2022

Clever Cogs: Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies

Figure 1: The Porphyrian Tree. Detail of a fresco at the Kloster Schussenried. Image credit: modified from Franz Georg Hermann, Public domain, via Wikimedia Commons.

The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, going from the most general to the specific. Based on the categories of Aristotle, it was introduced by the 3rd-century CE logician Porphyry and was a huge influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the lattermost term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not as a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar, a new word has lately risen to prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we’re told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to Nando de Freitas, team lead at DeepMind, ‘the game is over’, with merely the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones. Read more »

Monday, December 20, 2021

Does AI Need Free Will to be held Responsible?

by Fabio Tollon

We have always been a technological species. From the use of basic tools to advanced new forms of social media, we are creatures who do not just live in the world but actively seek to change it. However, we now live in a time where many believe that modern technology, especially advances driven by artificial intelligence (AI), will come to challenge our responsibility practices. Digital nudges can remind you of your mother’s birthday, ToneCheck can make sure you only write nice emails to your boss, and your smart fridge can tell you when you’ve run out of milk. The point is that our lives have always been enmeshed with technology, but our current predicament seems categorically different from anything that has come before. The technologies at our disposal today are not merely tools to various ends, but rather come to bear on our characters by importantly influencing many of our morally laden decisions and actions.

One way in which this might happen is when sufficiently autonomous technology “acts” in such a way as to challenge our usual practices of ascribing responsibility. When an AI system performs an action that results in some event that has moral significance (and where we would normally deem it appropriate to attribute moral responsibility to human agents) it seems natural that people would still have emotional responses in these situations. This is especially true if the AI is perceived as having agential characteristics. If a self-driving car harms a human being, it would be quite natural for bystanders to feel anger at the cause of the harm. However, it seems incoherent to feel angry at a chunk of metal, no matter how autonomous it might be.

Thus, we seem to have two questions here: the first is whether our responses are fitting, given the situation. The second is an empirical question of whether in fact people will behave in this way when confronted with such autonomous systems. Naturally, as a philosopher, I will try not to speculate too much with respect to the second question, and thus what I say here is mostly concerned with the first. Read more »

Monday, August 30, 2021

Irrationality, Artificial Intelligence, and the Climate Crisis

by Fabio Tollon

Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are also consistently horrific. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.

This is all to say, in a very roundabout way, we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov. Read more »

Monday, July 5, 2021

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future.  Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try and apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems which discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is so that it can uncover correlations that are in some cases impossible to see with only human cognitive powers. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes our being responsible for it, in a forward-looking sense, impossible.
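A minimal sketch in Python (illustrative only, with made-up random data) shows why such outputs resist prediction: even a trivial correlation search reports whichever variable pairings happen to clear a threshold, and nobody, the programmer included, can say in advance which pairs those will be.

    import numpy as np

    rng = np.random.default_rng()  # deliberately unseeded: each run differs
    data = rng.normal(size=(200, 6))        # 200 observations of 6 variables
    corr = np.corrcoef(data, rowvar=False)  # 6 x 6 correlation matrix

    # Report every pair whose sample correlation clears the threshold.
    # Which pairs get flagged is a fact about the data, not a choice the
    # programmer made; in that sense the output is unforeseeable at design time.
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if abs(corr[i, j]) > 0.1:
                print(f"variables {i} and {j}: r = {corr[i, j]:+.2f}")

Real systems operate over vastly richer data and models, but the structure of the worry is the same: the designer chooses the procedure, not the findings.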

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »

Monday, June 14, 2021

The ethics of regulating AI: When too much may be bad

by Ashutosh Jogalekar

‘Areopagitica’ was a famous speech written by the poet John Milton for the English Parliament in 1644, arguing for the unlicensed printing of books; it remains one of the most celebrated defenses of freedom of expression. Milton was arguing against a parliamentary ordinance requiring authors to get a license for their works before they could be published. Writing at the height of the English Civil War, Milton was well aware of the power of words to inspire as well as incite. He said,

For books are not absolutely dead things, but do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men…

What Milton was saying is not that books and words can never incite, but that it would be folly to restrict or ban them before they have been published. This appeal to withhold restraint before publication found its way into the United States Constitution and has been a pillar of freedom of expression and the press ever since.

Why was Milton opposed to pre-publication restrictions on books? Not just because he realized that it was a matter of personal liberty, but because he realized that restricting a book’s contents means restricting the very power of the human mind to come up with new ideas. He powerfully reminded Parliament,

Who kills a man kills a reasonable creature, God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were, in the eye. Many a man lives a burden to the earth; but a good book is the precious lifeblood of a master spirit, embalmed and treasured up on purpose to a life beyond life.

Milton saw quite clearly that the problem with limiting publication is in significant part a problem with trying to figure out all the places a book can go. The same problem arises with science. Read more »

Monday, November 16, 2020

The Lobster and the Octopus: Thinking, Rigid and Fluid

by Jochen Szangolies

Fig. 1: The lobster exhibiting its signature move, grasping and cracking the shell of a mussel. Still taken from this video.

Consider the lobster. Rigidly separated from the environment by its shell, the lobster’s world is cleanly divided into ‘self’ and ‘other’, ‘subject’ and ‘object’. One may suspect that it can’t help but conceive of itself as separated from the world, looking at it through its bulbous eyes, probing it with antennae. The outside world impinges on its carapace, like waves breaking against the shore, leaving it to experience only the echo within.

Its signature move is grasping. With its pincers, it is perfectly equipped to take hold of the objects of the world, engage with them, manipulate them, take them apart. Hence, the world must appear to it as a series of discrete, well-separated individual elements—among which is that special object, its body, housing the nuclear ‘I’ within. The lobster embodies the primal scientific impulse of cracking open the world to see what it is made of, that has found its greatest expression in modern-day particle colliders. Consequently, its thought (we may imagine) must be supremely analytical—analysis in the original sense being nothing but the resolution of complex entities into simple constituents.

The lobster, then, is the epitome of the Cartesian, detached, rational self: an island of subjectivity among the waves, engaging with the outside by means of grasping, manipulating, taking apart—analyzing, and perhaps synthesizing the analyzed into new concepts, new creations. It is forever separated from the things themselves, only subject to their effects as they intrude upon its unyielding boundary. Read more »

Monday, July 20, 2020

An Electric Conversation with Hollis Robbins on the Black Sonnet Tradition, Progress, and AI, with Guest Appearances by Marcus Christian and GPT-3

by Bill Benzon

I was hanging out on Twitter the other day, discussing my previous 3QD piece (about Progress Studies) with Hollis Robbins, Dean of Arts and Humanities at Cal State at Sonoma. We were breezing along at 280 characters per message unit when, Wham! right out of the blue the inspiration hit me: How about an interview?

Thus I have the pleasure of bringing another Johns Hopkins graduate into orbit around 3QD. Hollis graduated in ’83; Michael Liss, right around the corner, in ’77; and Abbas Raza, our editor, in ’85; I’m class of ’69. Both of us studied with and were influenced by the late Dick Macksey, a humanist polymath at Hopkins with a fabulous rare book collection. I know Michael took a course with Macksey; Abbas, alas, missed out, but he met Hugh Kenner, who was his girlfriend’s advisor.

Robbins has also been Director of the Africana Studies program at Hopkins and chaired the Department of Humanities at the Peabody Institute. Peabody was an independent school when I took trumpet lessons from Harold Rehrig back in the early 1970s. It started dating Hopkins in 1978 and they got hitched in 1985.

And – you see – another connection. Robbins’ father played trumpet in the jazz band at Rensselaer Polytechnic Institute in the 1950s. A quarter of a century later I was on the faculty there and ventured into the jazz band, which was student run.

It’s fate, I call it, destiny, kismet. [Social networks, fool!]

Robbins has published this and that all over the place, including her own poetry, and she’s worked with Henry Louis “Skip” Gates, Jr. to give us The Annotated Uncle Tom’s Cabin (2006). Not only was Uncle Tom’s Cabin a best seller in its day (mid-19th century), but an enormous swath of popular culture rests on its foundations. If you haven’t yet done so, read it.

She’s here to talk about her most recent book, just out: Forms of Contention: Influence and the African American Sonnet Tradition. Read more »

Monday, February 17, 2020

Context Collapse: A Conversation with Ryan Ruby

by Andrea Scrima

Ryan Ruby is a novelist, translator, critic, and poet who lives, as I do, in Berlin. Back in the summer of 2018, I attended an event at TOP, an art space in Neukölln, where, along with journalist Ben Mauk and translator Anne Posten, his colleagues at the Berlin Writers’ Workshop, he was reading from work in progress. Ryan read from a project he called Context Collapse, which, if I remember correctly, he described as a “poem containing the history of poetry.” But to my ears, it sounded more like an academic paper than a poem, with jargon imported from disciplines such as media theory, economics, and literary criticism. It even contained statistics, citations from previous scholarship, and explanatory footnotes, written in blank verse, which were printed out, shuffled up, and distributed to the audience. Throughout the reading, Ryan would hold up a number on a sheet of paper corresponding to the footnote in the text, and a voice from the audience would read it aloud, creating a spatialized, polyvocal sonic environment as well as, to be perfectly honest, a feeling of information overload. Later, I asked him to send me the excerpt, so I could delve deeper into what he had written at a slower pace than readings typically afford—and I’ve been looking forward to seeing the finished project ever since. Now that it is finished, I am publishing the first suite of excerpts from Context Collapse at Statorec, where I am editor-in-chief.

Andrea Scrima: Ryan, I wonder if it wouldn’t be a good idea to start with a little context. Tell us about the overall sweep of your poem, and how, since you mainly work in prose, you began writing it.

Ryan Ruby: Thank you for this very kind introduction, Andrea! That was a particularly memorable evening for me too, as my partner was nine months pregnant at the time, and I was worried that we’d have to rush to the hospital in the middle of the reading. But you remember quite well: a poem containing the history of poetry, with a tip of the hat to Ezra Pound, of course, who described The Cantos as “a poem containing history.” Read more »

Monday, June 10, 2019

We Have To Talk

by Thomas O’Dwyer

Henri Matisse created many paintings titled ‘The Conversation’. This, from 1912, is of the artist with his wife, Amélie. [Hermitage Museum, St. Petersburg, Russia.]
Alice’s Adventures in Wonderland is not so much a book of fantastic adventures as a book of conversations (and pictures). It’s right there, in the first paragraph: “What is the use of a book,” thought Alice, “without pictures or conversations?” Lewis Carroll and his illustrator John Tenniel delivered just that, a magical masterpiece of conversations and images. A contemporary reviewer said it would “belong to all the generations to come until the language becomes obsolete.” Six generations later, the language shows no sign of obsolescence, but the same cannot be said of conversations if the great oracle at Google is correct. One million hits for “the death of conversation,” it proclaims, listing a gloomy parade of studies and essays stretching back many years.

“Every visit to California convinces me that the digital revolution is over, by which I mean it is won. Everyone is connected. The New York Times has declared the death of conversation,” Simon Jenkins grumbled in The Guardian seven years ago. Is it true, and if it is, who cares? That sounds like the start of an interesting discussion. Is daily conversation of any value and if it fades away, who’s to say the time saved can’t be better used? Robert Frost thought that “half the world is people who have something to say and can’t, and the other half who have nothing to say and keep on saying it.” Read more »

Monday, May 31, 2010

Cerebral Imperialism

The present is where the future comes to die, or more accurately, where an infinite array of possible futures all collapse into one. We live in a present where artificial intelligence hasn't been invented, despite a quarter century of optimistic predictions. John Horgan in Scientific American suggests we're still a long way from developing it (although when it does come it may well be as a sudden leap into existence, a sudden achievement of critical mass). However and whenever (or if ever) it arrives, it's an idea worth discussing today. But, a question: Does this line of research suffer from “cerebral imperialism”?

___________________________________

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James “J” Hughes. One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some Transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

You’re right. A lot of our Transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer – one of many Hughes offers in the interview – but I was going somewhere else: toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Read more »