Baker/No-Baker, Thinker/No-Thinker

by Mark R. DeLong

An English baker in 1944 pours dough from a very large metal bowl. The bowl is about 2 meters in diameter and is tilting on a rack designed to make moving the bowl and pouring its contents easier.
A Modern Bakery – the Work of Wonder Bakery, Wood Green, London, England, UK, 1944.

“Computerized baking has profoundly changed the balletic physical activities of the shop floor,” Richard Sennett wrote about a Boston bakery he had visited and, much later, revisited. The old days (in the early 1970s) featured “balletic” ethnic Greek bakers who thrust their hands into dough and water and baked by sight and smell. But in the 1990s, Sennett’s Boston bakers “baked” bread with the click of a mouse.1 “Now the bakers make no physical contact with the materials or the loaves of bread, monitoring the entire process via on-screen icons which depict, for instance, images of bread color derived from data about the temperature and baking time of the ovens; few bakers actually see the loaves of bread they make.” He concludes: “As a result of working in this way, the bakers now no longer actually know how to bake bread.” [My emphasis.]

The stark contrast between Sennett’s visits, which I do not think he anticipated when he first visited in the 1970s, is stunning, and at the center of the changes are automation, changes in the ownership of the bakery, and the reorganization of work that resulted. Technological change and organizational change—interlocked and mutually supportive, if not co-determined—reconfigured the meaning of work and the human skills that “baking” required, making the work itself stupefyingly illegible to the workers even though their tasks were less physically demanding than they had been 25 years before.

Sennett’s account of the work of baking focuses on the “personal consequences” of work in the then-new circumstances of the “new capitalism.” But I find the role of technology in the 1990s, when Microsoft Windows was remaking worklife, a particularly important feature of the story. Along with relentless consolidation of business ownership, computer technologies reset the rules of labor processes and re-centered skills. Of course, the story is not new; the interplay of technology and work has long pressed human labor into new forms and configurations, allowing certain freedoms and delights along with new oppressions and horrors. One hopes for more delight than horror.

Artificial intelligence will be no different, except that the panorama of action will shift. The shop floor will certainly see changes, but other changes, less focused on place, will also come about. For the Boston bakers, if they’re still at it, it may mean fewer, if any, clicks on icons, though those who “bake” may still have to empty trash cans of discarded burnt loaves (which Sennett, in the 1990s, considered “apt symbols of what has happened to the art of baking”).

In the past few weeks, researchers at Microsoft and Carnegie Mellon University reported results of a study that laid out some markers of how the use of AI influences “critical thinking” or, as I wish the authors had phrased it, how AI influences those whose job requires thinking critically. Other recent studies have received less attention, though they, too, have zeroed in on the relationship of AI use and people’s critical thinking. This study, coming from a leader of AI, drew special attention. Read more »

Footnotes

  • 1
    Richard Sennett reported about visits he made to the bakery about 25 years apart. The first visits took place when he and Jonathan Cobb were working on The Hidden Injuries of Class (Knopf, 1972), though Sennett and Cobb do not specifically recount the visits in their book. The second visits took place when Sennett was working on The Corrosion of Character: The Personal Consequences of Work in the New Capitalism (W.W. Norton, 1998).

Tuesday, February 11, 2025

What Natural Intelligence Looks Like

by Scott Samuelson

Jusepe de Ribera. Touch. c. 1615, oil on canvas. Norton Simon Museum, Pasadena. Check out the enlarged image here.

When we conjure up what thinking looks like, what tends to leap to mind is an a-ha lightbulb or a brow-furrowed chin scratch—or the sculpture The Thinker. While there’s something deservedly iconic about how Rodin depicts a powerful body redirecting its energies inward, I think that the most insightful depictions of thinking in the history of art are found in the work of Jusepe de Ribera (1591-1652), a.k.a. José de Ribera or Lo Spagnoletto (The Little Spaniard). In a time when we’re alternately fascinated and horrified by what artificial intelligence can do, even to the point of wondering whether AIs can think or be treated like people, it’s worth asking some great Baroque paintings to remind us of what natural intelligence is.

Early in his artistic career, Ribera went to Rome and painted a series on the senses. Only four of the original five paintings survive (we suffer Hearing loss). Touch, the most interesting of the remaining paintings, depicts a blind man feeling the face of a sculpture and launches a crucial theme that runs throughout Ribera’s work.

Let’s try to imagine Ribera in the process of making this painting. He looks at live models, probably at an actual blind man. He studies prints, sketches, fusses with his paints, maybe takes a walk. He sleeps on it. He chats with a friend and lights on an approach: a blind man exploring a sculpture by feeling it. He hurries back to his studio and begins to paint. He notices more about his subject, makes a mistake, fixes it. He holds up a jar of paint—no, that one would be better. Somewhere in this process, I imagine, it dawns on him that he’s doing the same thing as the blind man. (Maybe this is why he decides to put the painting on the table—though the painting is also a powerful visual reminder for us that there are always limits to our engagement with the world.)

Regardless of what actually went through Ribera’s head, the point I’m trying to make has been illustrated—both figuratively and literally—by a contemporary artist. In the 1990s Claude Heath was sick of the ideas of beauty that governed his artistic work. So, he lit on the idea of drawing a plaster cast of his brother’s head—blindfolded. Using small pieces of Blu-tack for orientation, one stuck into the top of the cast, one into his piece of paper, he felt the head’s contours with his left hand and drew corresponding lines with his right. “I tried not to draw what I know, but what I feel . . . I created a triangle, if you like, between me, the object, and the drawing . . . It was a bit of a transcription.” He didn’t look at what he was doing until he was finished. By liberating himself from ideas of beauty, he made beautiful drawings. Read more »

Tuesday, February 4, 2025

Your Doctor Is Like Shakespeare (And That’s A Problem)

by Kyle Munkittrick

When I think about AI, I think about poor Queen Elizabeth.

Imagine being her: you have access to Shakespeare — in his prime! You get to see a private showing of A Midsummer Night’s Dream at the height of the players’ skill and the Bard’s craft. And then… that’s it. You’ve hit the entertainment ceiling for the month. Bored? Your other options include plays by not Shakespeare, your jester, and watching animals fight to the death.

Shakespeare and his audiences were limited not by his genius but by physics. One stage, one performance, one audience at a time. Even at their peak his plays probably reached fewer people in his entire lifetime than a mediocre TikTok does before lunch.

Today we have an embarrassment of entertainment. I’m not saying Dune: Part Two or Succession or Taylor Swift: The Eras Tour are the same as Shakespeare, but I am going to make the bold claim that they are, in fact, better than bear-baiting. My second, and perhaps bolder, claim is that AI is going to let ‘knowledge workers’ scale the way entertainers can today.

Consider this tweet from Amanda Askell, the “philosopher & ethicist trying to make AI be good at Anthropic AI”:

If you can have a single AI employee, you can have thousands of AI employees. And yet the mental model for human-level AI assistants is often “I have a personal helper” rather than “I am now the CEO of a relatively large company”.

Askell is correct (she very often is, especially when you disagree with her). “I am going to be a CEO” is the mental model we should have, but it isn’t the mental model most of us have. Our mental models for human-level AI don’t quite work. There are lots of very practical predictions out there about what scaled intelligence means. I aim to make weirder ones. Read more »

Monday, January 20, 2025

Rather than OpenAI, let’s Open AI

by Ashutosh Jogalekar

In October last year, Charles Oppenheimer and I wrote a piece for Fast Company arguing that the only way to prevent an AI arms race is to open up the system. Drawing on a revolutionary early Cold War proposal for containing the spread of nuclear weapons, the Acheson-Lilienthal report, we argued that the foundational reason security cannot be obtained through secrecy is that science and technology hold no real “secrets” that cannot be discovered if smart scientists and technologists are given enough time to find them. That was certainly the case with the atomic bomb. Even as American politicians and generals boasted that the United States would maintain nuclear supremacy for decades, perhaps forever, Russia responded with its first nuclear weapon merely four years after the end of World War II. Other countries like the United Kingdom, China and France soon followed. The myth of secrecy was shattered.

As if on cue, in December 2024 a new large language model (LLM) named DeepSeek v3 came out of China. DeepSeek v3 is a completely homegrown model, built by a Chinese entrepreneur who was educated in China (that last point, while minor, is not unimportant: China’s best increasingly are no longer required to leave their homeland to excel). The model turned heads immediately because it was competitive with GPT-4 from OpenAI, which many consider the state of the art among LLMs. In fact, DeepSeek v3 is far beyond competitive on critical parameters: GPT-4 used about 1 trillion training parameters, DeepSeek v3 used 671 billion; GPT-4 was trained on about 1 trillion tokens, DeepSeek v3 on almost 15 trillion. Most impressively, DeepSeek v3 cost only $5.58 million to train, while GPT-4 cost about $100 million. That’s a qualitatively significant difference: only the best-funded startups or large tech companies have $100 million to spend on training their AI model, but $5.58 million is well within the reach of many small startups.

Perhaps the biggest difference is that DeepSeek v3 is open-source while GPT-4 is not. The only other open-source model from the United States is Llama, developed by Meta. If this feature of DeepSeek v3 is not ringing massive alarm bells in the heads of American technologists and political leaders, it should be. It’s a reaffirmation of the central point that there are very few secrets in science and technology that cannot be discovered sooner or later by a technologically advanced country.

One might argue that DeepSeek v3 cost a fraction of the best LLMs to train because it stood on the shoulders of these giants, but that’s precisely the point: like other software, LLMs follow the standard rule of precipitously diminishing marginal cost. More importantly, the open-source, low-cost nature of DeepSeek v3 means that China now has the capability of capturing the world LLM market before the United States, as millions of organizations and users make DeepSeek v3 the foundation on which to build their AI. Once again, the quest for security and technological primacy through secrecy will have proved ephemeral, just as it did for nuclear weapons.

What does the entry of DeepSeek v3 indicate in the grand scheme of things? It is important to dispel three myths and answer some key questions. Read more »

Monday, December 9, 2024

Art Or Artifice: Agency And AI Alignment

by Jochen Szangolies

The leader of the Luddites, the (possibly apocryphal) weaver Ned Ludd who is said to have broken two knitting frames in a ‘fit of rage’. Image Credit: Public Domain.

When the Luddites smashed automatic looms in protest, what they saw threatened was their livelihoods: work that had required the attention of a seasoned artisan could now be performed by much lower-skilled workers, making them more easily replaceable and thus without leverage to push back against deteriorating working conditions. Today, many employees find themselves worrying about the prospect of being replaced by AI ‘agents’ capable of both producing and ingesting large volumes of textual data in negligible time.

But the threat of AI is not ‘merely’ that of cheap labor. As depicted in cautionary tales such as The Terminator or The Matrix, many perceive the true risk of AI to be that of its rising up against its creators to dominate or enslave them. While there might be a bit of transference going on here, there is certainly ample reason for caution in contemplating the creation of an intelligence equal to or greater than our own—especially if it might at the same time lack many of our weaknesses, such as requiring a physical body or being tied to a single location.

Besides these two, there is another, less often remarked threat to what singer-songwriter Nick Cave has called ‘the soul of the world’: the displacement of art as a means of taking human strife and crafting meaning from it, in favor of the finished product as commodity. Art is born in the artist’s struggle with the world, and this struggle gives it meaning; the ‘promise’ of generative AI is to leapfrog all of the troublesome uncertainty and strife, leaving us only with the husk of the finished product.

I believe these two issues are deeply connected: to put it bluntly, we will not solve the problem of AI alignment without giving AI a soul. The ‘soul’ I am referring to here is not an ethereal substance or animating power, but simply the ability to take creative action in the world, to originate something genuinely novel—true agency, which is something all current AI lacks. Meaningful choice is a precondition both to originating novel works of art and to becoming an authentic moral subject. AI alignment can’t be solved by a fixed code of moral axioms, simply because action is not determined by rational deduction, but is compelled by the affect of actually experiencing a given situation. Let’s try to unpack this. Read more »

Monday, November 4, 2024

The Line: AI And The Future Of Personhood

by Mark R. DeLong

The cover of The Line: AI and the Future of Personhood by James Boyle shows a human head-shaped form in deep blue with a lattice of white lines connecting white dots, like a net or a network. A turquoise background with vertical white lines glows behind the featureless head. In the middle of the image, the title and the author's name are listed in horizontal yellow bars. The typeface is sans serif, with the title spelled in all capital letters.
Cover of The Line: AI and the Future of Personhood by James Boyle. The MIT Press and the Duke University TOME program have released the book using a Creative Commons CC BY-NC-SA license. The book is free to download and to reissue, augment, or alter following the license requirements. It can be downloaded here: https://doi.org/10.7551/mitpress/15408.001.0001.

Duke law professor James Boyle said an article on AI personhood gave him some trouble. When he circulated it over a decade ago, he recalled, “Most of the law professors and judges who read it were polite enough to say the arguments were thought provoking, but they clearly thought the topic was the purest kind of science fiction, idle speculation devoid of any practical implication in our lifetimes.” Written in 2010, the article, “Endowed by Their Creator?: The Future of Constitutional Personhood,” made its way online in March 2011 and appeared in print later that year. Now, thirteen years later, Boyle’s “science fiction” of personhood has shed enough fiction and fantasy to become worryingly plausible, and Boyle has refined and expanded the ideas of that 2011 article into a thoughtful and compelling new book.

In the garb of Large Language Models and Deep Learning, Artificial Intelligence has shocked us with its uncanny fluency, even though we “know” that under the hood the sentences come from clanky computerized mechanisms, a twenty-first-century version of the Mechanical Turk. ChatGPT’s language displays only the utterance of a “stochastic parrot,” to use Emily Bender’s label. Yet, despite knowing there is no GPT’ed self or computerized consciousness, we can’t help but be amazed or even a tad threatened when an amiable ChatGPT, Gemini, or other chatbot responds to our “prompt” with (mostly) clear prose. We might even fantasize that there’s a person in there, somewhere.

Boyle’s new book, The Line: AI and the Future of Personhood (The MIT Press, 2024) forecasts contours of arguments, both legal and moral, that are likely to trace new boundaries of personhood. “There is a line,” he writes in his introduction. “It is a line that separates persons—entities with moral and legal rights—from nonpersons, things, animals, machines—stuff we can buy, sell, or destroy. In moral and legal terms, it is the line between subject and object.”

The line, Boyle claims, will be redrawn. Freshly, probably incessantly, argued. Messily plotted and retraced. Read more »

Monday, October 28, 2024

What Would An AI Treaty Between Countries Look Like?

by Ashutosh Jogalekar

A stamp commemorating the Atoms for Peace program inaugurated by President Dwight Eisenhower. An AI For Peace program awaits (Image credit: International Peace Institute)

The visionary physicist and statesman Niels Bohr once succinctly distilled the essence of science as “the gradual removal of prejudices”. Among these prejudices, few are more prominent than the belief that nation-states can strengthen their security by keeping critical, futuristic technology secret. This belief was quickly dispelled in the Cold War, as nine states with competent scientists and engineers and adequate resources acquired nuclear weapons, leading to the nuclear proliferation that Bohr, Robert Oppenheimer, Leo Szilard and other far-seeing scientists had warned political leaders would ensue if the United States and other countries insisted on security through secrecy. Secrecy, instead of keeping destructive nuclear technology confined, led to mutual distrust and an arms race that, octopus-like, enveloped the globe in a suicide belt of bombs which at its peak numbered almost sixty thousand.

But if not secrecy, then how would countries achieve the security they craved? The answer, as it counterintuitively turned out, was by making the world a more open place, by allowing inspections and crafting treaties that reduced the threat of nuclear war. Through hard-won wisdom and sustained action, politicians, military personnel and ordinary citizens and activists realized that the way to safety and security was through mutual conversation and cooperation. That international cooperation, most notably between the United States and the Soviet Union, achieved the extraordinary reduction of the global nuclear stockpile from tens of thousands to about twelve thousand, with the United States and Russia still accounting for more than ninety percent.

A similar potential future of promise on one hand and destruction on the other awaits us through the recent development of another groundbreaking technology: artificial intelligence. Since 2022, AI has shown striking progress, especially through the development of large language models (LLMs) which have demonstrated the ability to distill large volumes of knowledge and reasoning and interact in natural language. Accompanied by their reliance on mountains of computing power, these and other AI models are posing serious questions about the possibility of disrupting entire industries, from scientific research to the creative arts. More troubling is the breathless interest from governments across the world in harnessing AI for military applications, from smarter drone targeting to improved surveillance to better military hardware supply chain optimization. 

Commentators fear that massive interest in AI from the Chinese and American governments in particular, shored up by unprecedented defense budgets and geopolitical gamesmanship, could lead to a new AI arms race akin to the nuclear arms race. Like the nuclear arms race, the AI arms race would involve the steady escalation of each country’s AI capabilities for offense and defense until the world reaches an unstable quasi-equilibrium that would enable each country to erode or take out critical parts of their adversary’s infrastructure and risk their own. Read more »

Sunday, October 20, 2024

Forget Turing, it’s the Tolkien test for AI that matters

by John Hartley

With CAPTCHA the latest stronghold to be breached, following the heralded sacking of Turing’s temple, I propose a new standard for AI: the Tolkien test.

In this proposed schema, AI capability would be tested against what Andrew Pinsent terms ‘the puzzle of useless creation’. Pinsent, a leading authority on science and religion, asks, concerning Tolkien: “What is the justification for spending so much time creating an entire family of imaginary languages for imaginary peoples in an imaginary world?”

Tolkien’s view of sub-creation framed human creativity as an act of co-creation with God. Just as the divine imagination shaped the world, so too does human imagination—though on a lesser scale—shape its own worlds. This, for Tolkien, was not mere artistic play but a serious, borderline sacred act. Tolkien’s works, Middle-earth in particular, were not an escape from reality, but a way of penetrating reality in the most acute sense.

For Tolkien, fantasia illuminated reality insofar as it tapped into the metaphysical core of things. Artistic creation predicated on the creative imagination opened the individual to an alternate mode of knowledge, deeply intuitive and discursive in nature. Tolkien saw this creative act as deeply rational, not a fanciful indulgence. Echoing the Thomist tradition, he viewed fantasy as a way of refashioning the world that the divine had made, for only through the imagination is the human mind capable of reaching beyond itself.

The role of the creative imagination, then, is not to offer a mere replication of life but to transcend it. Here is the major test for AI, for in doing so, it accesses what Tolkien called the “real world”—the world beneath the surface of things. As faith seeks enchantment, so too does art seek a kind of conversion of the imagination, guiding it towards the consolation of eternal memory, what Plato termed ‘anamnesis’. Read more »

Monday, April 22, 2024

The Irises Are Blooming Early This Year

by William Benzon

I live in Hoboken, New Jersey, across the Hudson River from Midtown Manhattan. I have been photographing the irises in the Eleventh Street flower beds since 2011. So far I have uploaded 558 of those photos to Flickr.

I took most of those photos in May or June. But there is one from April 30, 2021, and three from April 29, 2022. I took the following photograph on Monday, April 15, 2024 at 4:54 PM (digital cameras record the date and time an image was taken). Why so early in April? Random variation in the weather, I suppose.

Irises on the street in Hoboken.

That particular photo is an example of what I like to call the “urban pastoral,” a term I once heard applied to Hart Crane’s The Bridge.

Most of my iris photographs, however, do not include enough context to justify that label. They are just photographs of irises. I took this one on Friday, April 19, 2024 at 3:23 PM. Read more »

Monday, May 29, 2023

Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company who created the car who should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that, due to the car’s ability to act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.

Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps. Read more »

Monday, April 17, 2023

Building a Dyson sphere using ChatGPT

by Ashutosh Jogalekar

Artist’s rendering of a Dyson sphere (Image credit)

In 1960, physicist Freeman Dyson published a paper in the journal Science describing how a technologically advanced civilization would make its presence known. Dyson’s assumption was that whether an advanced civilization signals its intelligence or hides it from us, it would not be able to hide the one thing that’s essential for any civilization to grow – energy. Advanced civilizations would likely try to capture all the energy of their star to grow.

For doing this, borrowing an idea from Olaf Stapledon, Dyson imagined the civilization taking apart a number of the planets and other material in their solar system to build a shell of material that would fully enclose their star, thus capturing far more of its energy than they otherwise could. This energy-capturing sphere would radiate its enormous waste heat out in the infrared spectrum. So one way to find alien civilizations would be to look for signatures of this infrared radiation in space. Since then these giant spheres – later sometimes imagined as distributed panels rather than single continuous shells – that could be constructed by advanced civilizations to capture their star’s energy have become known as Dyson spheres. They have been featured in science fiction books and TV shows, including Star Trek.

I asked the AI engine ChatGPT to build me a hypothetical 2-meter-thick Dyson sphere at a distance of 2 AU (~300 million kilometers). I wanted to see how efficiently ChatGPT harnesses information from the internet to give me specifics and how well its large language model (LLM) understood what I was saying. Read more »
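The scale of such a request can be checked with simple geometry. Here is a back-of-the-envelope sketch of my own (not the figures ChatGPT produced) of the surface area and material volume such a shell would require; the astronomical-unit and Earth-volume constants are standard reference values:

```python
import math

AU_M = 1.496e11          # meters in one astronomical unit
radius = 2 * AU_M        # sphere radius: 2 AU, per the prompt
thickness = 2.0          # shell thickness in meters, per the prompt

# For a thin spherical shell, volume is approximately the
# sphere's surface area times the shell's thickness
surface_area = 4 * math.pi * radius ** 2     # ~1.1e24 m^2
shell_volume = surface_area * thickness      # ~2.2e24 m^3

earth_volume = 1.083e21  # m^3, total volume of the Earth

print(f"surface area: {surface_area:.3e} m^2")
print(f"shell volume: {shell_volume:.3e} m^3")
print(f"Earths of raw material: {shell_volume / earth_volume:.0f}")
```

Even at only 2 meters thick, the shell would consume roughly two thousand Earths’ worth of volume in raw material, which is why Dyson (and Stapledon before him) imagined dismantling planets to build it.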

Monday, November 14, 2022

Hyperintelligence: Art, AI, and the Limits of Cognition

by Jochen Szangolies

Deep Blue, at the Computer History Museum in California. Image Credit: James the photographer, CC BY 2.0, via Wikimedia Commons

On May 11, 1997, chess computer Deep Blue dealt then-world chess champion Garry Kasparov a decisive defeat, marking the first time a computer system was able to defeat the top human chess player in a tournament setting. Shortly afterwards, AI chess superiority firmly established, humanity abandoned the game of chess as having become pointless. Nowadays, with chess engines on regular home PCs easily outsmarting the best humans ever to play the game, chess has been relegated to a mere historical curiosity and an obscure benchmark for computational supremacy over feeble human minds.

Except, of course, that’s not what happened. Human interest in chess has not appreciably waned, despite having had to cede the top spot to silicon-based number-crunchers (and the alleged introduction of novel backdoors to cheating). This echoes a pattern visible throughout the history of technological development: faster modes of transportation—by car, or even on horseback—have not eliminated competitive human racing; great cranes that effortlessly raise tonnes of weight do not keep us from competitively lifting mere hundreds of kilos; the invention of photography has not kept humans from drawing realistic likenesses.

Why, then, worry about AI art? What we value, it seems, is not performance as such, but specifically human performance. We are interested in humans racing or playing each other, even in the face of superior non-human agencies. Should we not expect the same pattern to continue: AI creates art equal to or exceeding that of its human progenitors, to nobody’s great interest? Read more »

Monday, August 1, 2022

Acting Machines

by Fabio Tollon

Fritzchens Fritz / Better Images of AI / GPU shot etched 1 / CC-BY 4.0

Machines can do lots of things. Robotic arms can help make our cars, autonomous cars can drive us around, and robotic vacuums can clean our floors. In all of these cases it seems natural to think that these machines are doing something. Of course, a ‘doing’ is a kind of happening: when something is done, usually something happens, namely, an event. Brushing my teeth, going for a walk, and turning on the light are all things that I do, and when I do them, something happens (an event). We might think the same about robotic arms, autonomous vehicles, and robotic vacuum cleaners. All these systems seem to be doing something, which then leads to an event occurring. However, in the case of humans, we often think of what we do in terms of agency: when we perform an action, things are not just happening (in a passive sense). Rather, we are acting, we are exercising our agency, we are agents. Can machines be agents? Is there something like artificial agency? Well, as with most things in philosophy, it depends.

Agency, in its human form, is usually about our mental states. It therefore seems natural to think that in order for something or other to be an agent, it should at least in principle have something like mental states (in the form of, for example, beliefs and desires). More than this, in order for an action to be properly attributable to an agent we might insist that the action they perform be caused by their mental states. Thus, we might say that for an entity to be considered an agent it should be possible to explain their behaviour by referring to their mental states. Read more »

Monday, July 25, 2022

Clever Cogs: Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies

Figure 1: The Porphyrian Tree. Detail of a fresco at the Kloster Schussenried. Image credit: modified from Franz Georg Hermann, Public domain, via Wikimedia Commons.

The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, going from the most general to the most specific. Based on the categories of Aristotle, it was introduced by the 3rd-century CE logician Porphyry and was a huge influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the lattermost term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not as a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar, lately a new word has risen in prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we’re told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to Nando de Freitas, team lead at DeepMind, ‘the game is over’, with merely the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones. Read more »

Monday, December 20, 2021

Does AI Need Free Will to be held Responsible?

by Fabio Tollon

We have always been a technological species. From the use of basic tools to advanced new forms of social media, we are creatures who do not just live in the world but actively seek to change it. However, we now live in a time where many believe that modern technology, especially advances driven by artificial intelligence (AI), will come to challenge our responsibility practices. Digital nudges can remind you of your mother’s birthday, ToneCheck can make sure you only write nice emails to your boss, and your smart fridge can tell you when you’ve run out of milk. The point is that our lives have always been enmeshed with technology, but our current predicament seems categorically different from anything that has come before. The technologies at our disposal today are not merely tools to various ends, but rather come to bear on our characters by importantly influencing many of our morally laden decisions and actions.

One way in which this might happen is when sufficiently autonomous technology “acts” in such a way as to challenge our usual practices of ascribing responsibility. When an AI system performs an action that results in some event that has moral significance (and where we would normally deem it appropriate to attribute moral responsibility to human agents) it seems natural that people would still have emotional responses in these situations. This is especially true if the AI is perceived as having agential characteristics. If a self-driving car harms a human being, it would be quite natural for bystanders to feel anger at the cause of the harm. However, it seems incoherent to feel angry at a chunk of metal, no matter how autonomous it might be.

Thus, we seem to have two questions here: the first is whether our responses are fitting, given the situation. The second is an empirical question of whether in fact people will behave in this way when confronted with such autonomous systems. Naturally, as a philosopher, I will try not to speculate too much with respect to the second question, and thus what I say here is mostly concerned with the first. Read more »

Monday, August 30, 2021

Irrationality, Artificial Intelligence, and the Climate Crisis

by Fabio Tollon

Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are consistently so. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.

This is all to say, in a very roundabout way, we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov. Read more »

Monday, July 5, 2021

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future.  Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try and apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems which discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is to uncover correlations that are in some cases impossible to see with only human cognitive powers. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »

Monday, June 14, 2021

The ethics of regulating AI: When too much may be bad

by Ashutosh Jogalekar

‘Areopagitica’ was a famous polemical tract by the poet John Milton, published in 1644 and addressed to the English Parliament, arguing for the unlicensed printing of books. It is one of the most famous defenses of freedom of expression. Milton was arguing against a parliamentary ordinance requiring authors to get a license for their works before they could be published. Writing at the height of the English Civil War, Milton was well aware of the power of words to inspire as well as incite. He said,

For books are not absolutely dead things, but do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men…

What Milton was saying is not that books and words can never incite, but that it would be folly to restrict or ban them before they have been published. This appeal toward withholding restraint before publication found its way into the United States Constitution and has been a pillar of freedom of expression and the press since.

Why was Milton opposed to pre-publication restrictions on books? Not just because he realized that it was a matter of personal liberty, but because he realized that restricting a book’s contents means restricting the very power of the human mind to come up with new ideas. He powerfully reminded Parliament,

Who kills a man kills a reasonable creature, God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were, in the eye. Many a man lives a burden to the earth; but a good book is the precious lifeblood of a master spirit, embalmed and treasured up on purpose to a life beyond life.

Milton saw quite clearly that the problem with limiting publication is in significant part a problem with trying to figure out all the places a book can go. The same problem arises with science. Read more »

Monday, November 16, 2020

The Lobster and the Octopus: Thinking, Rigid and Fluid

by Jochen Szangolies

Fig. 1: The lobster exhibiting its signature move, grasping and cracking the shell of a mussel. Still taken from this video.

Consider the lobster. Rigidly separated from the environment by its shell, the lobster’s world is cleanly divided into ‘self’ and ‘other’, ‘subject’ and ‘object’. One may suspect that it can’t help but conceive of itself as separated from the world, looking at it through its bulbous eyes, probing it with antennae. The outside world impinges on its carapace, like waves breaking against the shore, leaving it to experience only the echo within.

Its signature move is grasping. With its pincers, it is perfectly equipped to take hold of the objects of the world, engage with them, manipulate them, take them apart. Hence, the world must appear to it as a series of discrete, well-separated individual elements—among which is that special object, its body, housing the nuclear ‘I’ within. The lobster embodies the primal scientific impulse of cracking open the world to see what it is made of, that has found its greatest expression in modern-day particle colliders. Consequently, its thought (we may imagine) must be supremely analytical—analysis in the original sense being nothing but the resolution of complex entities into simple constituents.

The lobster, then, is the epitome of the Cartesian, detached, rational self: an island of subjectivity among the waves, engaging with the outside by means of grasping, manipulating, taking apart—analyzing, and perhaps synthesizing the analyzed into new concepts, new creations. It is forever separated from the things themselves, only subject to their effects as they intrude upon its unyielding boundary. Read more »

Monday, July 20, 2020

An Electric Conversation with Hollis Robbins on the Black Sonnet Tradition, Progress, and AI, with Guest Appearances by Marcus Christian and GPT-3

by Bill Benzon

I was hanging out on Twitter the other day, discussing my previous 3QD piece (about Progress Studies) with Hollis Robbins, Dean of Arts and Humanities at Cal State at Sonoma. We were breezing along at 240 characters per message unit when, Wham! right out of the blue the inspiration hit me: How about an interview?

Thus I have the pleasure of bringing another Johns Hopkins graduate into orbit around 3QD. Hollis graduated in ’83; Michael Liss, right around the corner, in ’77; and Abbas Raza, our editor, in ’85; I’m class of ’69. Both of us studied with and were influenced by the late Dick Macksey, a humanist polymath at Hopkins with a fabulous rare book collection. I know Michael took a course with Macksey; Abbas, alas, missed out, but he met Hugh Kenner, who was his girlfriend’s advisor.

Robbins has also been Director of the Africana Studies program at Hopkins and chaired the Department of Humanities at the Peabody Institute. Peabody was an independent school when I took trumpet lessons from Harold Rehrig back in the early 1970s. It started dating Hopkins in 1978 and they got hitched in 1985.

And – you see – another connection. Robbins’ father played trumpet in the jazz band at Rensselaer Polytechnic Institute in the 1950s. A quarter of a century later I was on the faculty there and ventured into the jazz band, which was student run.

It’s fate I call it, destiny, kismet. [Social networks, fool!]

Robbins has published this and that all over the place, including her own poetry, and she’s worked with Henry Louis “Skip” Gates, Jr. to give us The Annotated Uncle Tom’s Cabin (2006). Not only was Uncle Tom’s Cabin a best seller in its day (mid-19th century), but an enormous swath of popular culture rests on its foundations. If you haven’t yet done so, read it.

She’s here to talk about her most recent book, just out: Forms of Contention: Influence and the African American Sonnet Tradition. Read more »