The Problem Optimists and Pessimists Can Have in Common

by Ken MacVey

As a lawyer I know too well that lawyers are infamous for looking for the dark lining in a silver cloud. That outlook goes with the territory of trying to look for legal pitfalls and hidden trap doors. That’s part of what lawyers do—trying to protect their clients from legal liability and from unexpected detours and disasters that can be avoided by careful drafting or strategizing. That doesn’t mean lawyers are pessimists, but it is sometimes taken that way.

This takes me to the glass half-empty/half-full trope. I have a different take on it. I think with a little reframing it tells a different story, illustrating a problem optimists and pessimists can share, and what to do about it. Here is the reframing:

There is a glass of water filled halfway to the middle.

The pessimist looks at the glass and says it is half empty. The pessimist goes on to say this is not enough, it won’t get any better, it might get worse with evaporation, maybe the water is contaminated, and we can’t do anything about it.

The result: nothing gets done. The glass stays filled halfway to the middle.

The optimist looks at the glass and says it is half full. The optimist goes on to say everything is good, we should count our blessings for having this nice crystal-clear water, everything is going to be great especially when we’re thirsty, there is no need to do anything, everything will take care of itself.

The result: nothing gets done. The glass stays filled halfway to the middle.

The activist looks at the glass and says: Fill it up!

The result: the glass gets filled up.

You see, the problem optimists and pessimists can share is that both may rationalize doing nothing when something could get done. Read more »

Wednesday, March 19, 2025

Imagining, for Grown-Ups: On Maintenance

by Lei Wang

I have often been envious of how characters in stories don’t seem to need to do dishes or laundry or buy groceries, except when it serves their story, like a meet-cute at the farmer’s market or perhaps a juicy conflict between two in-laws over the most efficient way to load the dishwasher. Otherwise, in novels and TV but especially in short stories and movies, the refrigerator fills itself and even eating is an afterthought: food is for pleasure, not necessity.

The boring things of life are given the ax, or no one would watch; imagine a maximalist reality show, each episode 24 hours long, corresponding exactly to a day in someone’s life, which you play alongside your own life, minute by minute. Even if it were your favorite celebrity, would you really want to accompany them as they sleep for seven hours? I suppose there are such dedicated viewers out there and also such dedicated livestreams, like Firefox’s red panda webcams. I remember years ago coming across a crowdfunding campaign by a European twenty-something who decided to reduce his carbon footprint by sleeping or otherwise staying in his room all day. He was asking for money in order to do nothing, to contribute as little to the world as possible, and to prove it via the most boring livestream. If I remember correctly, he had quite a few patrons—if only for the novelty of the idea.

What are the boring bits of life? Sleep, except for dreams. Chores. The things we have to do, and the things we do again and again and again. Life seems to be a constant battle against entropy, and we are losing. “I don’t identify as transgender… I identify as tired,” said Hannah Gadsby in the comedy special Nanette. Don’t we all. This was the true punishment of Sisyphus: not the moving of the boulder or even the futility of it, but the day-in, day-outness of it all. We just showered yesterday and our hair is greasy already. The kitchen sink was empty a moment ago, but now there are no forks. The dog needs to be walked, again. Why can’t there be a pet that truly eats one’s garbage?

In The Quotidian Mysteries, a book on the mystical aspects of laundry and other domestic tasks, Kathleen Norris writes of how she found her way back to Catholicism through an Irish-American wedding in which, after the ceremony was over, she watched the priest doing dishes. “In that big, fancy church, after all of the dress-up and the formalities of the wedding mass, homage was being paid to the lowly truth that we human beings must wash the dishes after we eat and drink,” she wrote. “The chalice, which had held the very blood of Christ, was no exception. And I found it enormously comforting to see the priest as a kind of daft housewife, overdressed for the kitchen, in bulky robes, puttering about the altar, washing up after having served so great a meal to so many people.” She couldn’t quite understand the service, but she could understand eating, drinking, and housework.

The sacred is something “set apart” from the ordinary; something is sacred because it is not meant to be ordinary. But to treat an ordinary task as extraordinary is also to stand out from the ordinary. Read more »

From Karachi with Ink

by Claire Chambers

One day I went to my workplace in York, northern England, where I checked my pigeonhole as usual. An airmail letter lay in the metal box. Its postmarks were from Pakistan, and a man’s name and a Karachi address were scrawled on the back of the envelope. Because of the bimonthly columns I write for Dawn newspaper, I sometimes get email feedback. Occasionally people send me their books for review. But this slim package surprised me – especially when I broke the seal and pulled out four pages of Urdu handwriting.

It had been years, probably over a decade, since I received a personal letter. My teenage sons and twenty-something Urdu teacher, Fareeha, claim they’ve never had one, apart from official dispatches. In this digital dunyā, what a privilege it is to get a letter, and that too a piece of mail which had travelled a long way.

This reaching out across distance and cultures reminded me of Saadat Hasan Manto’s ‘Letters to Uncle Sam’ – sharp, satirical notes dissecting international politics and American power. Balram’s letters to the Chinese Premier in The White Tiger also came to mind, with Aravind Adiga’s character telling Wen Jiabao about Indian corruption and inequality. My situation was different – I was in the UK, not the US or China – and the letter writer’s tone proved to be sincere rather than arch or ironic. Yet the impulse to communicate across borders felt equally urgent. In honour of Manto’s ‘Letters’ and to protect my correspondent’s anonymity, in this blog post I will call him Sami. I surmise, though, that he is much closer to my nephew’s age than my uncle’s.

I gazed at the pages, drawn in by the Urdu script and feeling enchanted that someone had created these words with such dexterity. I exulted even more at the newfound reading skills which allowed me to decode bits and pieces.

I wasn’t linguistically equipped to easily decipher the whole letter. The first few lines were straightforward enough. His postal address was repeated, this time in Nastaliq. However, tellingly for this bibliophile, he did not provide any email or phone details. A standard polite greeting was given, and Sami went on to introduce himself. He explained he was a food science graduate but that his گھر کا ماحول – ghar kā maḥol or home environment – had instilled in him a love of literature. Here he name-checked two authors. The first was a writer whose detective novels I had been harbouring an ambition of reading in the original Urdu: Ibn-e-Safi. The other was new to me but my teacher Fareeha later told me he is brilliant: Ishtiaq Ahmed, who wrote spy novels as well as crime fiction. Sami had devoured all the thrillers by these novelists, both of whom are no longer alive. After these childhood peregrinations, he told me, his reading expedition had continued uninterrupted. Read more »

Tuesday, March 18, 2025

How to Avoid the Eugenics Wars: Principles for Enhancement Alignment

by Kyle Munkittrick

Gemini’s “60s Psychedelic Poster of The Culture”

We’re on the cusp of The Culture. This Iain M. Banks series has recently replaced Star Trek as the lodestar for The Future We Want. Why? Because it shows us how good the future can be with AI. Critically, both fictions also offer key lessons about the threat and promise of human enhancement: it is coming, it could be amazing, but first it will be contentious, and if we’re not ready, we’ll suffer its worst harms and get few of its best benefits. We want The Culture, and to get there we need to take Star Trek seriously.

In his excellent essay for Arena, Dean Ball asked, in essence, “Where are the clearly articulated benefits of a world with AI? Why is it worth all this risk?” The answer to Ball is, “Go read The Culture. Start with The Player of Games.” Imagining a peaceful, prosperous, and pluralistic post-scarcity utopia that is that way because it is run by benevolent ASI (called ‘Minds’) is difficult. Imagining a world where AI has ‘solved’ biological and medical science so completely that its citizens are fundamentally post-human is even more so. The vision of The Culture is so grand and so alien that it takes a series of novels, following the daily lives of the protagonists, to begin to grasp just how incredible the future with AI could be.

In Star Trek, however, though we can reach the stars, medical technology seems not much advanced beyond that of the 20th century. It’s not due to a limitation of science, but of society. Before reaching the stars, humans endured the Eugenics Wars—a global conflict arising from first the reckless pursuit of, then catastrophic backlash to and banning of, human enhancement technologies. Think, ‘The Butlerian Jihad, but for biology.’

Both pieces of fiction are important because of what is very likely about to happen. In his 15,000-word manifesto Machines of Loving Grace, Anthropic CEO Dario Amodei bolds one, and only one, paragraph:

“[M]y basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.”

Let’s call this ~60% year-over-year acceleration of progress the “BOOM (Biological Orders of Magnitude) Decade” (2025-2035). To ‘feel the BOOM’, imagine going from discovering antibiotics (~1930) to all the medical technology we have today (IVF, MRI, GLP-1s) by 1940. People would freak out, to put it mildly. Society needs frameworks and mental models to be able to absorb and adjust to change at that speed. Just as AI alignment principles have guided AI development, we urgently need enhancement alignment principles to guide the coming biological revolution. Without AI alignment, we risk creating Skynet or the Paperclip Optimizer; without enhancement alignment, we risk the Eugenics Wars—either through reckless implementation or through panicked prohibition that prevents beneficial technologies.
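One way to back out that ~60% figure (my reading; Amodei does not show the arithmetic): compressing 50-100 years of progress into 5-10 years means research runs roughly ten times faster than baseline, and reaching a tenfold speed-up by steady compounding over the aggressive five-year end of that range requires

$(1 + r)^5 = 10 \quad\Rightarrow\quad r = 10^{1/5} - 1 \approx 0.585 \approx 60\%$

Over the gentler ten-year window the required rate falls to $10^{1/10} - 1 \approx 26\%$ per year, so the ~60% figure is best read as the upper bound of the range.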

In preparation for the BOOM, I propose a set of principles for human enhancement technologies (HETs) as a starting point for the conversation and as fodder for consideration by both the humans and the AI who will be building these biological and medical technologies. Read more »

Close Reading Edna St. Vincent Millay

by Ed Simon

Impossible to know which of those perennial subjects – love or death – poetry considers more, but certainly verse can be particularly charged when it combines the two. Love and death, the only topics worthy of serious contemplation, where anything else worth orienting the mind towards is merely an amalgamation of that pair. Maybe that seems counterintuitive, or worse still mere sophistry, to claim that love and death are ineffable in the manner of God, for after all there are clear definitions of love and death, and furthermore everyone has an experience of them. But it’s their universality that makes them ineffable, because both are defined by paradox. Death, after all, is the one commonality to all of life, the only thing that absolutely every person will experience, but also that which nobody currently alive can say anything definitive about. A paradox, death. Love, though sadly not as universal as death, would seem to be less paradoxical, and yet genuine love is marked by a desire for personal extinction (not unlike death), a submerging of the self into the being of another. An arithmetic not of addition, but of multiplication. Of one and one equaling one.

In American modernist Edna St. Vincent Millay’s affecting “Dirge Without Music” – a four-quatrain poem, first published in 1928, whose free verse lines carry an alternating rhyme scheme that evokes a ballad – death is read in light of love in a manner that provides a glimpse of comprehension of those things which are ineffable. Her poem explores the tensions in love and death, not least of all in its title, which is a paradox. A dirge, by definition, is composed of music, so that to have a dirge without music is nonsensical, like a sculpture without shape or a story without narrative. Yet that’s also precisely what death is: an experience of life – perhaps the sine qua non of life which gives it meaning – but also something that can’t be experienced in life, since it marks the termination of existence. That particular aspect of death has long been remarked upon, a favored argument of the Stoics and Epicureans in ancient Greece, meant as a comfort regarding the fear of extinction. Such an argument maintains that if eternity follows death then the latter isn’t really death, and if death is marked by the obliteration of the self then we never really experience it, since experience requires a self. All well and good in terms of the logic, but not quite adequate to the phenomenological question of what death feels like, for what does it mean to experience something defined by an inability to feel (as if listening to a dirge without music)? Read more »

Monday, March 17, 2025

Fine Tuning Against the Multiverse?

by Tim Sommers

In “Calculating God,” Robert J. Sawyer’s first-contact novel, the aliens who arrive on Earth believe in the existence of God – without being particularly religious. Why?

There are certain physical forces, they explain, that make life in our universe possible only if they are tuned to very specific values. Which they are. We are here, after all. But there’s no physical reason that the values need to be set the way they are. The aliens have concluded that someone, or something, set the values of these parameters at the beginning of the universe to ensure that life would come into existence. That something they call God.

Here’s a much earlier, very different version of this argument. If you were hiking through the woods and you picked up a shiny object that turned out to be a small stone, it would probably not occur to you that it might have been made by someone. If it turned out to be a watch, however, you would immediately conclude that it had been intentionally created. So, is the universe more like a stone or a watch?

This argument from design was an especially powerful argument for the existence of God when very little was known about biology. The complexity of living things puts watches to shame. But then Darwin came along and used evolution to explain how such diversity, complexity, and apparent design could come about without a designer.

Just when the argument that the complexity of our world could only be explained by God seemed lost, a new, purely physical reason to think that the universe was designed appeared. The one the aliens embrace.

“A striking phenomenon uncovered by contemporary physics,” Kenneth Boyce and Philip Swenson write in their forthcoming paper “The Fine-Tuning Argument Against the Multiverse” (Philosophy Quarterly), “is that many of the fundamental constants of nature appear to have arbitrary values that happen to fall within extremely narrow life-permitting windows.” Read more »

Why even a “superhuman AI” won’t destroy humanity

by Ashutosh Jogalekar


AGI is in the air. Some think it’s right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it refers to a superhuman AI which embodies all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more stock in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book “Out of Control” was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will “take over humanity” are overblown. He casts these reasons in the form of misconceptions about AI which he then proceeds to question and dismantle. The whole thing should be dusted off and is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth they are talking about some kind of overarching single intelligence that’s good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in “AI did X, so imagine what it would be like when it could do Y”; lost is the realization that X and Y could refer to very different dimensions of intelligence, or significantly different ones in any case. As Kelly succinctly puts it, “Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions.” Even humans are not good at optimizing along every single one of these dimensions, so it’s unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that’s good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Read more »

Poem by Jim Culleny

1/1/26 Typhoon USA Aftermath Premonition

—This is a revised version of a poem I wrote when a newly insane river running through our town one day came through, almost overtopping bridges, spilling into streets with wild abandon during Hurricane Irene.

One year back the river tore through
on its fall to the sea, courts and laws
slid beneath a steal (sic) bridge on Maga swells,
small shops (especially) pirouetted off their base,
rushed downstream to lodge against
the remains of a constitution compromised
by Magarussian jackhammers of mad winds.

Cellars of The People filled with mendacious mud
and whatever the river had dredged, whatever it had
sucked from cesspools of Oligarchs, whatever it had
ripped from the gardens of the free, of their
summer afternoons, of their homes and barns,
earth and offal left along its banks, in basements,
in streets, in the vacant classrooms of children before
books were banned, tongues of free speech
caught in eddies of choreographed confusion.

A cornfield, tall enough the day before when its
cobbed yield might have grinned yellow from humble
plates until a typhoon of raw, privileged intent laid it low,
its proud ranks of green stalks now laid flat from
sea to shining sea by Potomac’s winnowing rake, all now
lay supine as a woman or man, after a sweet or savage life,
lie still before the sweep of a polluted sea.
Jim Culleny
modified, 3/14/25


Sunday, March 16, 2025

The Rise and Fall of the Mind-Body Problem

by Katalin Balog

The mind-body problem in its current form – an inquiry into how the mind fits into the physical universe – was formulated by René Descartes in the 17th century. In his Meditations, a thin volume of philosophy that had a monumental effect on all later Western philosophy, he famously argued that it is possible to conceive of a mind without extension – a disembodied soul – and a body without thought – a mindless zombie. And since whatever can be clearly and distinctly conceived can be brought about by God, it is possible for there to be a mind without body and a body without mind. He concludes, based on observations about our concept of mind and body, that they are really distinct. His position is called dualism, the view that the world has fundamental ingredients that are not physical. Of course, dualism was not original to Descartes; from Plato to the Doctors of the Church and ordinary folks, most people up until the Enlightenment advocated it. But his way of arriving at it – examining what he can “clearly and distinctly” conceive – set the template for all subsequent discussions. Nota bene, he had more empirically based arguments as well; for example, he thought nothing physical could produce something so open-ended and creative as human speech. He would have been very surprised by ChatGPT.

According to Descartes, the mind, or soul, is an exalted thing: it is non-spatial, immaterial, immortal, and entirely free in its volition. He also thought – reasonably enough – that it interacts with the body, an extended, spatially located thing. In Descartes’ view, there are sui generis mental causes; purely physical causes cannot explain actions. Descartes held that only the quantity of motion is strictly physically determined, not its directionality. In Leibniz’s telling of the story in The Monadology, Descartes believed that the mind nudges moving particles of matter in the pineal gland, causing them to swerve without losing speed, like a car going around a corner. He had a real, substantive disagreement with his contemporary Hobbes, a materialist who thought that there were only physical causes.

Developments in science have had an enormous impact on this debate. The gist is that advances in the physical and biological sciences ultimately ruled out Descartes’s idea of the mind as a sui generis force – though not necessarily other forms of dualism, which we will talk about shortly. Not only is it hard to comprehend Descartes’ idea that a non-material mind can move a material body, but such sui generis non-physical causes are ruled out by what we know about natural processes. Read more »

A Look in the Mirror

MORE LOOPY LOONIES BY ANDREA SCRIMA

For the past ten years, Andrea Scrima has been working on a group of drawings entitled LOOPY LOONIES. The result is a visual vocabulary of splats, speech bubbles, animated letters, and other anthropomorphized figures that take contemporary comic and cartoon images and the violence embedded in them as their point of departure. Against the backdrop of world political events of the past several years—war, pandemic, the ever-widening divisions in society—the drawings spell out words such as NO (an expression of dissent), EWWW (an expression of disgust), OWWW (an expression of pain), or EEEK (an expression of fear). The morally critical aspects of Scrima’s literary work take a new turn in her art and vice versa: a loss of words is countered first with visual and then with linguistic means. Out of this encounter, a series of texts has ensued, exploring topics such as the abuse of language, the difference between compassion and empathy, and the nature of moral contempt and disgust.

Part I of this project can be seen and read HERE

Part II of this project can be seen and read HERE

Images from the exhibition LOOPY LOONIES at Kunsthaus Graz, Austria, can be seen HERE

 
Andrea Scrima, LOOPY LOONIES. Series of drawings 35 x 35 each, graphite on paper; edition of postcards with text excerpts. Exhibition view: Kunsthaus Graz, Austria, June 2024.

7. EEEK

Michel de Montaigne’s famous statement—“The thing I fear most is fear”—remains, nearly five hundred years later, thoroughly modern. We think of fear as an illusion, a mental trap of some kind, and believe that conquering it is essential to our personal well-being. Yet in evolutionary terms, fear is an instinctive response grounded in empirical observation and experience. Like pain, its function is self-preservation: it alerts us to the threat of very real dangers, whether immediate or imminent.

Fear can also be experienced as an indistinct existential malaise, deriving from the knowledge that misfortune inevitably happens, that we will one day die, and that prior to our death we may enter a state so weak and vulnerable that we can no longer ward off pain and misery. We think of this more generalized fear as anxiety: we can’t shake the sense that bad things—the vagueness of which renders them all the more frightening—are about to befall us. The world is an inherently insecure and precarious place; according to Thomas Hobbes, “there is no such thing as perpetual Tranquillity of mind, while we live here; because life it selfe is but Motion, and can never be without Desire, nor without Fear” (Leviathan, VI). Day by day, we are confronted with circumstances that justify a response involving some degree of entirely realistic and reasonable dread and apprehension, yet anxiety is classified as a psychological disorder requiring professional therapeutic treatment. Read more »

Friday, March 14, 2025

Salting the Earth and the Vandalism of America

by Mark Harvey

Elon Musk

To tear something down is infinitely easier than building something of benefit or beauty. Constructing an elegant house that will last through the ages can take years. From dream to design to approvals to construction, it takes gobs of money, skilled designers, and dedicated builders. When you see that handsome house perched just so on a hill, with its cedar siding, cased windows, and tidy balconies, know that dozens of men and women labored and strove to get it just right—hundreds of mornings planning, sawing, hammering, painting, plumbing, and polishing.

But give me a forty-ton excavator and a couple of dump trucks, and I will demolish that house and clear the site in one day. To a person of evil intent and ill mind, tearing down so much effort in so little time will be a thrill.

That’s what makes vandalism so attractive to people with festering resentments. Destroying something precious to someone else in the dark of the night is the sort of sugar rush that thrills degenerates.

When the richest man in the world takes hammer and tongs to our government and delights in tearing down agencies central to our economy, farms, public health, environment, and foreign policy—when a man-child of his accidental consequence recklessly fires thousands of public employees without knowing the first thing about government, it’s time for anyone who does love this country to stand up and call out a flat NO!

Watching Elon Musk in his strange gothic uniform of black jackets, t-shirts, and ball caps, and reading his inane tweets sprinkled with juvenile humor, brings to mind a deeply insecure adolescent. And yet, that puffed-up adolescent is tearing apart the lives of thousands of Americans directly, and millions of people worldwide as a consequence. Read more »

What Becomes Of The Femboy?

by Mike Bendzela

In a kindergarten classroom in the mid-1960s, a kid named Mikey steered clear of the boys stacking large toy blocks on top of one another and knocking them down again–so obnoxious–and instead went and sat at the table of girls making beads out of salt dough and stringing them together on a thread. These girls were not averse to tasting the salt dough and smacking their lips in disgust. The teacher had wisely settled on salt dough because she knew it wouldn’t poison the students should they eat it. At least the girls were smart and funny and didn’t continually knock each other to the floor.

Mikey preferred these sober, artsy activities–making necklaces of salt dough beads, pressing hand prints into soft clay disks, tracing the profiles of silhouetted heads projected via lamp light onto sheets of construction paper–over the rough-and-tumble of block stacking, fat-ball tossing, and floor hockey, because–well, he just did. Thus developed the central themes of his boyhood–hates sports; likes art and language; hangs out with the girls.

Throughout grade school, gym class gave him a terrible knot in his stomach and he longed to be elsewhere, a disposition cemented into place by an incident during a game of “battle ball,” in which boys stood at opposite walls and hurled large pneumatic balls at each other for God knows what reason, and a ball smacked him square in the face and knocked his glasses off his head.

The glasses allowed him to read the teacher’s flowing, cursive handwriting on the chalkboard, reading which he was good at, and he forever yearned to be allowed to pick up a long piece of chalk and tap-scratch letters onto the board himself. This desire was at long last granted, and soon he was permitted whole boxes of colored chalks to use, and the teacher allowed him and some girl friends to cover the entire chalkboard with decorative chalk drawings. Read more »

Thursday, March 13, 2025

A Tale of Two Doges: An Uncertain History

by Alizah Holstein

Jean LeClerc, Doge Enrico Dandolo recruiting for the crusade (1621)

When a statement was issued last November stipulating that a new U.S. government department known by the acronym DOGE was to be formed, the medievalist in me snapped to attention. To me, “doge” was a word with distinctly medieval meaning. But hardly anywhere was this meaning being explored in the context of DOGE.

For anyone who has been living in a glacial crevasse for the past few months, DOGE stands for Department of Government Efficiency. Two phenomena are purported to have inspired the DOGE acronym. One: the cryptocurrency Dogecoin. And two: the “doge” internet meme featuring photos of a Japanese Shiba Inu overwritten with pidgin English text that played on a misspelling of the word “dog.” Both were much beloved by Elon Musk, who has to all appearances been heading DOGE.

But there’s a third association worth exploring as we consider the implications of this (unofficial) government department. Stated simply, a doge was the chief magistrate of the medieval Italian maritime republics. Venice had doges for over a thousand years, from 700 until 1797, when the Napoleonic Wars brought the republic to its end. Genoa, for a shorter time, had them as well, and Pisa, too, counts a single doge in its historical register. What in Italian is doge is in Venetian doxe, and both derive from dux, the Latin word for leader and a cognate of duke.

The possible association of DOGE with a medieval magistrate has not been widely explored, but I do think it matters. Why? Because medievalism feels peculiarly salient right now in culture and politics alike. Last month at London Fashion Week, models strutted in chain mail and armor, while one carried a decorative sword.[1] Castlecore, which offers “a nostalgic ideal of luxury and wealth,” is trending on social, and romantasy sells.[2] Medievalism has never been very far from the American imagination, but in this moment it feels top of mind. Read more »

Should AI Speak for the Dying?

by Muhammad Aurangzeb Ahmad

Everyone grieves in their own way. For me, it meant sifting through the tangible remnants of my father’s life—everything he had written or signed. I endeavored to collect every fragment of his writing, no matter how profound or mundane – be it verses from the Quran or a simple grocery list. I wanted each text to be a reminder that I could revisit in the future. Among this cache was the last document he ever signed: a do-not-resuscitate directive. I have often wondered how his wishes might have evolved over the course of his life—especially since he had a heart attack when I was only six years old. Had the decision rested upon us, his children, what path would we have chosen? I do not have definitive answers, but pondering this dilemma has given me questions that I now revisit, years later, in the form of improving ethical decision-making in end-of-life scenarios. To illustrate, consider Alice, a fifty-year-old woman who has had an accident and is incapacitated. The physicians need to decide whether or not to resuscitate her. Ideally there is an advance directive, a legal document that outlines her preferences for medical care in situations where she is unable to communicate her decisions due to incapacity. Alternatively, there may be a proxy directive, which usually designates another person, called a surrogate, to make medical decisions on behalf of the patient.

Given the severity of these questions, would it not be helpful if there were a way to inform or augment decisions with dispassionate agents who could weigh competing pieces of information without emotions getting in the way? Artificial Intelligence may help, or at least provide feedback that could be used as a moral crutch. It also has practical implications, as only 20-30 percent of the general American population has some sort of advance directive. The idea behind AI surrogates is that given sufficiently detailed data about a person, an AI can act as a surrogate in case the person is incapacitated, making the decisions that the person would have made were they not incapacitated. However, even setting aside the question of what data may be needed, data is not always a perfect reflection of reality. Ideally this data is meant to capture a person’s affordances and preferences, with the assumption that they are implicit in the data. This may not always be true, as people evolve, change their preferences, and update their worldviews. Consider a scenario where an individual provided an advance directive in 2015, yet later became a Jehovah’s Witness—a faith that disavows medical procedures involving blood transfusions. Despite this profound shift in beliefs, the existing directive would still reflect past preferences rather than current convictions. This dilemma extends to AI-trained models, often referred to as the problem of stale data. If conversational data from a patient is used to train an AI model, yet the patient’s beliefs evolve over time, data drift ensures that the AI’s knowledge becomes outdated, failing to reflect the individual’s current values and convictions.

Many of the challenges inherent in AI, such as bias, transparency, and explainability, are equally relevant in the development of AI surrogates. Read more »

Wednesday, March 12, 2025

The Limits of American Exceptionalism

by Bill Murray

I.

A hundred years ago two battered and beleaguered old men, one an Italian prisoner, the other taken to wandering Irish bogs, arrived at the same fateful truth: the world around them was collapsing.

Antonio Gramsci, Marxist theorist, imprisoned member of the Italian parliament, wrote from his cell that “the old world is dying, and the new cannot be born; in this interregnum, a great variety of morbid symptoms appear.”

William Butler Yeats, a sort of mystic horrified by violence and uncertainty across his Irish homeland, saw the same future. From a cottage in County Galway he wrote, “things fall apart; the centre cannot hold.” What was worse, “the best lack all conviction, while the worst are full of passionate intensity.”

One a political theorist dissecting history’s brutal transitions, the other a poet divining the chaos of human nature, both foresaw the same truth—that societies don’t glide from one era to the next. Sufficiently stressed, they shatter, and from the wreckage new orders struggle to emerge. A hundred years on, Gramsci and Yeats’s alarmed realizations read less like prophecy than real-time commentary.

Today’s Americans grew up confident that they stood outside history’s cycles of rise and fall. Battered but buoyed by victory in World War II (and recognizing an opportunity), the United States built a powerful international system meant to foster global stability and economic growth while, naturally, serving its own interests. And, in a world laid waste by war, so it did.

For the longest time this system forestalled large-scale conflicts, allowing America and its allies to prosper.

That system has played itself out.

It was easy enough for the mighty and victorious United States to stamp its model on a war-exhausted world. Turns out, maintaining that system indefinitely in a restive world is challenging.

Gramsci speaks outside of time, straight to this week: the old is dying. And as a new alignment struggles to be born, just as he said, witness the morbid symptoms.

American legend still holds that the US is fundamentally different from and superior to other nations. We call it American Exceptionalism.

Exceptionalism thrived during the fleeting unipolar moment. With the Soviet collapse and the Cold War’s end, the U.S. bestrode the globe—for better and worse. But the hubris and mutations born of that era now blind us to our decline. Read more »

On Achieving Tennisosity

by Scott Samuelson

Though I’m at best a mediocre tennis player, I’ve achieved something in the sport that the pros achieve only at their finest, which I’ve taken to calling “tennisosity,” a hybrid of “tennis” and “virtuosity.” I coined the term several years ago, in the sweaty aftermath of a match in which my opponent and I had entered into that state. Among my friends and family, the ugly term tennisosity has stuck—I suspect because it describes something vitally yet elusively important, something with an ethical and an aesthetic dimension that can apply to any meaningful human activity.

This is NOT a picture of my backhand.

Tennisosity (in the realm of tennis) is when you and your opponent are so well-matched that the competition not only raises both of your play to a higher level but perfectly realizes the game of tennis. The way I put it to my exhausted opponent was, “We just played the 2008 Wimbledon Final”—the battle between Roger Federer and Rafael Nadal, sometimes called the greatest match ever, where Nadal won his first Wimbledon against the defending champ (Federer had won the previous five straight, including the last two against Nadal). Even though neither my opponent nor I could return the serve of your average high school varsity tennis player, our rallies were just as dramatic as Federer’s and Nadal’s, each of us got to just as many shots that the other didn’t think could be gotten to, our aces were just as glorious, and our double faults were just as tragic—at least within the context of our game.

I admit that my backhand isn’t at the level of Roger Federer’s! I’m giggling at even drawing the comparison. By the normal metrics and standards of excellence in tennis, everything about my game is junk. Also, there was no public recognition riding on my match—not even the chintzy trophy of a local tournament, much less the engraved silver of the most storied competition in tennis. In fact, nobody was watching.

And yet, had others been watching us, I believe that their aesthetic experience of our tennis match would have been similar in kind to the great Wimbledon Final—obviously not concerning our individual skillsets but concerning what the interactive combination of our skillsets involved. My opponent and I had to dig deep again and again. The same kind of grit and imagination Federer and Nadal had to draw on, we had to draw on. The glory of tennis was on display for all to see—even though nobody happened to be there to see it. If I remember right, one of us cried. Read more »