The Old Guard: Confronting America’s gerontocratic crisis

Samuel Moyn in Harper’s Magazine:

In Greek myth, Eos falls in love with Tithonus. She is the goddess of the dawn. He is a Trojan prince, yet still a mere mortal. Eos asks Zeus to give her mate the gift of eternal life—but, foolishly, she forgets to ask for eternal youth too.

Tithonus never dies; he just grows older and older. “Ruthless age,” goes the Homeric hymn recounting his story, is “dreaded even by the gods.” Tithonus becomes more decrepit and wizened with each passing year. Eventually, when he can no longer move, Eos has to shut him away, in a place where “he babbles endlessly, and no more has strength at all.” Eternal life amid the decline of one’s faculties is not a blessing but a curse. “Me only cruel immortality / Consumes: I wither slowly in thine arms,” Tithonus complains in Alfred Tennyson’s rendition of the myth (published in these pages in 1860), in a rare moment of lucidity that emerges from his everlasting gibberish.

The story of Tithonus no longer feels so outlandish, because our society postpones death to an unprecedented degree. Unlike immortals, we still pass. But the great majority of us, and not only the bad, now die old. In whatever nursing home he was parked in, Tithonus must have looked much like we increasingly do, as doctors continuously defer our mortality. We are approaching a time when a legion of Tithonuses will live in our midst. We have already felt the social and political consequences.

More here.

Enjoying the content on 3QD? Help keep us going by donating now.

Sunday, April 19, 2026

March into the Ruins

Bruce Robbins in The Baffler:

In a documentary I saw some years ago, I remember Jürgen Habermas, asked to describe his friend the writer and filmmaker Alexander Kluge, responding that for all his life Kluge remained the person who was bombed as a child. Kluge was thirteen in April 1945, living with his parents in the beautiful medieval city of Halberstadt in Germany as World War II drew to a close. American troops were a day or two away from entering the city when U.S. B-17 bombers flew over and all but demolished it, killing some two or three thousand civilians. Kluge wrote about that day in Air Raid. The book, written in the 1970s but untranslated until 2014, begins with a ticket-taker in the local Halberstadt cinema, who is trying valiantly to sweep the rubble out of the aisles in time for the afternoon show when half the building has just been blown apart and the basement is crowded with corpses.

The moral seems to be that, like the ticket-taker, we try to keep to old habits when our world has exploded. The book is permeated by a sense of the absurd which, for all its indignation, somehow also leaves something to be savored. Kluge never seems to fit neatly within the philosophy he took from his teacher Theodor Adorno or, for that matter, the strenuously produced normative propositions of his friend Habermas, that other late-blooming flower of the Frankfurt School. Kluge wrote an obituary for Habermas, who died on March 14, just days before Kluge's own death on March 25.

After studying modern history, music, and law in Frankfurt—he briefly served as the Frankfurt Institute’s legal counsel—Kluge began a career as an experimental filmmaker. His early films got him described by some as the German Godard, though he was less interested than Godard in placing himself within, and disrupting, cinematic tradition and more focused on exploring the particular squalor of his country’s recent past.

More here.

The Social Edge of Intelligence

Bright Simons in The Ideas Letter:

We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2,000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.

The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.

But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.

So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R. Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
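
The convergence effect described above can be sketched numerically. This is a toy illustration only, not the study's data or method: two groups of "stories" are represented as random feature vectors, with the AI-assisted group pulled toward one shared "suggestion" vector, and homogeneity is measured as mean pairwise cosine similarity. All names and parameters here are invented for the sketch.

```python
# Toy model of "individual gain, collective loss": AI-assisted texts
# converge on a shared suggestion, raising their mutual similarity.
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_cosine(vectors):
    """Average cosine similarity over all distinct pairs of rows."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T
    n = len(vectors)
    # subtract the diagonal (self-similarity), average the rest
    return (sims.sum() - n) / (n * (n - 1))

dim, n = 50, 100
solo = rng.normal(size=(n, dim))           # independent writers
suggestion = rng.normal(size=dim)          # one shared AI suggestion
assisted = 0.5 * rng.normal(size=(n, dim)) + 0.5 * suggestion

print(mean_pairwise_cosine(solo))      # near 0: diverse
print(mean_pairwise_cosine(assisted))  # clearly positive: converged
```

The point of the sketch is only the comparison: the assisted group scores markedly higher on mutual similarity even though nothing about any individual vector is "worse."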

Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course.

More here.

Purposeful Predictions

Ben Recht over at his substack, arg min:

Every engineer and scientist knows there is a fundamental difference between a “simulation” and a “prediction,” but what is the root of that distinction? At the highest level, we contrast simulation against black-box modeling. Simulations are typically thought of as “transparent boxes” where we can describe the intent of each part of the model that produces a forecast.

A roboticist might think of a simulation as a computer system designed to integrate the differential equations that define basic laws of physics. For example, you predict the path the airplane takes based on physical models of lift and drag and how the plane moves under different control settings. Simple simulations based on reduced equations might suffice for some tasks. For others, we might have to rely on computational fluid dynamics to truly capture the behavior we’re after.

The transparent box becomes murky when systems are too complex to predict precisely. Many designers accept adding randomness to their simulations, provided they can characterize the statistical models as plausible. The dynamics of coin flipping are too hard to capture precisely, but we’re usually fine with a random number generator that produces heads and tails in roughly equal numbers. Noise in measurement devices often reliably has statistics that match those of Gaussian or Poisson random numbers, and such stochastic processes are reasonable stand-ins for the sorts of signals we’ll encounter in the wild. Maybe you can simulate elections based on random numbers derived from current polling results.
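
The stand-ins described above are easy to make concrete. A minimal sketch, with all parameters illustrative: instead of modeling coin dynamics or sensor physics, we substitute random draws whose statistics are plausible.

```python
# Statistical stand-ins for physics we can't (or won't) simulate.
import numpy as np

rng = np.random.default_rng(42)

# Coin flips: no rigid-body dynamics, just a fair Bernoulli draw.
flips = rng.integers(0, 2, size=100_000)
print(flips.mean())  # close to 0.5 for a fair coin

# Measurement noise: a clean signal plus Gaussian noise stands in
# for a physical noise process we don't model in detail.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(scale=0.1, size=t.shape)
print(np.std(noisy - clean))  # roughly the assumed noise level, 0.1
```

The design choice is the one the passage names: we accept the randomness because we can characterize its statistics, not because it reflects the underlying mechanism.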

Where do we draw the line between sampling and simulation?

More here.

Why AI Needs A Sense Of Smell

Philip Maughan in Noema:

Over the last few years, breakthroughs in AI have been almost too numerous to track. Chatbots can now pass the same exams required of doctors and lawyers. A cancer drug designed by AI has entered clinical trials. AI agents are serving as autonomous personal assistants. There have even been reports that AI can smell. “Computers Are Learning to Smell,” declared The Atlantic. “AI is digitizing our sense of smell,” according to the World Economic Forum. “AI tastebuds are better at identifying what’s in food than you,” claimed TechRadar, while a spellbound BBC Future reported that “An AI started ‘tasting’ colours and shapes.”

The truth, however, is that these headlines grossly embellish AI’s abilities. If you read the BBC Future article closely, for example, you’ll learn that a large language model (LLM) repeated the associations humans make between tastes, colors and shapes — sweet things are pink and round; sour things are yellow — observations that were captured in its training data. The reality is that very little progress has been made toward giving AI a sense of smell because pretty much nobody working in AI cares.

More here.

How to train your brain to see possibility instead of doom

Hannah Critchlow in The Guardian:

It can feel as though the world is tilting towards chaos: political shocks, economic instability, technological upheaval and a constant stream of bad news. Faced with so much uncertainty, many of us default to a sense of impending doom. But is that reaction hardwired – or can we train ourselves to keep a more open mind? A useful starting point is humility. Every generation, it seems, believes it inhabits uniquely turbulent times, as literary epics down the ages testify. Uncertainty has always been part of the human condition, and none of us can really know what tomorrow holds.

Yet recognising this does not make it easy to bear. In fact, our brains are exquisitely sensitive to uncertainty. From a neuroscientific perspective, unpredictability is costly. The brain is an energy-hungry organ that relies on following patterns and habits in order to conserve effort. When faced with ambiguity, it must work harder – analysing, predicting, recalibrating. This extra effort is not just tiring; it can feel actively unpleasant.

More here.

Sunday Poem

Why I Write Poetry

Because I can’t trust God
to look after the world and my friends.
Worship sure, wandering forests of legend
braiding flowers from the Tree of Life in my hair
while God’s beard storms overhead.
But not trust. People die. Everyone dies.
It may be God’s will but it’s my won’t.
Sea turtles live a thousand years.
My words can’t become flesh.
My words can’t heal an open wound.
But I am a poet and I know we need more time
to make our own huge splendid mistakes,
mistakes we deserve, not just the small clinical mistakes
built into our bodies.
We could have many-colored rings spinning around our minds
like the rings of Saturn.
We could map constellations around a lover’s face
and every child could be the Messiah
because the world always needs saving.
God, it is a very beautiful world,
but no thank you, it is not enough.
No thank you for the sunrise when our eyes go blind.
A blank page is a place to list the creation
we weren’t given. A shopping list of eternity
where we’re never too sick to swallow fresh blueberries
and where the dance never ends.
A blank page is a paper bird to fold up and fly.
I can’t change anything but I am a poet
and if I can’t trust God I must speak
for the world and my friends.
Want more. Want so much more.
Test each day and night for ripeness
like a melon at the market.
You’re crucified on the hands of a clock,
pull out those nails.
I’m throwing you a rope of words.
Hold on.

by Julia Vinograd

Friday, April 17, 2026

On the Joys of Collecting Junk

Kate Bowler at Literary Hub:

It’s beautiful in North Carolina in March, which means that Zach has set out to use his metal detector in the woods near our house. He is certain that we are about to embark on a new journey as a family: owning our own junkyard.

I tried to explain that a family who owns a junkyard near the woods is actually the premise of a recent bestselling memoir in which the heroine needs to be rescued from her family and taught to read. But to no avail. Yesterday he found a 1936 Chevrolet hubcap and I am done for.

I canvass friends for opinions on whether garbage will add to my quality of life or whether I will simply, you know, incur the wrath of my new neighbors. My friends, being my friends, invariably champion the necessity of objects piling up in my yard. My friend Alex tells me about his friend, a French artist in Russia, whose preferred canvas for paintings is old doors and bits of fencing.

More here.

AI Alignment Is Impossible, not just in practice but in theory

Matt Lutz at Persuasion:

Unfortunately, I’m pretty sure that AI alignment is impossible.

How might an AI form a moral sense? There are basically two scenarios. In one scenario, moral facts are the kind of fact that one might simply figure out by thinking about them hard. In such a case, perhaps AIs would be good moral reasoners, and indeed even better moral reasoners than humans, in virtue of their advanced intellectual capacities.

In the second scenario, moral facts aren’t the sorts of things we can figure out by pure intellectual effort, but we can nonetheless train AIs to develop a moral sense in much the same way we train children in good behavior: by rewarding them when they’re good and punishing them when they’re bad.

The first scenario is doomed, for reasons first pointed out by the philosopher David Hume in his oft-quoted (and oft-misunderstood) passage where he indicates that there is a gap (not Hume’s term) between “is” and “ought.” Hume thought that reasoning is not some sort of truth-generator, a special faculty that takes intellectual effort as an input and spits out knowledge as an output. Rather, it is a process, where we move from one thought to the next, with our later thoughts hopefully (though not necessarily) supported by our earlier thoughts.

But the process is fallible. After all, if we are to reason our way to a moral conclusion, we must be reasoning from non-moral premises. Taking that into account, what operation of the mind could possibly take us from premises that describe the world to conclusions that tell us how to act?

More here.

These Chimps Began the Bloodiest ‘War’ on Record and No One Knows Why

Carl Zimmer at the New York Times:

On Thursday, a group of researchers reported that the Ugandan chimps are locked in a primate version of civil war. Two factions split about a decade ago and have been engaged in a highly lethal conflict ever since.

Scientists have never seen such widespread, long-running bloodshed among chimpanzees. Further studies may shed light on the roots of warfare in our own species, although the Trump administration’s proposed budget, released on Friday, has cast doubt on whether the research will continue.

When scientists first started tracking the Ngogo chimpanzees, the first thing that struck them was the sheer number of apes: over 100 across a territory of about 10 square miles.

More here.

The Dog’s Gaze

Kathryn Hughes at The Guardian:

Thirty-five thousand years ago, in the Ardèche region of France, Paleolithic artists drew a spectacular bestiary on the walls of the Chauvet cave. Their focus was apex predators, so there were lots of lions, as well as mammoths and woolly rhinoceroses. Dogs were nowhere to be seen, and yet in the soft sediment on the limestone floor of the cave, there are traces of canid pawprints next to human footprints. Two fellow creatures, most likely a boy and a dog, stood together, about 10,000 years after the art was made, looking up at the walls in wonder. Here was a moment of shared contemplation, followed perhaps by a glance to see the other’s reaction.

In this luminous book, the American cultural historian Thomas Laqueur explores what he calls “the dog’s gaze”. The dog was the first animal to live companionably with humans, and Laqueur argues that this marks the boundary between nature and culture. It is this threshold status that has, in turn, qualified the dog to play a rich, symbolic part in western art. Just having dogs in a picture – snuffling for picnic crumbs in Seurat’s La Grande Jatte or trooping home in Bruegel the Elder’s Hunters in the Snow – becomes a way for an artist to pack an image with extra resonance and second-order meaning.

more here.

Among the Antigones

Rhoda Feng at The Paris Review:

For a few weeks this spring, you couldn’t swing a thyrsus in New York without hitting a play about Antigone. Perhaps it started with Robert Icke’s Oedipus, the Broadway production from February, which featured a modern-day Antigone as a sulky teen who little suspects that her father is also her brother. Soon after, four different theaters across the five boroughs staged their own renditions of Sophocles’s famous play, reimagining his 2,500-year-old mythic figure as, variously, a pregnant teenager, an analysis patient, an incestuous home renovator, and a freedom fighter in a fascist regime in the future. The latter, in a bid to underscore the theme of rebellion across the ages, went so far as to include audio from the ICE raids in Minneapolis.

It’s not hard to hazard the reasons for the renewed popularity of the Theban protestor who challenges the authoritarian rule of her uncle, King Creon, and is subsequently put to death. (One production titled its director’s note “Caution to the Resistance …”) But it is curious that, among the many iterations of Antigone now at hand, each has striven so forcefully to recast and reimagine her for the modern era.

more here.

How AI Can Beat Cancer

Cyriac Roeding in Time Magazine:

The core problem in oncology has always been one of discrimination. Cancer cells and normal cells are, at the molecular level, nearly identical. What distinguishes a cancer cell is dysregulation, a set of genetic switches flipped in the wrong direction, causing uncontrolled growth. For decades, finding and exploiting those switches required hunting through patient samples by hand, looking for patterns subtle enough to be almost invisible.

AI has changed what’s possible. Systems trained on genomic databases spanning tens of thousands of sequenced cancer samples can now identify the master regulatory patterns that are active specifically in cancer cells and not in surrounding healthy tissue. Unlike the biomarkers of older precision oncology, these are fine-grained genomic signatures that encode the difference between malignant and normal at the level of how genes are switched on and off.

More here.

Revealed: how male and female brain cells differ in gene activity

Miryam Naddaf in Nature:

By analysing more than a million brain cells, researchers have uncovered widespread differences in patterns of gene activity between male and female brains.

The work, which defined sex on the basis of a person’s combination of sex chromosomes, could help to explain why the risk of developing some brain conditions — such as schizophrenia and Alzheimer’s disease — differs between males and females.

Although the differences were subtle, the team identified more than 100 genes that showed consistent variation in their expression between males and females across several brain regions. The work was published on 16 April in Science.

More here.

Thursday, April 16, 2026