Fine Tuning Against the Multiverse?

by Tim Sommers

In “Calculating God,” Robert J. Sawyer’s first-contact novel, the aliens who arrive on Earth believe in the existence of God – without being particularly religious. Why?

There are certain physical forces, they explain, that make life in our universe possible only if they are tuned to very specific values. Which they are. We are here, after all. But there’s no physical reason that the values need to be set the way they are. The aliens have concluded that someone, or something, set the values of these parameters at the beginning of the universe to ensure that life would come into existence. That something they call God.

Here’s a much earlier, very different version of this argument. If you were hiking through the woods and you picked up a shiny object that turned out to be a small stone, it would probably not occur to you that it might have been made by someone. If it turned out to be a watch, however, you would immediately conclude that it had been intentionally created. So, is the universe more like a stone or a watch?

This argument from design was an especially powerful argument for the existence of God when very little was known about biology. The complexity of living things puts watches to shame. But then Darwin came along and used evolution to explain how such diversity, complexity, and apparent design could come about without a designer.

Just when the argument that the complexity of our world could only be explained by God seemed lost, a new, purely physical reason to think that the universe was designed appeared. The one the aliens embrace.

“A striking phenomenon uncovered by contemporary physics,” Kenneth Boyce and Philip Swenson write in their forthcoming paper “The Fine-Tuning Argument Against the Multiverse” (Philosophy Quarterly), “is that many of the fundamental constants of nature appear to have arbitrary values that happen to fall within extremely narrow life-permitting windows.” Read more »

Why even a “superhuman AI” won’t destroy humanity

by Ashutosh Jogalekar

Photo credit

AGI is in the air. Some think it’s right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it applies to a superhuman AI which embodies all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more currency in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book “Out of Control” was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will “take over humanity” are overblown. He casts these reasons in the form of misconceptions about AI which he then proceeds to question and dismantle. The whole thing should be dusted off and is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth they are talking about some kind of overarching single intelligence that’s good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in “AI did X, so imagine what it would be like when it could do Y”; lost is the realization that X and Y could refer to very different dimensions of intelligence, or significantly different ones in any case. As Kelly succinctly puts it, “Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions.” Even humans are not good at optimizing along every single one of these dimensions, so it’s unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that’s good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Read more »

Poem by Jim Culleny

1/1/26 Typhoon USA Aftermath Premonition

—This is a revised version of a poem I wrote when a newly insane river running through our town one day came through, almost over-topping bridges, spilling into streets with wild abandon during Hurricane Irene.

One year back the river tore through
on its fall to the sea, courts and laws
slid beneath a steal (sic) bridge on Maga swells,
small shops (especially) pirouetted off their base,
rushed downstream to lodge against
the remains of a constitution compromised
by Magarussian jackhammers of mad winds.

Cellars of The People filled with mendacious mud
and whatever the river had dredged, whatever it had
sucked from cesspools of Oligarchs, whatever it had
ripped from the gardens of the free, of their
summer afternoons, of their homes and barns,
earth and offal left along its banks, in basements,
in streets, in the vacant classrooms of children before
books were banned, tongues of free speech
caught in eddies of choreographed confusion.

A cornfield, tall enough the day before when its
cobbed yield might have grinned yellow from humble
plates until a typhoon of raw, privileged intent laid it low,
its proud ranks of green stalks now laid flat from
sea to shining sea by Potomac’s winnowing rake, all now
lay supine as a woman or man, after a sweet or savage life,
lie still before the sweep of a polluted sea.
Jim Culleny
modified, 3/14/25

Enjoying the content on 3QD? Help keep us going by donating now.

Sunday, March 16, 2025

The Rise and Fall of the Mind-Body Problem

by Katalin Balog

The mind-body problem in its current form – an inquiry into how the mind fits into the physical universe – was formulated by René Descartes in the 17th century. In his Meditations, a thin volume of philosophy that had a monumental effect on all later Western philosophy, he famously argued that it is possible to conceive of a mind without extension – a disembodied soul – and a body without thought – a mindless zombie. And since whatever can be clearly and distinctly conceived, can be brought about by God, it is possible for there to be a mind without body and a body without mind. He concludes, based on observations about our concept of mind and body, that they are really distinct. His position is called dualism, which is the view that the world has fundamental ingredients that are not physical. Of course, dualism was not original to Descartes; from Plato to the Doctors of the Church and ordinary folks, most people up until the Enlightenment have advocated it. But his way of arriving at it – examining what he can “clearly and distinctly” conceive – has set the template for all subsequent discussions. Nota bene, he had more empirically based arguments as well; for example, he thought nothing physical could produce something so open ended and creative as human speech. He would have been very surprised by ChatGPT.

According to Descartes, the mind, or soul, is an exalted thing: it is non-spatial, immaterial, immortal, and entirely free in its volition. He also thought – reasonably enough – that it interacts with the body, an extended, spatially located thing. In Descartes’ view, there are sui generis mental causes; purely physical causes cannot explain actions. Descartes held that only the quantity of motion is strictly physically determined, not its directionality. In Leibniz’s telling of the story in The Monadology, Descartes believed that the mind nudges moving particles of matter in the pineal gland, causing them to swerve without losing speed, like a car going around a corner. He had a real, substantive disagreement with his contemporary Hobbes, a materialist who thought that there were only physical causes.

Developments in science have had an enormous impact on this debate. The gist is that advances in the physical and biological sciences ultimately ruled out Descartes’s idea of the mind as a sui generis force – though not necessarily other forms of dualism which we will talk about shortly. Not only is it hard to comprehend Descartes’ idea that a non-material mind can move a material body, but such sui generis non-physical causes are ruled out by what we know about natural processes. Read more »

A Look in the Mirror

MORE LOOPY LOONIES BY ANDREA SCRIMA

For the past ten years, Andrea Scrima has been working on a group of drawings entitled LOOPY LOONIES. The result is a visual vocabulary of splats, speech bubbles, animated letters, and other anthropomorphized figures that take contemporary comic and cartoon images and the violence embedded in them as their point of departure. Against the backdrop of world political events of the past several years—war, pandemic, the ever-widening divisions in society—the drawings spell out words such as NO (an expression of dissent), EWWW (an expression of disgust), OWWW (an expression of pain), or EEEK (an expression of fear). The morally critical aspects of Scrima’s literary work take a new turn in her art and vice versa: a loss of words is countered first with visual and then with linguistic means. Out of this encounter, a series of texts ensues that explore topics such as the abuse of language, the difference between compassion and empathy, and the nature of moral contempt and disgust.

Part I of this project can be seen and read HERE

Part II of this project can be seen and read HERE

Images from the exhibition LOOPY LOONIES at Kunsthaus Graz, Austria, can be seen HERE

 
Andrea Scrima, LOOPY LOONIES. Series of drawings 35 x 35 each, graphite on paper; edition of postcards with text excerpts. Exhibition view: Kunsthaus Graz, Austria, June 2024.

7. EEEK

Michel de Montaigne’s famous statement—“The thing I fear most is fear”—remains, nearly five hundred years later, thoroughly modern. We think of fear as an illusion, a mental trap of some kind, and believe that conquering it is essential to our personal well-being. Yet in evolutionary terms, fear is an instinctive response grounded in empirical observation and experience. Like pain, its function is self-preservation: it alerts us to the threat of very real dangers, whether immediate or imminent.

Fear can also be experienced as an indistinct existential malaise, deriving from the knowledge that misfortune inevitably happens, that we will one day die, and that prior to our death we may enter a state so weak and vulnerable that we can no longer ward off pain and misery. We think of this more generalized fear as anxiety: we can’t shake the sense that bad things—the vagueness of which renders them all the more frightening—are about to befall us. The world is an inherently insecure and precarious place; according to Thomas Hobbes, “there is no such thing as perpetual Tranquillity of mind, while we live here; because life it selfe is but Motion, and can never be without Desire, nor without Fear” (Leviathan, VI). Day by day, we are confronted with circumstances that justify a response involving some degree of entirely realistic and reasonable dread and apprehension, yet anxiety is classified as a psychological disorder requiring professional therapeutic treatment. Read more »

Friday, March 14, 2025

Salting the Earth and the Vandalism of America

by Mark Harvey

Elon Musk

To tear something down is infinitely easier than building something of benefit or beauty. Constructing an elegant house that will last through the ages can take years. Going from a dream, to design, to approvals, to construction means gobs of money, skilled designers, and dedicated builders. When you see that handsome house perched just so on a hill, with its cedar siding, cased windows, and tidy balconies, know that dozens of men and women labored and strove to get it just right—hundreds of mornings planning, sawing, hammering, painting, plumbing and polishing.

But give me a forty-ton excavator and a couple of dump trucks, and I will demolish that house and clear the site in one day. To a person of evil intent and ill mind, tearing down so much effort in so little time will be a thrill.

That’s what makes vandalism so attractive to people with festering resentments. Destroying something precious to someone else in the dark of the night is the sort of sugar rush that thrills degenerates.

When the richest man in the world takes hammer and tongs to our government and delights in tearing down agencies central to our economy, farms, public health, environment, and foreign policy—when a man-child of his accidental consequence recklessly fires thousands of public employees without knowing the first thing about government, it’s time for anyone who does love this country to stand up and call out a flat NO!

Watching Elon Musk with his strange gothic uniforms of black jackets, t-shirts and ball caps, and reading his inane tweets sprinkled with juvenile humor, brings to mind a deeply insecure adolescent. And yet, that puffed-up adolescent is tearing apart the lives of thousands of Americans directly, and millions of people worldwide as a consequence. Read more »

What Becomes Of The Femboy?

by Mike Bendzela

In a kindergarten classroom in the mid-1960s, a kid named Mikey steered clear of the boys stacking large toy blocks on top of one another and knocking them down again–so obnoxious–and instead went and sat at the table of girls making beads out of salt dough and stringing them together on a thread. These girls were not averse to tasting the salt dough and smacking their lips in disgust. The teacher had wisely settled on salt dough because she knew it wouldn’t poison the students should they eat it. At least the girls were smart and funny and didn’t continually knock each other to the floor.

Mikey preferred these sober, artsy activities–making necklaces of salt dough beads, pressing hand prints into soft clay disks, tracing the profiles of silhouetted heads projected via lamp light onto sheets of construction paper–over the rough-and-tumble of block stacking, fat-ball tossing, and floor hockey, because–well, he just did. Thus developed the central themes of his boyhood–hates sports; likes art and language; hangs out with the girls.

Throughout grade school, gym class gave him a terrible knot in his stomach and he longed to be elsewhere, a disposition cemented into place by an incident during a game of “battle ball,” in which boys stood at opposite walls and hurled large pneumatic balls at each other for God knows what reason, and a ball smacked him square in the face and knocked his glasses off his head.

The glasses allowed him to read the teacher’s flowing, cursive handwriting on the chalkboard, reading which he was good at, and he forever yearned to be allowed to pick up a long piece of chalk and tap-scratch letters onto the board himself. This desire was at long last granted, and soon he was permitted whole boxes of colored chalks to use, and the teacher allowed him and some girl friends to cover the entire chalkboard with decorative chalk drawings. Read more »

Thursday, March 13, 2025

A Tale of Two Doges: An Uncertain History

by Alizah Holstein

Jean LeClerc, Doge Enrico Dandolo recruiting for the crusade
Jean LeClerc, Doge Enrico Dandolo recruiting for the crusade (1621)

When a statement was issued last November stipulating that a new U.S. government department known by the acronym DOGE was to be formed, the medievalist in me snapped to attention. To me, “doge” was a word with distinctly medieval meaning. But hardly anywhere was this meaning being explored in the context of DOGE.

For anyone who has been living in a glacial crevasse for the past few months, DOGE stands for Department of Government Efficiency. Two phenomena are purported to have inspired the DOGE acronym. One: the cryptocurrency Dogecoin. And two: the “doge” internet meme featuring photos of a Japanese Shiba Inu overwritten with pidgin English text that played on a misspelling of the word “dog.” Both were much beloved by Elon Musk, who has to all appearances been heading DOGE.

But there’s a third association worth exploring as we consider the implications of this (unofficial) government department. Stated simply, a doge was the chief magistrate of medieval Italian maritime republics. Venice had doges for over a thousand years, from 700 until 1797 when the Napoleonic Wars brought the republic to its end. Genoa, for a shorter time, had them as well, and Pisa, too, counts a single doge in its historical register. What in Italian is doge in Venetian is doxe, and both derive from dux, the Latin word for leader and a cognate of duke.

The possible association of DOGE with a medieval magistrate has not been widely explored, but I do think it matters. Why? Because medievalism feels peculiarly salient right now in culture and politics alike. Last month at London Fashion Week, models strutted in chain mail and armor, while one carried a decorative sword.[1] Castlecore, which offers “a nostalgic ideal of luxury and wealth,” is trending on social, and romantasy sells.[2] Medievalism has never been very far from the American imagination, but in this moment it feels top of mind. Read more »

Should AI Speak for the Dying?

by Muhammad Aurangzeb Ahmad

Everyone grieves in their own way. For me, it meant sifting through the tangible remnants of my father’s life—everything he had written or signed. I endeavored to collect every fragment of his writing, no matter how profound or mundane – be it verses from the Quran or a simple grocery list. I wanted each text to be a reminder that I could revisit in the future. Among this cache was the last document he ever signed: a do-not-resuscitate directive. I have often wondered how his wishes might have evolved over the course of his life—especially when he had a heart attack when I was only six years old. Had the decision rested upon us, his children, what path would we have chosen? I do not have definitive answers, but pondering this dilemma has given me questions that I now have to revisit years later in the form of improving ethical decision-making in end-of-life scenarios. To illustrate, consider Alice, a fifty-year-old woman who has had an accident and is incapacitated. The physicians need to decide whether or not to resuscitate her. Ideally there is an advance directive, a legal document that outlines her preferences for medical care in situations where she is unable to communicate her decisions due to incapacity. Alternatively, there may be a proxy directive, which usually designates another person, called a surrogate, to make medical decisions on behalf of the patient.

Given the severity of these questions, would it not be helpful if there were a way to inform or augment decisions with dispassionate agents who could weigh competing pieces of information without emotions getting in the way? Artificial Intelligence may help, or at least provide feedback that could be used as a moral crutch. It also has practical implications, as only 20-30% of the general American population has some sort of advance directive. The idea behind AI surrogates is that given sufficiently detailed data about a person, an AI can act as a surrogate in case the person is incapacitated, making decisions that reflect what the person would have made if they were not incapacitated. However, even setting aside the question of what data may be needed, data is not always a perfect reflection of reality. Ideally this data is meant to capture a person’s affordances and preferences, with the assumption that they are implicit in the data. This may not always be true, as people evolve, change their preferences, and update their worldviews. Consider a scenario where an individual provided an advance directive in 2015, yet later became a Jehovah’s Witness—a faith that disavows medical procedures involving blood transfusions. Despite this profound shift in beliefs, the existing directive would still reflect past preferences rather than current convictions. This dilemma extends to AI-trained models, often referred to as the problem of stale data. If conversational data from a patient is used to train an AI model, yet the patient’s beliefs evolve over time, data drift ensures that the AI’s knowledge becomes outdated, failing to reflect the individual’s current values and convictions.

Many of the challenges inherent in AI, such as bias, transparency, and explainability, are equally relevant in the development of AI surrogates. Read more »

Wednesday, March 12, 2025

The Limits of American Exceptionalism

by Bill Murray

I.

A hundred years ago two battered and beleaguered old men, one an Italian prisoner, the other taken to wandering Irish bogs, arrived at the same fateful truth: the world around them was collapsing.

Antonio Gramsci, Marxist theorist, imprisoned member of the Italian parliament, wrote from his cell that “the old world is dying, and the new cannot be born; in this interregnum, a great variety of morbid symptoms appear.”

William Butler Yeats, a sort of mystic horrified by violence and uncertainty across his Irish homeland, saw the same future. From a cottage in County Galway he wrote “things fall apart; the center cannot hold.” What was worse, “the best lack all conviction, while the worst are full of passionate intensity.”

One a political theorist dissecting history’s brutal transitions, the other a poet divining the chaos of human nature, both foresaw the same truth—that societies don’t glide from one era to the next. Sufficiently stressed, they shatter, and from the wreckage new orders struggle to emerge. A hundred years on, Gramsci’s and Yeats’s alarmed realizations read less like prophecy than real-time commentary.

Today’s Americans grew up confident that they stood outside history’s cycles of rise and fall. Battered but buoyed by victory in World War II (and recognizing an opportunity), the United States built a powerful international system meant to foster global stability and economic growth while, naturally, serving its own interests. And, in a world laid waste by war, so it did.

For the longest time this system forestalled large-scale conflicts, allowing America and its allies to prosper.

That system has played itself out.

It was easy enough for the mighty and victorious United States to stamp its model on a war-exhausted world. Turns out, maintaining that system indefinitely in a restive world is challenging.

Gramsci speaks outside of time, straight to this week: the old is dying. As a new alignment struggles to be born, just as he said, witness the morbid symptoms.

American legend still holds that the US is fundamentally different from and superior to other nations. We call it American Exceptionalism.

Exceptionalism thrived during the fleeting unipolar moment. With the Soviet collapse and the Cold War’s end, the U.S. bestrode the globe—for better and worse. But the hubris and mutations born of that era now blind us to our decline. Read more »

On Achieving Tennisosity

by Scott Samuelson

Though I’m at best a mediocre tennis player, I’ve achieved something in the sport that the pros achieve only at their finest, which I’ve taken to calling “tennisosity,” a hybrid of “tennis” and “virtuosity.” I coined the term several years ago, in the sweaty aftermath of a match in which my opponent and I had entered into its state. Among my friends and family, the ugly term tennisosity has stuck—I suspect because it describes something vitally yet elusively important, something with an ethical and an aesthetic dimension that can apply to any meaningful human activity.

This is NOT a picture of my backhand.

Tennisosity (in the realm of tennis) is when you and your opponent are so well-matched that the competition not only raises both of your play to a higher level but perfectly realizes the game of tennis. The way I put it to my exhausted opponent was, “We just played the 2008 Wimbledon Final”—the battle between Roger Federer and Rafael Nadal, sometimes called the greatest match ever, where Nadal won his first Wimbledon against the defending champ (Federer had won the previous five straight, including the last two against Nadal). Even though neither my opponent nor I could return the serve of your average high school varsity tennis player, our rallies were just as dramatic as Federer’s and Nadal’s, each of us got to just as many shots that the other didn’t think could be gotten to, our aces were just as glorious, and our double faults were just as tragic—at least within the context of our game.

I admit that my backhand isn’t at the level of Roger Federer’s! I’m giggling at even drawing the comparison. By the normal metrics and standards of excellence in tennis, everything about my game is junk. Also, there was no public recognition riding on my match—not even the chintzy trophy of a local tournament, much less the engraved silver of the most storied competition in tennis. In fact, nobody was watching.

And yet, had others been watching us, I believe that their aesthetic experience of our tennis match would have been similar in kind to the great Wimbledon Final—obviously not concerning our individual skillsets but concerning what the interactive combination of our skillsets involved. My opponent and I had to dig deep again and again. The same kind of grit and imagination Federer and Nadal had to draw on, we had to draw on. The glory of tennis was on display for all to see—even though nobody happened to be there to see it. If I remember right, one of us cried. Read more »

Tuesday, March 11, 2025

Touching Words: on Poetry in Memoir

by TJ Price

At a certain point, all memory is fiction. What we retain of the past is selective—our brain typically glosses over the finer details—even the substance of it is subject to change. Our past, much like our present and future, is fluid, constantly running, and not even Memory can step in that same river twice. The memoir, however, attempts to fix what is in flux, to render still the dynamic motions of the past. Often, I find that reading memoir comes with a sense of forced progression—a somber plod of narrative, marrying recollections of a life to the dramatic scaffolding of the Hero’s Journey—but lately, I have discovered a wholly new avenue of the telling, and in a surprising place: poetry.

I first heard of The Braille Encyclopedia: Brief Essays on Altered Sight, by Naomi Cohn (in a gorgeous paperback publication by Rose Metal Press) by way of a friend, currently researching altered sight for a thesis. Braille is, of course, the writing system of the blind, consisting of little raised dots in a matrix, each arrangement translating to a letter, or sometimes an entire word. What I didn’t know about it, however, as I learned from the excerpt that my friend posted, was that Louis Braille had been accidentally blinded at the tender age of three, in his father’s workshop, with an awl. Cohn remarks on this, drawing a breathtaking association: “Is it an accident that my tool for making hand-punched braille is so much like an awl?”

It’s a curious book, not easily categorized. As the subtitle outlines, it is a collection of “brief essays,” arranged in the format of an imaginary encyclopedia, with each entry ranging from personal anecdote or recollection to etymology and jargon all the way to scientific fact and even Yiddish. These, more than anything, are poems—some of them are only a few lines, a paragraph—and each entry uses its title as a kind of homing beacon, returning to it again and again to create a beautiful resonance underneath not only each “essay,” but layers of the same beneath the book in its entirety. The voice guiding the reader is frank, but also wry, and uniquely confessional. In so narrating the personal details and arranging them in this abecedarian manner, it overflows and touches far more stories than just its author’s own. The experiences belong to the writer, but the poetry used to convey them expands past this, and even beyond an inquiry of the visual sense itself, opening new avenues of thought via ontological questions of perceptions and perceiving, and even being perceived. Indeed, even the nature of reading (in all its many forms) is interrogated—how the endless permutations and combinatorics of language can transmogrify in the crucible of the mind. Read more »

Carmina Baloney

by Steve Szilagyi

AI renders its impression of “O Fortuna”.

Those first eight thunderous notes—”O Fortuna, velut Luna”—delivered by a massive choir of a hundred voices, have become as instantly recognizable as Beethoven’s da-dah-dah-DUM or the opening of Strauss’ Also Sprach Zarathustra. Since John Boorman first deployed O Fortuna in his 1981 film Excalibur, this choral juggernaut has stampeded through many horror films, parodies, videos, and cartoons—always to soul-chilling effect.

There’s no denying O Fortuna’s raw power. The chanting choir, the pounding drums, the ascending shrillness of the voices: it creates the unsettling sensation that an army of insane monks is closing in on you from the dark corners of a satanic cathedral. It is the musical embodiment of the Burkean sublime: an encounter with vast, uncontrollable, and terrifying forces.

This two-minute masterpiece by German composer Carl Orff, which opens and closes his 1936 “scenic cantata” Carmina Burana, has made the entire 60-minute work into one of the more frequently performed pieces of classical music worldwide. Dozens of professional, academic, and community music groups across America will stage it this year—no small feat considering the orchestra size and number of singers required. For symphony orchestras and concert halls, Carmina Burana has become a reliable cash cow, more valuable than many other popular favorites because it attracts that rarest and most coveted entity in the classical music world: the under-65 audience. Read more »

Monday, March 10, 2025

A Sort Of A Job

by Richard Farr

This is ChatGPT’s idea of my idea of me doing philosophy. It couldn’t get the boulder right and refused to make my physique more realistic.

As everyone knows, the word philosopher comes from two Greek words — philo, a rich, buttery pastry and by extension a person with a weakness for any self-indulgence, and sofa, a couch. Hence: a person who’d love to find a comfortable chair.

If you are a plumber or a tax attorney, or maybe an epidemiologist specializing in tropical blood diseases, most random strangers will understand in a broad way what you do for a living and why it is that someone else is prepared to pay you for doing it. Even if you work in a university and teach poetry, or the extinct fauna of the Oligocene, no great mystery. Even people who think it’s a complete waste of time will understand roughly why other people don’t.

Philosophy, on the other hand.

I was in my late twenties, Ph.D. still fresh-baked and steaming. Not yet accustomed to being addressed as Professor, I sat on the bus next to a complete stranger one day and had a conversation with him that went something like this: 

“A philosopher, eh? Really! So what’s your philosophy then?”

Uh-oh! How to begin? How to navigate the truly remarkable fact that in our culture it’s typical even for highly educated people to signal thus that they have never encountered this once-central thread in our civilization’s story? That they have literally no idea what the subject / field / discipline called “philosophy” is?

I chose a poor way to begin. “I specialize mainly in modern political ideas. And ethics.”

“You teach politicians to be ethical?”

“No no! That would not be — well, I suppose it would be logically possible, even nomologically possible. But — anyway, no. That’s not it.”

“So tell me more about what you do.”

“I spend a lot of time on normative ethics.”

“Eh?” Read more »

The Loneliness of the Football Player

by David J. Lobina

It can be lonely being a football player, especially when the ball is rolling.

Football, not Gridiron.

I live on the wing, my natural habitat. As close to the touch line as possible, old-fashioned. No-one really understands me. I think my teammates live in a completely different world to my own. I can track the movements of the strikers and even anticipate what they’ll do with the ball when they get hold of it, but it is all foreign to me. I can track the central midfielders better and more closely, as these are the people who make sure the ball reaches me every so often, but their general motivations are equally inscrutable. The goalkeeper and the defenders are even more of a mystery; I’m not sure throwing your body to the ground like they do is always necessary, but I am sure that I cannot do it quite like that myself. The other winger is the closest thing to having a twin, but one that is the exact opposite in every way. Every player is their own person here, with movements and motivations unlike those of the others. We are a team in name only; more like a collection of 11 inlets.

It always all starts in the midfield with the opening kick. I am on the wing and do not expect to see the ball for a good few minutes. The strikers get things going by kicking the ball backwards to the midfielders, and then mechanically head towards the goal without a worry, so eager are they to reach their own natural habitat – away from it, they look lost. The midfielders start their routine of not wanting to have any kind of responsibility by getting rid of the ball as soon as they receive it, lest they make any mistake that might need the attention of the defenders or even the keeper. Defenders patiently wait for these mistakes; they would wish them into being from time to time if they could, in fact. Goalkeepers would wish defenders’ mistakes into being, in turn; the more the merrier. One striker tends to be more artistic than the other and ventures into the midfield on occasion in order to try out things for art’s sake and with only aesthetic objectives in mind, in a trial-and-error kind of fashion; success or failure matters not, it just needs to always look pretty. Read more »

Poem by Jim Culleny

Rapprochement:

Just wondering
if worlds seen from a distance
are really smaller than they are. . .
And could it be that when we sleep
the world we leave goes on without us?
..
Maybe you remember the old days too,
days when greenhorns multiplied their joys
and were thoughtless as a new moon.
Is it possible that from above
everything is seen through a rose window
bright as Venus, or is nothing left
to be seen between us?
..
Maybe you needed to spend some time
on your island being re-tuned.
Or maybe you were thinking I’d be
shoveling snow this morning
on the cusp of spring while you
relaxed on a breezy beach in sun.
..
But what I’d like to know is if
maybes still exist or if tomorrow
is so sure a thing.
..
So, are you still counting coup
on the enemies of the morning dew?

But since I haven’t heard, I thought
I’d tell a newer tale— one of
thoughts we’ve never played,
thoughts naked as new babes
born today.

BTW, have you noticed something odd?
Nothing ever changes but the color
of the feather in the hat-band of God?

Could I
ask? Would you
reply?

What the
hell?

Could you ever say you caught a final
glimpse of the ghosts you fought?

You didn’t say, but I suspect
you’re still heaped in words,
a cornucopia of clever tangles
in our alphabet.

It’s possible this is one more mistake,
as if even God is not perfectly awake.

For what it’s worth, breakfast is the best meal of the day—
the sun’s a fresh egg, clouds are white albumen.
Ahead? —a pot with lots of room to stew in.

Guess I could just say something
to circumvent our broken
bridges of contention, and wrenched beams,
and frayed cables of suspension.

If it’s not too much to ask,
how’s your weather? Mine is fine,
and yours I hope is
even better.

by Jim Culleny
3/22/13 Rev 3/9/25


Sunday, March 9, 2025

Real Life: On Abbas Kiarostami’s “Close-Up”

by Derek Neal

Close-Up, a 1990 Iranian film directed by Abbas Kiarostami, is one of the rare films where the viewing experience is enhanced by knowing certain details beforehand.

The movie opens with a scene in a taxi. A journalist is in the front seat while two armed military police officers sit in the back. The journalist explains to the driver that they are on their way to arrest a man who has been impersonating the filmmaker Mohsen Makhmalbaf. So far, so good. But what you don’t realize, unless you’re familiar with the film, is that most of these people are not actors. The journalist is a journalist and the police officers are police officers. So the director is going for realism, eschewing the use of professional actors in the manner of Bresson? Not quite. Makhmalbaf is a real Iranian director, and someone really did impersonate him—this is a true story, and many of the people in the film play themselves. The journalist is the real journalist who broke the story, which brought it to the attention of Kiarostami, leading him to make the movie. The officers are the real officers who arrested the impersonator. They are on their way to the real house of the family whom the impersonator conned, and the family as well as the impersonator play themselves, too. Everything in the film really happened—this is real life, close up. Or is it? Does filming something change it? Does a reenactment alter the original act? Can a copy replace the original? What is real and what is make-believe, and can we cross back and forth between the two realms? Can one exist without the other? These are the questions the film presents to its viewers.

In the taxi on the way to the Ahankhah residence, where the impersonator will be arrested, the journalist asks the taxi driver if he knows the director Makhmalbaf, to which he responds, “I don’t have time for movies. I’m too busy with life!” Later, when Kiarostami tells the judge who will preside over the case that he would like to film the trial, the judge tells him, “I took a look at this case, and I don’t see anything worth filming.” The judge and the taxi driver insist on the difference between movies and real life, or more broadly, art and reality. Kiarostami seems to have something else in mind.

In the former scene, the scene in the taxi, the journalist Farazmand is playing himself whereas the taxi driver is portrayed by an actor (at least, he’s not listed as playing himself in the opening credits). This conversation, then, may not have really occurred, although the drive to the Ahankhah residence certainly did. Kiarostami has presumably inserted this dialogue to make the viewer question what is presented on the screen. Is it a movie, or is it life? The scene with the judge is also a reenactment, although this time all the characters play themselves. The judge is the real judge and Kiarostami, off camera, asks him questions. We are inclined to believe that this dialogue did take place, that the judge did question the worth of filming such a simple trial. But did he? There’s no way to know, and attempting to find out the truth only leads to more questions, as I discovered while researching the movie.

The film itself, contrary to the opinion of the taxi driver or the judge, proposes that something becomes worth filming when it is filmed; in other words, the act of representation itself makes its subject worthy of representation. No external explanation is needed other than the resulting piece of art, which, once it has been created, becomes a part of real life. Read more »