The Black Atlantic as Intersubjectivity

by Herbert Harris

I met Paul Gilroy at a conference on racial identity at Yale in the early 1990s. I was finishing my training and eager for new ideas. He was soft-spoken and thoughtful, but his presentation was quietly electrifying. He seemed to be rethinking race, culture, and identity in a radically creative way. The presentation distilled many ideas he would soon publish in The Black Atlantic: Modernity and Double Consciousness, a book that has influenced the field for more than thirty years. A key argument is that, over centuries of the Atlantic slave trade and beyond, a transnational culture has emerged that isn’t solely African, American, Caribbean, or British, but a blend of all these. It arose from the history of slavery and colonialism, but what holds it together isn’t its shared history or ongoing oppression. Gilroy argues that this common culture, which he calls the Black Atlantic, is maintained through the continuous movement of people, ideas, and creative works across the ocean. Its fluidity, hybridity, art, music, and literature are its defining features.

As I revisited Gilroy’s ideas over the years, they grew more impressive in their explanatory and predictive power. The Black Atlantic feels more alive and enduring than many nations, cultures, and institutions. Yet the question remains: how did it attain that durability, and how did art and music play such a central role in its flourishing?

The Black Atlantic is nothing if not a plurality, and living in multiple subjectivities would seem bound to produce conflict and fragmentation. In collectives such as nations and cultures, that multiplicity of subjectivities puts them at constant risk of coming apart. W.E.B. Du Bois introduced the concept of double consciousness to capture this tension. Gilroy embraced double consciousness and hybridity as constitutive features of the Black Atlantic, not problems to be overcome but sources of its vitality. What kind of psychology could make this possible? What enables hybrid identities to flourish rather than fragment?

The idea of intersubjectivity, a shared world constituted by mutual recognition, may provide an explanation. Read more »

Sunday, March 1, 2026

C. Thi Nguyen’s The Score (And My Quest For The Perfect “Baguette”)

by Mark R. DeLong

Cover of The Score. The cover's background is a picture of a cloudy sky, with wispy clouds on bright blue. Three dashboard-like gauges are superimposed on the background.

C. Thi Nguyen’s The Score: How to Stop Playing Somebody Else’s Game (Penguin, 2026; bookshop.org) arrived in my mailbox just in time. I was feeling that finally, after months of practice and oven-play, I was about done “perfecting” a bread. At the beginning of 2025, I had resolved to “perfect three bread recipes.” I wound up the year with just one so-called perfect bread, and a second in the process of getting there. The second bread? The French baguette. But what I was pulling out of the oven in December 2025 and January 2026 was a distant cousin—a pleasingly plump version of the slim, stick-like baguette. I could hear the French baker cry, Monsieur, the bread you bake is not la baguette classique. N’est-ce pas? But the baker’s tears wouldn’t move me; my perfect “baguette” could not be a mere footnote to a rigid standard. (Some probably would call my version a bâtard, but that word of course means “bastard” and I shy from it, even though my perfect bread turned out to be a real bitch to discover.) I found that Nguyen’s book gave shape to the story I tell myself of the year-long experience with beguilingly simple, quite sticky, and enormously challenging (and fun) calculations I made for the best bread in the world.

About halfway through the book, Nguyen lays out a particularly tight relationship between rules—“algorithmic rules” in particular—and recipes. My baking experience had resonated through the preceding chapters, but in that section of the book Nguyen tightened the connection.

“My mother was an excellent cook,” he writes. “She learned to cook not from cookbooks and recipes, but from her family and friends in Vietnam.” But, unlike his mother, Nguyen learned from cookbooks: Julia Child’s for French cooking and Marcella Hazan’s for Italian, both of them sources for recipes in a format that we today almost intuitively understand: standardized measures, quite precise and ordered instructions, and assumptions of cooking skill that embrace even the novice cook or baker. Nguyen continues his story: “So on one visit home, I asked my mom to teach me my very favorite Vietnamese dish: hot and sour catfish soup…. What she gave me wasn’t anything I could follow; it was nothing like a recipe at all. It seemed to me, at the time, like this vast and disorganized ramble, a weird organic messy flowchart of possibilities and decision and judgment calls.” After a bout of confusion, Nguyen came to see that in fact his mother had given him a recipe (not, as he curtly said to her, some “Third World bullshit”). The contrast of her “organic messy” recipe and his rigid modern expectation revealed to him some of the effect that modern recipes had on the experience of cooking: “These precise, modern recipes had, in a weird way, disrupted my sense of what cooking was and could be,” he recalls. “I had come to assume that cooking—real cooking—had to proceed via an algorithm. I had refused to accept that real cooking might involve a messy and organic decision space, full of a thousand decision points and judgment calls.”

Before this epiphany, his understanding of “real cooking” had been “value captured”—defined by the rules and regimented modes of modern recipes. (It’s worth knowing that Nguyen was a food writer before he became a philosophy professor at the University of Utah.)

Having seen the effect of modern recipes, Nguyen renewed his understanding of “real cooking.” Read more »

On Usable Rage

by Marie Snyder

How can we possibly approach the world today without being in a constant state of rage? Philosopher and psychoanalyst Josh Cohen’s All the Rage suggests how to make this feeling more useful to us. He writes from a range of perspectives, everything from political uprisings to the patients in his office, and from how rage plays out in the world to how it manifests in our own minds, all with a thread of climate change activism throughout. Ideas are illustrated with examples from fictional characters, historical figures, and his own family. It hardly seems possible to do all that in just 195 pages, yet the book is a thought-provoking and entertaining read, comfortably shifting from micro to macro issues to explore four kinds of rage.

SOME DEFINITIONS

In day-to-day conversation, we use “rage,” “anger,” and “aggression” almost interchangeably. We do the same with “emotion” and “feeling,” and with “drive” and “instinct.” The book uses these terms more precisely, so a bit of a glossary might be useful. The order of events when we’re outraged turns out to matter. Cohen explains that aggression is often our direct response to a stimulus, and anger is what happens after that first spark of action, when we choose to hold it back. He puts it succinctly in an interview with The Philosopher:

“Aggression is a kind of stimulus response. It’s what we do with a provocation, which might be an injury; it might be a humiliation, an insult of some kind, something that arouses us to retaliation. Aggression is the way that we get rid of that load of stimulus in action. … Anger is a way of holding on. Feelings are ways of holding on to stuff. When we can’t bear to feel something we instead discharge it in action. …  Anger is something that you’re left with when action is unavailable to you or perhaps when you try to take the experience to a higher level, i.e. to maintain it in the consciousness as something to experience and process psychically rather than discharge in an action. That’s why psychoanalysis tends to think of anger as a human achievement.”

It’s not the case that we’re insulted, then feel anger, and then rationally decide to act or not act, even if it sometimes feels like that. Instead, the impetus to act is immediate following an enraging stimulus, and the restraint is what leads to the feeling of anger. I think that’s the idea. It’s counterintuitive to me, so it’s useful that it was repeated a few times in the book. Read more »

Wednesday, December 17, 2025

Inferior Men: Donald Trump and the I Ching

by Mark Harvey

The dark power at first held so high a place that it could wound all who were on the side of good and of the light. But in the end it perishes of its own darkness… (I Ching, Hexagram 36, Ming I)

Ji Chang (King Wen)

Years ago, someone gave me a copy of the I Ching, Book of Changes, translated by Richard Wilhelm. It’s a heavy little brick, more than 700 pages, and bound with a bright yellow cover. When I received the gift, I looked at it skeptically and never expected to read it. To my surprise, it’s been with me ever since, I’ve read it dozens of times, and the spine of the book is sadly broken from too many readings.

I grew up in a family with fairly skeptical parents and some very skeptical siblings. I vividly remember asking my parents if Santa Claus was real at an age when parents should definitely not disillusion a child of that belief. My parents looked at each other with pained expressions and then, too honest to lie about it, tried to let me down gently. So nothing in my formative years prepared me to like the I Ching.

Not all I Chings are equal, and there are some pretty flimsy versions out there. There’s even an app called I Ching Lite. Of the English versions, the Wilhelm/Baynes translation is one of the most respected.

The I Ching is said to be almost 3,000 years old and originated in China’s Zhou Dynasty. Its structure is built from hexagrams: figures of six stacked lines, each line either broken or unbroken. You’ll remember from your high school math that six positions with two options each (broken or unbroken) give 2⁶ = 64 possible combinations. And that’s what the I Ching is: 64 hexagrams, each with its own special meaning. Read more »
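The counting argument above can be made concrete with a few lines of Python. This is just an illustrative sketch of the combinatorics (nothing from Wilhelm's text); the 0/1 encoding and the `render` helper are my own conventions:

```python
from itertools import product

# Each line of a hexagram is either broken (0, yin) or unbroken (1, yang).
# A hexagram stacks six such lines, so the full set of hexagrams is the
# Cartesian product {0, 1}^6.
hexagrams = list(product((0, 1), repeat=6))

print(len(hexagrams))  # 64 distinct hexagrams

def render(hexagram):
    """Draw a hexagram, bottom line first, as the I Ching is traditionally read."""
    symbols = {0: "-- --", 1: "-----"}
    return "\n".join(symbols[line] for line in reversed(hexagram))

# Hexagram 1, Ch'ien (The Creative): six unbroken lines.
print(render((1, 1, 1, 1, 1, 1)))
```

Two choices per line, compounded six times, is all it takes to generate the book's entire table of contents.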

Tuesday, December 2, 2025

Self and No Self

by Herbert Harris

No Selfie

Many years ago, I began a meditation practice, sparked by curiosity and vague, middle-aged worries about stress and blood pressure. To my surprise, it quickly became a regular part of my life. I restlessly explored many forms of meditation and meditation groups, eventually coming to the San Francisco Zen Center. Before long, I found myself seated on a black cushion in the meditation hall each morning at 5:30. Twenty years later, and 2,500 miles away, I have a much more relaxed schedule, but I am still at it.

What is it like to meditate? This is a question I am constantly asked. Would a philosopher or scientist say there’s a distinct state of consciousness with its own special qualia? I don’t know. Maybe I’ve been doing it wrong, but meditation has never given me an experience that I would call altered consciousness. I’ve come to think the more interesting question is not what meditation feels like moment to moment, but what it is like to be a meditator, to live a life punctuated by these quiet, unremarkable moments of sitting still.

There are many ways our minds can store the details of our experience. We put facts and figures in one place, sensory experiences in another, and skills and procedures in yet another. There is a special kind of memory, called episodic memory, that holds not just the information about an event, but also a sense of our being there. Recalling episodic memories gives us a faint sense of time travel. These are the memories we can reinhabit. We remember a beach vacation as if we can feel the warm sand between our toes, hear the gulls above, and sense the light breeze on our skin. They have a lived-through quality, a presence that feels like “me.”

I have a torrent of episodic memories from my time in San Francisco, where I had just started a new job. I felt like a tourist; every street, every café, every meeting at the new company introduced a parade of unfamiliar faces. My memory was overloaded with experiences and sensations. It felt like my life had entered a new incarnation, complete with a new cast of characters I had to learn. As I stepped into a new role, I became, to some extent, a different person as I adapted to meet new duties and responsibilities. I was surrounded by people who each had hopes and expectations that I would be a good employee, a respectable colleague, and a friend. These hopes and expectations exerted palpable influences on my sense of self.

In the meditation hall, expectations were few. Read more »

Wednesday, November 5, 2025

An Embodied Mathematics

by Herbert Harris

The Stepped Reckoner, a calculating machine invented by philosopher and mathematician Gottfried Wilhelm Leibniz in 1674

Is mathematics created or discovered? For over two thousand years, that question has puzzled philosophers and mathematicians alike. In Plato’s Meno, Socrates encourages an uneducated boy to “discover” a geometrical truth simply by answering a series of guided questions. To Plato, this demonstrated that mathematical knowledge is innate, that the soul recalls truths it has always known. The intuitionists of the early twentieth century, however, rejected this idea of eternal forms. For thinkers like Poincaré and Brouwer, mathematics was not revelation but construction: an activity of the human mind unfolding in time.

The debate continues today in an unexpected new arena. As artificial intelligences start to generate proofs, conjectures, and even entire branches of formal reasoning, we are prompted to ask again: what does it mean to do mathematics? Current systems excel at symbol manipulation and pattern matching, but are they truly thinking in any meaningful way, or just rearranging signs? The deeper question is how humans do mathematics. What happens in the brain when a mathematician recognizes a pattern, intuitively sees a relation, or invents a new kind of number?

In what follows, I’ll trace that question from ancient philosophy to modern neuroscience and then to the newest foundations of mathematics. We’ll see that mathematical invention may be the natural expression of the brain’s recursive, embodied intelligence, and that this perspective could transform how we think about both mathematics and AI. Read more »

Wednesday, October 8, 2025

The Alien Mirror: Humanizing Artificial Intelligence

by Herbert Harris

Humanistic AI

Artificial intelligence has emerged not as a single technology but as a civilization-transforming event. Our collective response has predictably polarized between apocalyptic fears of extinction and utopian dreams of abundance. The existential risks are real. As AI systems become increasingly powerful, their inner workings become increasingly opaque to their creators. This raises very reasonable fears about our ability to control them and avoid potentially catastrophic outcomes. However, between apocalypse and utopia, there may be a subtler and perhaps equally profound danger. Even if we navigate the many doomsday scenarios that confront us, the same opacity that makes AI potentially dangerous also threatens to undermine the foundations of humanism itself.

The dangers of AI are often perceived as disruptions and displacements that will temporarily shake up the workforce and the economy. These changes are significant losses, but we have overcome greater challenges in the past. Copernicus removed us from the center of the universe; Darwin took away our biological uniqueness; Freud showed we are not masters of our own minds. Each revolution has both humbled and enriched humanity, opening new ways to understand what it means to be human.

People will still exist, but who will we be when machines surpass doctors, teachers, artists, and philosophers, not in some distant future, but within our lifetimes? AI tutors already offer more personalized instruction than most classrooms. Diagnostic models outperform radiologists on complex scans. Generative systems produce vast amounts of art, music, and text that are indistinguishable from human work. None of this is inherently harmful. Students might learn more, patients might be diagnosed earlier, and art could thrive in abundance. However, the roles that once carried social meaning and usefulness risk becoming merely decorative. Dehumanization does not necessarily mean extinction; it can mean the loss of purpose and self-worth. Read more »

Sunday, September 21, 2025

Three Keys of Friendship, with Aristotle as Guide

by Gary Borjesson

The happiest, most fulfilled moments of my life have been when I was completely aware of being alive, with all the hope, pain, and sorrow that entails for any mortal being. —Jenny Odell

Applied Philosophy
Back when I was a professor, I loved teaching intro to philosophy courses. Philosophy’s essence comes alive when you work with people whose view of themselves and the world is still open and underway. One of the texts I used was Aristotle’s timeless exploration of friendship in the Nicomachean Ethics. I hoped it would win the students over, showing them how interesting and practically minded philosophy could be. We spent a month exploring what’s known about friendship, how it is known, and why it matters.

We don’t have a month together, but think of this essay as an invitation to go deeper into this most familiar of subjects. I offer three big things worth knowing about friendship. First, it’s worth knowing why friendship matters, why we agree with Aristotle when he says that ‘even if we had all the other goods in life, still no one would want to live without friends.’ That friendship matters is obvious, which is why the current crisis in friendship (especially among men) is getting so much attention. Second, it’s worth knowing that there are three kinds of friendship; recognizing these can shed light on how our own friendships work. Finally, it’s worth knowing that friendship and justice go hand in hand. This may seem obvious; after all, when is it ever friendly to be unjust? Nevertheless, the implications of this provoked students, and no doubt will provoke some readers.

1. Why Friendship Matters: It Empowers and Enlivens
Aristotle opens his discussion of friendship by remarking that friendship—‘when two go together’—makes us more able to think and act. He’s alluding to a famous passage from the Iliad, where war-like Diomedes volunteers for a dangerous spying mission behind enemy lines, saying:

But if some other man would go with me,
my confidence and mood would much improve.
When two go together, one may see
the way to profit from a situation
before the other does. One man alone
may think of something, but his mind moves slower.
His powers of invention are too thin.

Diomedes chooses resourceful Odysseus as his companion: “If I go with him, we could emerge from blazing fire and come home safe, thanks to his cleverness.”

This is our song as social animals: by going together we are safer, and our prospects for a good outcome are improved. In evolutionary terms, friendship is empowering because cooperation is a non-zero-sum game that confers a greater-than-the-sum-of-the-parts power on the friends. Having friends is a means of better adapting to the world.

But friends aren’t just an empowering means to an end, they can also be an enlivening end in their own right. Read more »

Tuesday, September 9, 2025

Psychoanalysis 2.0

by Herbert Harris

I began my psychiatric training in 1990, the year that marked the start of a program called the “Decade of the Brain.” This was a well-funded, high-profile initiative to promote neuroscience research, and it succeeded spectacularly. New imaging techniques, molecular insights, and psychopharmacological discoveries transformed psychiatry into a vibrant biomedical science. The program brought thousands of careers, including my own, into the neurosciences.

Despite its progress, the Decade of the Brain also widened an existing rift. This was the large gap between the psychoanalytic tradition and the biological sciences. The divide wasn’t new. Freud started with neurology but shifted to psychoanalysis when the brain sciences of his time couldn’t fully explain the complexities of the mind. For the first half of the 20th century, psychoanalysis was the main way to understand mental illness. Then, in the 1950s, new psychiatric drugs appeared: chlorpromazine for psychosis, lithium for mood disorders, and antidepressants for depression. For the first time, it seemed possible to treat mental illness by directly targeting the brain, rather than through long-term therapy or institutional care.

By the time I was a resident, these diverging traditions had opened into a chasm. On one side was biological psychiatry, focused on neurotransmitters, neuroimaging, and cognitive-behavioral treatments, with outcomes that could be measured and tested. On the other side were psychoanalysis and its branches: attachment theory, object relations, and the investigation of unconscious conflicts through language, narrative, and symbolism. They had become separate languages, spoken within distinct professional communities, each wary of the other. There were occasional efforts at rapprochement, but little sustainable progress. By the end of the Decade of the Brain, reconciliation seemed almost impossible. I was fortunate to be in a training program that had a research track, allowing me to work in a lab, but I also had mentors who were distinguished analysts. It was like being in two different residencies.

Both have proven valuable over the years, but I never expected them to converge. However, today, circumstances appear to be shifting. A merging of neurobiology, computational neuroscience, and neuroimaging has created a new paradigm: active inference. For the first time, we can start to identify strong links between analytic models of the mind and biological models of the brain. Read more »

Thursday, August 14, 2025

The Social Origin of Free Will

by Herbert Harris

In 2023, Stanford neuroscientist Robert Sapolsky’s bestseller Determined declared free will to be a complete illusion. Sapolsky gathered a wealth of data, from neuroscience to quantum mechanics, in an effort to deliver a final knockout blow to our intuitive ideas of freedom. He then explored a wide range of ethical and social issues where our questionable notions of freedom have led to misguided and often inhumane policies and practices. Since its publication, the book has continued to attract criticism for its deterministic stance, and experts from many fields have joined the lively discussion.

Watching the debate unfold shows that while we have strong ideas about what free will isn’t, we lack a clear understanding of what freedom actually could be. We agree that whatever it is, it would be incompatible with both mechanical determinism and total randomness. We also think that free will is connected to that vaguely self-conscious feeling that we are the originators of our actions. If free will exists, it probably occupies a middle ground, neither too deterministic nor too random. But what exactly is it?

Neuroscience may not be as conclusive about the end of free will as Sapolsky suggests, but it has not been particularly effective in producing alternative explanations. Science generally depends on deterministic approaches — such as reproducible experiments — to test hypotheses that can be proven false. It might be, as critics like Jessica Riskin argue, that science is the wrong place to seek an understanding of free will. However, a potential way forward could come from an unexpected source. In the early nineteenth century, philosopher G.W.F. Hegel developed a theory of freedom, defining it as a result of human social interactions. Read more »

Thursday, April 24, 2025

What Is the Game?

by Jerry Cayford

I listened some weeks ago to a terrific discussion between Ezra Klein and Fareed Zakaria. And it really was terrific. They were both at the top of their game, doing a certain thing at a very high level. Still, I have a slight bias against both these guys, and complicated feelings about what they do so well. My pleasure in their intelligence was tinged with frustration that they aren’t better, and with a slight melancholy about the path not taken. Critique mixed with autobiography. I was supposed to be them.

I knew early on that my father did not aspire for me to be president, like other boys’ fathers did, but rather to be the president’s closest adviser. I was supposed to grow up to be McGeorge Bundy—to pick a name from when I was first imbibing this career plan—or Jake Sullivan, to pick someone recent. And if I did not make it quite that close to the seat of power, well, I was still supposed to be Ezra Klein or Fareed Zakaria or some other talented policy analyst, saving the world through the practice of intellectual excellence.

What Klein and Zakaria practice are “the critical and analytical skills so prized in America’s professional class,” to use a phrase from an article of a couple decades ago unearthed by Heather Cox Richardson—another excellent practitioner—about the Bush administration’s replacement of critical and analytical skills with faith and gut instinct. Richardson recalls a passage from that article strikingly suggestive of Donald Trump’s current administration:

These days, I keep coming back to the quotation recorded by journalist Ron Suskind in a New York Times Magazine article in 2004. A senior advisor to President George W. Bush told Suskind that people like Suskind lived in “the reality-based community”: they believed people could find solutions based on their observations and careful study of discernible reality. But, the aide continued, such a worldview was obsolete. “That’s not the way the world really works anymore…. We are an empire now, and when we act, we create our own reality. And while you’re studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors…and you, all of you, will be left to just study what we do.”

In our era of conspiracy theories, fake news, and relentless lies, this rejection of the “reality-based community” sounds like a conservative confession of contempt for truth. It is easy to mock, and neither Suskind nor Richardson holds back.

And yet, beneath the Bush official’s unfortunate phrasing is a point that is basically right. Read more »

Wednesday, March 12, 2025

On Achieving Tennisosity

by Scott Samuelson

Though I’m at best a mediocre tennis player, I’ve achieved something in the sport that the pros achieve only at their finest, which I’ve taken to calling “tennisosity,” a hybrid of “tennis” and “virtuosity.” I coined the term several years ago, in the sweaty aftermath of a match in which my opponent and I had entered into its state. Among my friends and family, the ugly term tennisosity has stuck—I suspect because it describes something vitally yet elusively important, something with an ethical and an aesthetic dimension that can apply to any meaningful human activity.

This is NOT a picture of my backhand.

Tennisosity (in the realm of tennis) is when you and your opponent are so well-matched that the competition not only raises both of your play to a higher level but perfectly realizes the game of tennis. The way I put it to my exhausted opponent was, “We just played the 2008 Wimbledon Final”—the battle between Roger Federer and Rafael Nadal, sometimes called the greatest match ever, where Nadal won his first Wimbledon against the defending champ (Federer had won the previous five straight, including the last two against Nadal). Even though neither my opponent nor I could return the serve of your average high school varsity tennis player, our rallies were just as dramatic as Federer’s and Nadal’s, each of us got to just as many shots that the other didn’t think could be gotten to, our aces were just as glorious, and our double faults were just as tragic—at least within the context of our game.

I admit that my backhand isn’t at the level of Roger Federer’s! I’m giggling at even drawing the comparison. By the normal metrics and standards of excellence in tennis, everything about my game is junk. Also, there was no public recognition riding on my match—not even the chintzy trophy of a local tournament, much less the engraved silver of the most storied competition in tennis. In fact, nobody was watching.

And yet, had others been watching us, I believe that their aesthetic experience of our tennis match would have been similar in kind to the great Wimbledon Final—obviously not concerning our individual skillsets but concerning what the interactive combination of our skillsets involved. My opponent and I had to dig deep again and again. The same kind of grit and imagination Federer and Nadal had to draw on, we had to draw on. The glory of tennis was on display for all to see—even though nobody happened to be there to see it. If I remember right, one of us cried. Read more »

Tuesday, February 11, 2025

What Natural Intelligence Looks Like

by Scott Samuelson

Jusepe de Ribera. Touch. c. 1615, oil on canvas. Norton Simon Museum, Pasadena.

When we conjure up what thinking looks like, what tends to leap to mind is an a-ha lightbulb or a brow-furrowed chin scratch—or the sculpture The Thinker. While there’s something deservedly iconic about how Rodin depicts a powerful body redirecting its energies inward, I think that the most insightful depictions of thinking in the history of art are found in the work of Jusepe de Ribera (1591-1652), a.k.a. José de Ribera or Lo Spagnoletto (The Little Spaniard). In a time when we’re alternately fascinated and horrified by what artificial intelligence can do, even to the point of wondering whether AIs can think or be treated like people, it’s worth asking some great Baroque paintings to remind us of what natural intelligence is.

Early in his artistic career, Ribera went to Rome and painted a series on the senses. Only four of the original five paintings survive (we suffer Hearing loss). Touch, the most interesting of the remaining paintings, depicting a blind man feeling the face of a sculpture, launches a crucial theme throughout Ribera’s work.

Let’s try to imagine Ribera in the process of making this painting. He looks at live models, probably at an actual blind man. He studies prints, sketches, fusses with his paints, maybe takes a walk. He sleeps on it. He chats with a friend and lights on an approach: a blind man exploring a sculpture by feeling it. He hurries back to his studio and begins to paint. He notices more about his subject, makes a mistake, fixes it. He holds up a jar of paint—no, that one would be better. Somewhere in this process, I imagine, it dawns on him that he’s doing the same thing as the blind man. (Maybe this is why he decides to put the painting on the table—though the painting is also a powerful visual reminder for us that there are always limits to our engagement with the world.)

Regardless of what actually went through Ribera’s head, the point I’m trying to make has been illustrated—both figuratively and literally—by a contemporary artist. In the 1990s Claude Heath was sick of the ideas of beauty that governed his artistic work. So, he lit on the idea of drawing a plaster cast of his brother’s head—blindfolded. Using small pieces of Blu-tack for orientation, one stuck into the top of the cast, one into his piece of paper, he felt the head’s contours with his left hand and drew corresponding lines with his right. “I tried not to draw what I know, but what I feel . . . I created a triangle, if you like, between me, the object, and the drawing . . . It was a bit of a transcription.” He didn’t look at what he was doing until he was finished. By liberating himself from ideas of beauty, he made beautiful drawings. Read more »

Friday, October 18, 2024

Becoming What We Are: Authenticity as a Practice

by Gary Borjesson

Become what you are, having learned what that is. —Pindar

[To protect their privacy, I have changed identifying details of those mentioned here.]

Aristotle

What do we want for our lives? It’s a peculiarly human question; other animals don’t appear to be worrying about it. I’ve asked myself this question, sometimes with curiosity, sometimes more desperately, for as long as I can remember. I’m always moved when patients raise it in their therapy. A man who retired from a successful career said that when he looks into the future without the mantle of his professional title and status, he feels empty and lost, ashamed that at 70 he doesn’t know what he wants.

Sometimes we raise the question ourselves; sometimes the world raises it for us. Another patient, whose boyfriend just “dumped” her, is wrestling with her alcohol use. The men she wants in her life don’t want an alcoholic in theirs. She’s angry at the thought of sobering up for someone else: “Wouldn’t that be inauthentic?” At the same time, she (authentically) wants a partner in her life.

She knows what most of us know, that we want to be authentic. By “authenticity” I mean living in a way that is true to oneself and to one’s situation in the world. (For the bigger philosophic picture, see my previous column, Reclaiming Authenticity as an Ethical Ideal.) Authenticity resonates because it is that rare thing, an ideal that most of us embrace—despite our divergent religious, ethnic, social, and political values. After all, each of us faces (or not) the question of how to become our best selves.

Although we must ask and answer that question for ourselves, I will suggest a few core principles that can guide our way. I’ll start with Aristotle’s view that the one thing we all want from life is to flourish, which means living in a way that fulfills our nature. This might sound about as helpful as telling someone who is struggling, “Just be true to yourself!” How do we even know what our true self is? If we’re a lonely alcoholic, is our true self more of the same, or is it sober and in a relationship?

We can find some guidance by unpacking two principles of flourishing that extend to living authentically.

Thursday, August 22, 2024

Path and Pathology: Some Philosophic Aspects of Psychotherapy

by Gary Borjesson

I came to psychotherapy from philosophy, first starting therapy in my forties while on sabbatical from St. John’s College. I was struck by its transformative power—so struck that I ultimately resigned my tenure and returned to graduate school to train as a therapist. But I’ve hardly left philosophy behind. Freud reminds me of Nietzsche. Socrates’ fingerprints are all over the motives and methods of psychoanalysis. Donald Winnicott and Erik Erikson bring to mind Hegel, and the list goes on.

Philosophy and psychotherapy (and the humanist tradition in general) see our lives as developmental journeys. In the spirit of Socrates, they view self-exploration and self-awareness as essential to self-actualization. This may seem obvious, but it’s easy to lose sight of, which makes it remarkable that many academics don’t believe being a “philosopher” need include examining themselves. Yet, how could it not? After all, philosophy means the love of wisdom, and who would say of a true philosopher what Regan said of her father, King Lear, that “he hath ever but slenderly known himself.”

It’s equally remarkable that the Socratic spirit is often absent in therapists, in their own lives and in their work with clients. A variety of forces (not least insurance companies) lead many therapists and clients to focus on techniques and tools for reducing symptoms; this draws attention away from the person as a whole. There is nothing wrong with focusing on symptom-relief, as the advertised “evidence-based” “solution-focused” therapies like CBT do. After all, people vary in what they want and need from therapy, so we should welcome experimentation and a variety of approaches.

That said, if therapy is to encourage deeper self-exploration, it needs to go beyond symptoms to the whole person suffering them.

Monday, July 8, 2024

The Absent Self

by Christopher Horner

Insist on your self; never imitate. —Emerson

How can a man of consciousness have the slightest respect for himself? —Dostoevsky

The key promise of the modern world was the freedom of the individual. It was the motivating cry of the great revolutions of the modern age, and it meant at least two things: first, the removal of the external barriers to freedom: no more oppression by kings and priests, and later, freedom from the democratic masses themselves: the ‘tyranny of the majority’. Second, freedom as the ability to be oneself, to express who one truly is; the ideal of authenticity. The free, unique individual, able at last to express their unique self. But this ‘real’ self needs to be found in order to be freed, and this has proved to be more difficult than the removal of oppressive rulers.

Authenticity

The authentic self is hard to reach. Something keeps getting in the way. Perhaps the culprit is an inauthentic self, a mask or double woven by social convention, and adopted through self-deception. So one becomes two, or perhaps three. The alienated self must discard the false in order to find the True Self. The great task for moderns is to be authentic and unique.

That this should seem natural to us may be because we have been shaped by the brave new world of bourgeois freedom that followed the Age of Revolutions. Mill, Constant, de Tocqueville, and Emerson all take it as their theme, which was also that of much romantic art of the period. Here is Ralph Waldo Emerson, in his essay Self-Reliance:

Whoso would be a man, must be a nonconformist. He who would gather immortal palms must not be hindered by the name of goodness, but must explore if it be goodness. Nothing is at last sacred but the integrity of your own mind. [1]

Emerson continues in this vein at some length, in a high-flown peroration. He refers repeatedly to the evil effects of the crowd, the multitude, the mass of men (it’s always men) who threaten to suffocate the genius of the individual. This can seem like tedious over-insistence. It still finds an audience, especially in the self-help and get-ahead-in-business circles that dream of the remarkable person who achieves success by liberating their unique self with all its talents.

Monday, May 27, 2024

The Large Language Turn: LLMs As A Philosophical Tool

by Jochen Szangolies

The schematic architecture of OpenAI’s GPT models. Image credit: Marxav, CC0, via Wikimedia Commons

There is a widespread feeling that the introduction of the transformer, the technology at the heart of Large Language Models (LLMs) like OpenAI’s various GPT-instances, Meta’s LLaMA or Google’s Gemini, will have a revolutionary impact on our lives not seen since the introduction of the World Wide Web. Transformers may change the way we work (and the kind of work we do), create, and even interact with one another—with each of these coming with visions ranging from the utopian to the apocalyptic.

On the one hand, we might soon outsource large swaths of boring, routine tasks—summarizing large, dry technical documents, writing and checking code for routine tasks. On the other, we might find ourselves out of a job altogether, particularly if that job is mainly focused on text production. Image creation engines allow instantaneous production of increasingly high-quality illustrations from a simple description, but plagiarize and threaten the livelihood of artists, designers, and illustrators. Routine interpersonal tasks, such as making appointments or booking travel, might be assigned to virtual assistants, while human interaction gets lost in a mire of unhelpful service chatbots, fake online accounts, and manufactured stories and images.

But besides their social impact, LLMs also represent a unique development that makes them highly interesting from a philosophical point of view: for the first time, we have a technology capable of reproducing many feats usually linked to human mental capacities—text production at near-human level, the creation of images or pieces of music, even logical and mathematical reasoning to a certain extent. However, so far, LLMs have mainly served as objects of philosophical inquiry, most notably along the lines of ‘Are they sentient?’ (I don’t think so) and ‘Will they kill us all?’. Here, I want to explore whether, besides being the object of philosophical questions, they also might be able to supply—or suggest—some answers: whether philosophers could use LLMs to elucidate their own field of study.

LLMs are, in many of their uses, what a plane is to flying: the plane achieves the same end as the bird, but by different means. Hence, it provides a testbed for certain assumptions about flight, perhaps bearing them out or refuting them by example.

Monday, April 22, 2024

The Irises Are Blooming Early This Year

by William Benzon

I live in Hoboken, New Jersey, across the Hudson River from Midtown Manhattan. I have been photographing the irises in the Eleventh Street flower beds since 2011. So far I have uploaded 558 of those photos to Flickr.

I took most of those photos in May or June. But there is one from April 30, 2021, and three from April 29, 2022. I took the following photograph on Monday, April 15, 2024 at 4:54 PM (digital cameras can record the date and time an image was taken). Why so early in April? Random variation in the weather, I suppose.

Irises on the street in Hoboken.

That particular photo is an example of what I like to call the “urban pastoral,” a term I once heard applied to Hart Crane’s The Bridge.

Most of my iris photographs, however, do not include enough context to justify that label. They are just photographs of irises. I took this one on Friday, April 19, 2024 at 3:23 PM.

Monday, April 8, 2024

Third Places and American Libraries

by Mark Harvey

Don’t join the book burners. Don’t think you are going to conceal faults by concealing evidence that they ever existed. Don’t be afraid to go in your library and read every book…  —President Dwight Eisenhower, 1953

Andrew Carnegie

The other day I stopped in at one of those coworking spaces to see if it would be worth joining in an effort to increase my productivity. Productivity, in my case, is a fancy word to describe getting my taxes done on time, answering a few emails, staying atop some small businesses, and doing a little writing. I’m not exactly a threat to mainland China.

Unfortunately, the place I visited had all the charm of a gulag in far eastern Russia, with poor lighting and about four pale characters staring at their computer screens as if they could see the eternal void in the universe and had a longing to visit. No thanks.

It did get me thinking about “third places,” and libraries in particular. I believe the term “third place” was coined by the writer Ray Oldenburg in his book The Great Good Place. First places are our homes, second places are where we work, and third places are where we go to get relief from the first and second places. They include churches, libraries, pubs, cafes, parks, gyms, and clubs.

My second place is a beautiful ranch in Colorado, so I have little to complain about, but when it comes to the close work of being on a computer, I really value third places. Scholars have described Oldenburg’s third place as having eight features, including neutrality, leveling qualities, accommodation, a low profile, and a sense of home. In short, it’s a place that is welcoming, not fancy, free of social hierarchies, free of dues, and imparts no obligation to be there. That perfectly describes American libraries, one of our greatest institutions.

Monday, March 18, 2024

The Vegetarian Fallacy

by Jerry Cayford

Atelier ecosystemes des communs, Alima El Bajnouni, CC BY-SA 4.0, via Wikimedia Commons

The Vegetarian Fallacy was so dubbed by philosophy grad students in a well-oiled pub debate back in the 1980s. There is a fundamental conflict—so the argument went—between vegetarians and ecologists. The first principle of ecology—everything is connected to everything else (Barry Commoner’s first law)—is incompatible with the hands-off, “live and let live” ideal implicit in ethical vegetarianism. The ecologists took the match by arguing that, pragmatically, animals either have a symbiotic role in human life or else they compete with us for habitat, and those competitions go badly for the animals. In the long run, a moral stricture against eating animals will not benefit animals.

Now, pub debates are notoriously broad, and this one obviously was. A swirl of issues made appearances, tangential ones like pragmatism versus ethics, and central ones like holism versus atomism. In the end—philosophers being relatively convivial drinkers—all came to agree that pragmatism and ethics must be symbiotic as well, and that the practice of vegetarianism (beyond its ethical stance) could be more holistically approached and defended. Details, though, are fuzzy.

A fancy capitalized title like “Vegetarian Fallacy” may seem a bit grandiose, given the humble origins I just recounted. A grand title is justified when the bad thinking in a losing argument is also at work far beyond that one dispute. And that is my main thesis. So, although I will elaborate the two sides, it will be only a little bit. I am more interested in the mischief the Vegetarian Fallacy is perpetrating not in the academy but in wider political and cultural realms.