Monday Musing: The Palm Pilot and the Human Brain, Part III

Part III: How Brains Might Work, Continued…

In Part I of this twice-extended column, I tried to explain how very complex machines such as computers (like the Palm Pilot) are designed and built using a hierarchy of concepts and vocabularies. I then used this idea to segue into how attempts to understand the workings of the brain must reverse-engineer a design that, in this case, has been provided by natural selection. In Part II, I began a presentation of an interesting new theory of how the brain works, put forth in his book On Intelligence by the inventor of the Palm Pilot, Jeff Hawkins, who is also a respected neuroscientist. Today, I want to wrap up that presentation. While it is not strictly necessary to read Part I to understand what I will be talking about today, it is necessary to read at least Part II. Please do that now.

Last time, at the end of Part II, I was speaking of what Hawkins calls invariant representation. This is what allows us, for example, to recognize a dog as a dog, whether it is a Great Dane or a poodle. The idea of “dogness” is invariant at some level in the brain, and it ignores the specific differences between breeds, just as it would ignore the specific differences in how the same individual dog, Rover say, is presented to our senses in different circumstances, and would recognize it as Rover. Hawkins points out that this sense of invariance in mental representation has been remarked upon for some time, and even Plato’s theory of forms (if stripped of its metaphysical baggage) can be seen as a description of just this sort of capacity for invariant representation.

This is not just true for the sensory side of the brain. The same invariant representations are present at the higher levels of the motor side. Imagine signing your name on a piece of paper in a two-inch-wide space. Now imagine signing your name on a large blackboard so that your signature sprawls several feet across it. Despite the fact that completely different nerve and muscle commands are used at the lower levels to accomplish the two tasks (in the first case, only your fingers and hand are really moving, while in the second case those parts are held still and your whole arm and other parts of your body move), the two signatures will look very much the same, and could easily be recognized as yours by an expert. So your signature is represented in an abstract way somewhere higher up in your brain. Hawkins says:

Memories are stored in a form that captures the essence of relationships, not the details of the moment. When you see, feel, or hear something, the cortex takes the detailed, highly specific input and converts it to an invariant form. It is the invariant form that is stored in memory, and it is the invariant form that each new input pattern gets compared to. Memory storage, memory recall, and memory recognition occur at the level of invariant forms. There is no equivalent concept in computers. (On Intelligence, p. 82)

We’ll be coming back to invariant representations later, but first some other things.

PREDICTION

Imagine, says Jeff Hawkins, opening your front door and stepping outside. Most of the time you will do this without ever thinking about it, but suppose I change some small thing about the door: the size of the doorknob, or the color of the frame, or the weight of the door, or I add a squeak to the hinges (or take away an existing squeak). Chances are you’ll notice right away. How do you do this? Suppose a computer were trying to do the same thing. It would need a large database of all the door’s properties and would painstakingly compare every property it senses against that database. If this were how our brains did it, then, given how much slower neurons are than computer circuits, it would take twenty minutes rather than the two seconds your brain needs to notice anything amiss as you walk through the door. What is actually happening at all times in the lower-level sensory portions of your brain is that predictions are being made about what is expected next. Visual areas are making predictions about what you will see, auditory areas about what you will hear, and so on. What this means is that neurons in your sensory areas become active in advance of actually receiving sensory input. Keep in mind that all this occurs well below the level of consciousness. These predictions are based on past experience of opening the door, and they span all your senses. The only time your conscious mind gets involved is if one or more of the predictions are wrong: perhaps the texture of the doorknob is different, or the weight of the door. Otherwise, this is what the brain is doing all of the time. Hawkins says the primary function of the brain is to make predictions, and that this is the foundation of intelligence.
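Hawkins offers this as an intuition rather than an algorithm, but the contrast between exhaustive comparison and prediction can be made a little more concrete. Below is a minimal sketch in Python, assuming a toy “door” described by a handful of named properties (the property names, values, and moments of the action are all invented for the example): expectations learned from past experience are checked against what is sensed, and only a violated prediction is passed up to attention.

    # A toy sketch of prediction-as-default, not Hawkins's actual model.
    # All property names and values below are hypothetical.
    EXPECTED = {                      # learned from many past door-openings
        "grip knob":    {"knob_size_cm": 5.0, "knob_texture": "smooth"},
        "push door":    {"door_weight_kg": 20.0, "hinge_sound": "silent"},
        "step through": {"frame_color": "white"},
    }

    def open_door(sensed_stream):
        """sensed_stream: iterable of (moment, {property: value}) pairs."""
        for moment, sensed in sensed_stream:
            for prop, expected in EXPECTED.get(moment, {}).items():
                if sensed.get(prop) != expected:
                    # Only a violated prediction reaches awareness.
                    print(f"Huh? {prop} was {sensed.get(prop)!r}, expected {expected!r}")
            # Matching input is handled silently, below awareness.

    # Usage: a squeak has been added to the hinges.
    open_door([
        ("grip knob",    {"knob_size_cm": 5.0, "knob_texture": "smooth"}),
        ("push door",    {"door_weight_kg": 20.0, "hinge_sound": "squeak"}),
        ("step through", {"frame_color": "white"}),
    ])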

Even when you are asleep the brain is busy making its predictions. If a constant noise (say the loud hum of a bad compressor in your refrigerator) suddenly stops, it may well awaken you. When you hear a familiar melody, your brain is already expecting the next notes before you hear them. If one note is off, it will startle you. If you are listening to a familiar album, you are already expecting the next song as one ends. When you hear the words “Please pass the…” at a dinner table, you simultaneously predict many possible words to follow, such as “butter,” “salt,” “water,” etc. But you do not expect “sidewalk.” (This is why a certain philosopher of language rather famously managed to say “Fuck you very much” to a colleague after a talk, while the listener heard only the expected thanks.) Remember, predictions are made by combining what you have experienced before with what you are experiencing now. As Hawkins puts it:

These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence. (Ibid, p. 104)

HOW THE CORTEX WORKS

Let us focus on vision for a moment, as this is probably the best understood of the sensory areas of the brain. Imagine the cortex as a stack of four pancakes. We will label the bottom pancake V1, the one above it V2, the one above that V4, and the top one IT. This represents the four visual regions involved in the recognition of objects. Sensory information flows into V1 (over one million axons from your retinas feed into it), but information also flows down, from each region to the one below. While parts of V1 correspond to parts of your visual field, in the sense that neurons in a part of V1 will fire when a certain feature (say an edge) is present in a certain part of the retina, at the topmost level, IT, there are cells which become active when a certain object is anywhere in your visual field. For example, a cell may fire only if there is a face present somewhere in your visual field. This cell will fire whether the face is tilted, seen at an angle, light, dark, whatever. It is the invariant representation for “face”. The question, obviously, is how to get from the chaos of V1 to the stability of the representation at the IT level.

The answer, according to Hawkins, lies in feedback. There are as many or more axons going from IT to the level below it as there are going in the upward direction (feedforward). At first people did not pay much attention to these feedback connections, but if you are going to be making predictions, then you have to have axons going down as well as up. The axons going up carry information about what you are seeing, while the axons going the other way carry information about what you expect to see. Of course, exactly the same thing occurs in all the sensory areas, not just vision. (There are also association areas even higher up which connect one sense to another, so that, for example, if I hear my cat meowing and the sound is approaching from around the corner, I expect to see it in the next instant.) Hawkins’s claim is that each level of the cortex forms a sort of invariant representation of the more fragmented input it receives from the level below. It is only when we get to levels available to consciousness, like IT, that we can give these invariant representations easily understood names like “face.” Nevertheless, V2 forms invariant representations of what V1 is feeding it, by making predictions of what should come in next. In this way, each level of cortex develops a sort of vocabulary built upon repeated patterns from the layer below. So the problem was never how to construct invariant representations in IT, like “face,” out of the three layers below it; rather, each layer forms invariant representations based on what comes into it. In the same way, association areas above IT may form invariant representations of objects based on the input of multiple senses. Notice that this also fits well with Mountcastle’s idea that all parts of the cortex basically do the same thing! (Keep in mind that this is a simplified model of vision, ignoring much complexity for the sake of expository convenience.)

In other words, every single cortical region is doing the same thing: it is learning sequences of patterns coming in from the layer below and organizing them into invariant representations that can be recalled. This is really the essence of Hawkins’s memory-prediction framework. Here’s how he puts it:

Each region of cortex has a repertoire of sequences it knows, analogous to a repertoire of songs… We have names for songs, and in a similar fashion, each cortical region has a name for each sequence it knows. This “name” is a group of cells whose collective firing represents the set of objects in the sequence… These cells remain active as long as the sequence is playing, and it is this “name” that gets passed up to the next region in the hierarchy. (Ibid. p. 129)

This is how greater and greater stability is created as we move up in the hierarchy, until we get to stages which have “names” for the common objects of our experience, and which are available to our conscious minds as things like “face.” Much of the rest of the book is spent on describing details of how the cortical layers are wired to make all this feedforward and feedback possible, and you should read the book if you are interested enough.
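To make the “names for sequences” idea slightly more concrete, here is a toy sketch rather than Hawkins’s actual cortical algorithm: each “region” memorizes a few short sequences of the names arriving from the region below, and once a known sequence completes it passes a single stable name upward. The region labels, sequences, and tokens are invented for the illustration.

    # A toy hierarchy, not the real cortical wiring: every region does the
    # same thing -- recognize a learned sequence of names from below and
    # emit one stable name for it to the region above.
    class Region:
        def __init__(self, known_sequences):
            self.known = known_sequences   # {name: tuple of lower-level names}
            self.buffer = []

        def receive(self, token):
            """Accumulate input from below; return a higher-level name when
            a known sequence completes, else None (still 'listening')."""
            self.buffer.append(token)
            for name, seq in self.known.items():
                if tuple(self.buffer[-len(seq):]) == seq:
                    self.buffer.clear()
                    return name
            return None

    # A V2-like region turns edge fragments into contours; an IT-like
    # region turns contours into an object name.
    v2 = Region({"eye-contour": ("edge-arc", "edge-arc"),
                 "mouth-contour": ("edge-line", "edge-curve")})
    it = Region({"face": ("eye-contour", "eye-contour", "mouth-contour")})

    for fragment in ["edge-arc", "edge-arc",        # first eye
                     "edge-arc", "edge-arc",        # second eye
                     "edge-line", "edge-curve"]:    # mouth
        contour = v2.receive(fragment)
        if contour and it.receive(contour):
            print("IT reports: face")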

HIERARCHIES AGAIN

As I mentioned six weeks ago when I wrote Part I of this column, complexity in design (whether done by humans or by natural selection) is achieved through hierarchies which build layer upon layer of complexity. Hawkins takes this idea further and says that the neocortex is built as a hierarchy because the world is hierarchical, and the job of the brain, after all, is to model the world. For example, a person is usually made of a head, torso, arms, legs, etc. The head has eyes, a nose, a mouth, etc. A mouth has lips, teeth, and so on. In other words, since eyes and a nose and a mouth occur together most of the time, it makes sense to give this regularity in the world (and in the visual field) a name: “face.” And this is what the brain does.

Have a good week! My other Monday Musing columns can be seen here.



Monday, May 1, 2006

Monday Musing: What Wikipedia Showed Me About My Family, Community, and Consensus

Like a large number of people, I read and enjoy wikipedia. For many subjects on which I need to quickly get a primer, it’s good enough, at least for my purposes. I also just read it to see the ways the articles on some topics expand (such as Buffy the Vampire Slayer), but mostly to see how some issues cease to be disputed over time and congeal (the entries on Noam Chomsky are a case in point), and to witness the institutionalization of what was initially envisioned to be an open and rather boundless form (in fact there’s a page on its policies and guidelines with a link to a page on how to propose policies). For someone coming out of political science, it’s intriguing.

To understand why, just look at wikipedia’s “Official Policy” page.

Our policies keep changing, and their interpretation as well. Hence it is common on Wikipedia for policy itself to be debated on talk pages, on Wikipedia: namespace pages, on the mailing lists, on Meta Wikimedia, and on IRC chat. Everyone is welcome to participate.

While we try to respect consensus, Wikipedia is not a democracy, and its governance can be inconsistent. Hence there is disagreement between those who believe rules should be explicitly stated and those who feel that written rules are inherently inadequate to cover every possible variation of problematic or disruptive behavior.

In either case, a user who acts against the spirit of our written policies may be reprimanded, even if no rule has technically been violated. Those who edit in good faith, show civility, seek consensus, and work towards the goal of creating a great encyclopedia should find a welcoming environment.

Its own self-description points to the complicated process, the uncertainties, and the tenuousness of forming rules to make desirable outcomes something other than completely random. Outside the realm of formal theory, how institutions create outcomes, and especially how they interact with environmental factors, cultural elements, and psychology, is, well, one of the grand sets of questions that constitute much of the social sciences. All the more complicating for wikipedia is that its fifth key rule or “pillar” is that “wikipedia doesn’t have firm rules”.

Two of these rules or guidelines have worked to create an odd effect. The first is a “neutral point of view”, by which wikipedia (which reminds us that it is not a democracy) means a point of view “that is neither sympathetic nor in opposition to its subject. Debates are described, represented, and characterized, but not engaged in.” The second is “consensus”. The policy page on “consensus” is short. It largely discusses what “consensus” is not.

“Consensus” is, of course, a tricky concept when fleshed out. To take a small aspect, people in agreement need not have the same reasons, or reasons of equal force. One person may agree that democracy is a good thing because anything else would require too much time and effort in selecting the smartest, most benevolent dictator, etc., while another may believe that democracy is a good thing because it represents a polity truly expressing a collective and autonomously formed judgment. Sometimes consensus means agreeing not just on positions, but also on reasons and the steps between the two. In wikipedia’s case, it seems to consist of reducing debate to “x said”-“y said” pairs and an enervation of issues that are points of deep disagreement.

One interesting consequence has been that the discussion pages, free of the “neutral point of view” and “consensus” requirements, have become sites of contest, often for “cites of contest”. Perhaps more interestingly, they unintentionally demonstrate what can emerge in an open discussion without the neutrality and consensus constraints.

I was struck by this possibility a few weeks ago when I was looking up Syrian Orthodox Christians, trying to unearth some information on the relationship between two separate (sub?)denominations of the church. The reason is not particularly relevant and had more to do with curiosity about different parts of my family and the doctrinal and political divides among some of them. (We span Oriental Orthodox-reformed, Oriental Orthodox, and Eastern Catholic sects, and it gets confusing who believes what.)

While looking up the various entries on the Syrian Orthodox Church and the Syro-Malabar Catholic Church, I came across a link to an entry on Knanayas. Knanayas are a set of families, an ethnic (or is it sub-ethnic?) community within the various Syriac Nasrani sects in South India, one to which I also belong.

The entry itself was interesting, at least to me.

Knanaya Christians are descendants of 72 Judeo-Christian families who migrated from Edessa (or Urfa), the first city state that embraced Christianity, to the Malabar coast in AD 345, under the leadership of a prominent merchant prince Knai Thomman (in English, Thomas of Cana). They consisted of 400 people men, women and children, from various Syriac-Jewish clans…Before the arrival of the Knanaya people, the early Nasrani people in the Malabar coast included some local converts and largely converted Jewish people who had settled in Kerala during the Babylonian exile and after…The Hebrew term Knanaya or K’nanaim, also known as Kanai or Qnana’im, (for singular Kanna’im or Q’nai) means “Jealous ones for god”. The K’nanaim people are the biblical Jews referred to as Zealots (overly jealous and with zeal), who came from the southern province of Israel. They were deeply against the Roman rule of Israel and fought against the Romans for the soverignity of the Jews. During their struggle the K’nanaim people become followers of the Jewish sect led by ‘Yeshua Nasrani’ (Jesus the Nazarene).

Some of this history I’d known; other parts, such as our being allegedly descendants of the Qnana’im, I did not. Searching through the pages on these topics, what struck me most was nothing on the entry pages, but rather a single comment on the discussion pages. It read:

I object to the Bias of this page. We Knanaya are not all Christians, only the Nasrani among us are Christians. Can you please tone down the overtly Christian propaganda on this page and focus more on us as an ethnic group. Thankyou. [sic]

With that line, images of my family’s community shifted. It also revealed something about the value of disagreement, and of not forcing consensus.

Ram, who writes for 3QD, explored multiculturalism, cultural rights, and group conflict in his dissertation. He is fairly critical of the concept and much of the surrounding politics, as I am. Specifically, he doesn’t believe that there are any compelling reasons for using public policy and public effort to preserve a culture, even a minority culture under stress. For a host of reasons, some compelling, Ram believes that minority cultures can reasonably ask for assistance in adjusting, but cannot reasonably ask the rest to preserve their way of life. One reason he offers, and one with which I agree, is that a community is often (perhaps eternally) riddled with conflicts about the identity, practices and makeup of the community itself. These conflicts often reflect distributions of power and resistance, internal majorities and minorities, and movements for reform and reactions in defense of privilege. Any move by public power to maintain a community is to take a side, often the side of the majority. (Now, the majority may be right, but it certainly isn’t the role of public power to decide.)

But the multicultural sentiment is not driven by a desire to side with established practices within a community at the expense of dissidents and minorities. Rather, it’s driven by a false idea that there’s more consensus than there is within the group. The image is furthered by the fact that official spokesmen, usually religious figures, are seen as the authoritative figures for all community issues and not merely for religious rites, and by the fact that minorities such as gays and lesbians are labeled as shaped or corrupted by the “outside”. Forced consensus in other areas, I suspect, suffers from similar problems.

When articles on wikipedia were disputed more frequently, the discussion pages were, if anything, more filled with debate. Disputes have not simply been displaced onto discussion pages; and if those pages have become more interesting, it is only relatively so. Ever since political philosophy, political theory and the social sciences developed an interest in ideal speech situations, veils of ignorance, and deliberation in the 1970s, there’s been a fetish made of consensus. Certainly, talking to each other is generally better than beating each other up, but the idea of driving towards agreement may be doing a disservice to politics itself. It was for that reason that I was quite pleased by the non-Christian Knanaya charging everyone else with bias.

Happy Monday and Happy May Day.

Old Bev: Global Warning

Issues 1-3 of n+1 feature a section titled “The Intellectual Situation” which “usually scrutinizes the products of our culture and the problems of everyday life.”  (A typical scrutiny, from Issue 2: “A reading is like a bedside visit. The audience extends a giant moist hand and strokes the poor reader’s hair.”) But in Issue 4, out today, the magazine’s editors, worried “that our culture and everyday life may not exist in their current form much longer,” take a break from topics like dating and McSweeney’s and devote the section to “An Interruption”: Chad Harbach’s summary of global warming. It’s a startling essay because, unlike writings on the same subject by researchers, politicians, economists, and scientists, Harbach claims absolutely no personal authority and offers little analysis of the particulars of the situation.  Instead, he’s scared, and thinks you should be too.  And you shouldn’t be scared just of the hurricanes, but of the nice days as well:

Our way of life that used to seem so durable takes on a sad, valedictory aspect, the way life does for any 19th-century protagonist on his way to a duel that began as a petty misunderstanding.  The sunrise looks like fire, the flowers bloom, the morning air dances against his cheeks.  It’s so incongruous, so unfair!  He’s healthy, he’s young, he’s alive – but he’s passing from the world.  And so are we, healthy and alive – but our world is passing from us. 

Harbach longs for the days before he knew what carbon dioxide and methane do to our climate; he doesn’t seem to resent the “way of life that used to seem so durable” as much as he does the fact that he knows it is no longer durable, and is forced to watch it progress.  It’s the coupling of access to knowledge and lack of agency that feeds Harbach’s nightmare.  And the nightmare is compelling because it doesn’t come from a journalist who has gone to the ice caps or a scientist who has gone to the ice caps or a politician who has gone to the ice caps.  It comes from a guy who has read about the ice caps on the internet.  It’s as if the 21st century protagonist has Googled his duel and learned the outcome, but must nevertheless continue on his way, unsure when he’ll meet the opponent. 

Or if he’ll meet him at all.  It takes a minimum of 40 years for some burned fuels to affect the climate, Harbach informs us.  In a sense, we’re living our grandfather’s dreams, and dreaming our granddaughter’s days. Where we, in the present, fit in is murky.  How can emergency rhetoric operate in a discussion that holds its outcomes so far in the future, and its causes so far in the past?  Harbach acknowledges that the “long lag is the feature that makes global warming so dangerous,” but his own warning is urgent, finite, and is positioned by his editors as a brief perforation with no past or future.  The essay’s marked as “An Interruption” in the regular “Intellectual Situation,” signaling both that the content is important enough to warrant the reader’s immediate attention, and that that very attention is transient. In Issue 5, the editors imply, “The Intellectual Situation” will return to its usual treatment of “problems of everyday life.”  What Harbach wants, however, is for global warming to be the everyday problem.  But what language can convey that, when the warning is always about tomorrow?

Global warming certainly isn’t a practical concern for most Americans.  It’s practical to be concerned about events like hurricanes and tornados and floods, but global warming – whether there will be more hurricanes in the next century than in this one – isn’t enough of a practical concern to make any difference in the voting booth.  Of course, gay marriage certainly isn’t a practical concern for most Americans either.  Most Americans aren’t gay, and I can’t think of a single American who would be practically threatened by a gay marriage.  But the language surrounding the issue – one of tangible emergency, one of assault on today’s family – makes the issue practical.  It suggests that the marriages of heterosexual partners are instantly destabilized and undermined at the moment when same-sex partners marry.  Political power is gained, in that case, by constructing immediate personal threat.

Harbach takes an opposite approach – he tries to construct threat by unleashing a torrent of imagined future problems so awful and so overwhelming that they seem present.  It’s a solid strategy because he executes it so well, but my lasting feeling was selfish – I’ll die before the shit hits the fan.  Environment-related language rarely confers personal threat.  Guilt, perhaps, but almost never threat.  Environmental Protection Agency?  The environment doesn’t get scared or vote.  Natural Resources Defense Council?  Natural resources don’t get mad or donate money.  Voters are selfish, and to issue a call to arms about global warming you’ve either got to convince them to care about the earth, care about their grandchildren, or get them nervous about themselves here and now. Bush exploited this last strategy in his State of the Union address when he warned that “America is addicted to oil,” implying a human weakness and illness that had to be cured, and fast.  Addiction is also a personal subject for the President; he is a born-again Christian who kicked his booze habit and can therefore kick oil, too.  He held Americans, not the earth, as today’s victims.

Toward the end of his essay, Harbach addresses the “addiction” to oil: “This [the transition to renewable energy] is the responsibility incumbent on us, and its fulfillment could easily be couched in the familiar, voter-friendly language of American leadership, talent, and heroism.”  It’s true, it could be easily couched that way – but what seems to keep Harbach himself up at night are global warming doomsday scenarios, not American heroism. “Addicted to Oil” plays on these nightmares.  Perhaps it’s time that the NRDC and company did too.

Rx: Harvey David Preisler

The Moving Finger writes; and, having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a line,
Nor all your Tears wash out a Word of it.

Omar Khayyam

Harvey died on May 19th 2002, at 3:20 p.m. The cause of death was chronic lymphocytic leukemia/lymphoma. Death approached Harvey twice: once at the age of 34, when he was diagnosed with his first cancer, and then, after years of living under the shadow of a relapse, when he was over the fear, a second and final time four years ago. He met both with courage and grace. In these trials, he showed how a man so enthralled by life can be at peace with death. Harvey did not seek refuge in visions of heaven or a life after death. I only saw him waver once. When, in 1996, our daughter Sheherzad developed a high fever and a severe asthmatic attack at the age of two, Harvey’s anxiety was palpable. After hours of taking turns in the Emergency Room, rocking and carrying her little body connected to the nebulizer, as she finally dozed off, he asked me to step outside. In the silence of a hot, still Chicago night, he said in a tormented voice, “If something happens to her I am going to kill myself because of the very remote chance that those fundamentalists are right and there is a life after death. I don’t want the little one to be alone”.

Truth is what mattered most to Harvey. He faced it and accepted it. When I would become upset by the intensely painful nature of his illness, Harvey was always calm and matter of fact, “It’s the luck of the draw, Az. Don’t distress yourself over it for a second”. It was an acceptance of the human condition with quiet composure. “We are all tested. But it is never in the way we prefer, nor at the time we expect.” W. B. Yeats was puzzled by the question:

The intellect of man is forced to choose
Perfection of the life, or of the work.

Fortunately for Harvey, it was never a question of either/or. For him, work was life. Once, towards the end, when I asked him to work less and maybe do other things that he had not had the time for before, his response was that such an act would make a mockery of everything he had stood for and done until that point in his life. Work was his deepest passion outside of the family. Three days before he died, Harvey had a lab meeting at home with more than 20 people in attendance, and he went over each individual’s scientific project with his signature genuine interest and boyish enthusiasm. Even as he clearly saw his own end approach, Harvey remained hopeful that rigorous research would bring a better future to other unfortunate cancer victims.

Harvey grew up in Brooklyn and obtained his medical degree from the University of Rochester. He trained in Medicine at New York Hospital–Cornell Medical Center, and in Medical Oncology at the National Cancer Institute. At the time of his death, he was the Director of the Cancer Institute at Rush University in Chicago and the Principal Investigator of a ten-million-dollar grant from the National Cancer Institute (NCI) to study and treat acute myeloid leukemias (AML), in addition to several other large grants which funded his research laboratory, with approximately 25 scientists devoted entirely to basic and molecular research. He published extensively, including more than 350 full-length papers in peer-reviewed journals, 50 books and/or book chapters, and approximately 400 abstracts.

Harvey loved football with a passion that was only matched by mine for poetry. He was exceedingly anti-social and worked actively to avoid company while I had a considerable social circle and was almost always surrounded by friends and extended family. If you saw the two of us going out to dinner, you would have been confused; I looked dressed for a dinner at the White House while Harvey could have been taking the trash out. We met in March 1977 and did not match in age (I was 24, he was 36), status (I was single and a fresh medical graduate waiting to start my Residency, he was married with three children and the Head of the Leukemia Service), or religion (I was a Shia Muslim, he came from an Orthodox Jewish family, and his grandfather was a Rabbi). Yet, we shared a core set of values that made us better friends than we had ever been with another soul.

Harvey liked to tell a story about his first scientific experiment. He was four years old, living in Brooklyn, and went to his backyard to urinate. To his surprise, a worm emerged from the little puddle. He promptly concluded that worms came from urine. In order to prove his hypothesis, he went back the next day and repeated the experiment. To his satisfaction, another worm appeared from the puddle just as before, providing reproducible proof that worms came from urine, a belief he steadfastly hung on to until he was nine years old. An interesting corollary is the explanation for this phenomenon provided by his then six-year-old daughter Sheherzad some years ago. As he gleefully recounted his experiment, she pointed out matter-of-factly, “Of course, Daddy, if there were worms living in your favorite peeing spot, they would have to float up because of the water you were throwing on them!”

Harvey was an exceptionally gifted child whose IQ could not be measured by the standardized tests given to the Midwood High students in Brooklyn. He was experimenting with little chemistry sets and making home-made rockets at six years of age, and had read so much in biology and physics that he was excused from attending these classes throughout high school. He decided to study cancer at 15 years of age as a result of an early hypothesis he developed concerning the etiology of cancer, and he never wavered from this goal until he died. Harvey worked with some of the best minds in his field; his mentors included Phil Leder, Paul Marks, Charlotte Friend, Sol Spiegelman and James Holland. Harvey started his career in cancer by conducting pure molecular and cellular research, for a time concentrating on leukemias in rats and mice, but decided that it was more important to study freshly obtained human tumor cells and conduct clinical research, since man must remain the measure of all things. Accordingly, he served his patients with extraordinary dedication, consideration, and respect, and manifested a deep understanding of the unspeakable tragedies they and their families face once a diagnosis of cancer is given to them. Harvey exercised supreme wisdom in dealing with cancer patients as well as in trying to understand the nature of the malignant process. He not only succeeded in providing better treatment options to patients, he also devoted a lifetime to nourishing and training young and hopeful researchers, providing them with inspiration, selfless guidance and protection so they could achieve their potential in the competitive and combative academic world. As a result, he was emulated and cherished enormously as a leader, original thinker, and beloved mentor by countless young scientists and physicians. In acknowledgment of his tireless efforts to inspire and challenge young students, especially those belonging to minority communities or coming from impoverished backgrounds, Harvey was given the Martin Luther King Junior Humanitarian Award by the Science and Math Excellence Network of Chicago in 2002. Unfortunately, he was too sick to receive it in person; nonetheless, he was greatly moved by this honor.

Harvey traveled extensively to see the works of the great masters first hand. He returned to Florence, Milan and Rome on an annual basis for years to see some of his favorites: the statue of Moses, Michelangelo’s Unfinished Statues, the Sistine Chapel. He would travel to Amsterdam to visit the Van Gogh Museum, and to Paris so he could show little Sheherzad his beloved Picassos. His three greatest heroes were Moses, Einstein and Freud, and his study in every home we shared (Buffalo, Cincinnati and Chicago) had beautiful framed pictures of all three. Harvey had a curious mind, and read constantly. His areas of interest ranged from Kafka and Borges to physics, astronomy, psychology, anthropology, history, evolutionary biology, complexity, fuzzy logic, chaos, paleoanthropology, the American Civil War, theology, politics, biographies, the social sciences, and science fiction. His books number in the thousands. The breadth of his encyclopedic knowledge in so many areas, combined with his ability to use it in a manner appropriate to the time and the occasion, often astonished and delighted those who had serious discussions with him.

From Mark (Harvey’s son from his first marriage):

Our Dad was not a sentimental man. He was ever the scientist. Emotions clouded reason…and if you cannot see reason you may as well be blind. But Dad did have a side few were lucky enough to see. While he was always practical… He truly was an emotional man. He stood up for his beliefs and he never backed down. One of those beliefs was that it was important to die with dignity. No complaints, despite all the pain. He didn’t want to be a burden to his children or his wife. He never was. Azra said it best: Taking care of him was an honor, never a burden. There’s a Marcus Aurelius quote he often spoke of: “Death stared me in the face and I stared right back.” Dad, you certainly did.

More than anything our Father was a family man. He cherished us and we cherished him. He often thanked us for all the days and nights spent by his side, but I told him there was no need for thanks. None of us could have been anywhere else. He and I often discussed his illness. He once asked me why he should keep fighting…what good was there in it? I told him his illness had brought our family much closer together. He smiled and said he was glad something good came of it.

Azra, he adored you. He often told me it was love at first sight. You two shared a love that only exists in fairy tales. Dad could be unconscious but still manage a smile when you walked into the room. I have never seen anything like it and I feel privileged to have witnessed your devotion to each other. The way you took care of him is inspiring. You never left his side and you refused to let him give up. No one could have done anything more for him and he knew it. He was very lucky to find you.

While going through his wallet I was shocked to find a piece of paper folded up in the back. On it were two quotes written in his own pen. I’d like to share one with you. “There isn’t much more to say. I have had no joy, but a little satisfaction from this long ordeal. I have often wondered why I kept going. That, at least I have learned and I know it now at the end. There could be no hope, no reward. I always recognized that bitter truth. But I am a man and a man is responsible for himself.” (The words of George Gaylord Simpson). Our Father died Sunday, May 19th at 3:20 in the afternoon. His family lives on with a love and closeness that will make him proud. Pop, we love you. You were our best friend. We will miss you every day.

And thus Harvey lived, and thus he died. Proud to the end.

Death be not proud, though some have called thee
Mighty and dreadfull, for, thou art not so,
For, those, whom thou think'st, thou dost overthrow,
Die not, poore death, nor yet canst thou kill me.
From rest and sleepe, which but thy pictures bee,
Much pleasure, then from thee, much more must flow,
And soonest our best men with thee doe goe.

–John Donne

Monday, April 24, 2006

Talking Pints: Eurobashing, Some French Lessons

I came to the US in 1991, shortly after Francis Fukuyama penned his famous “End of History and the Last Man” essay. Though much contested at the time, Fukuyama’s contention that there was only one option on the menu after the end of the Cold War – capitalism über alles – seemed, from my European social democratic perspective, worryingly prescient. After all, Europe’s immediate policy response was the Maastricht Treaty. Yet a moment’s reflection should have shown me that there was nothing inevitable about this victory of capitalism. As Karl Polanyi demonstrated, the establishment of capitalism was a political act, not a result of impersonal economic forces. And just as Lenin thought historical materialism needed his helping hand, it was reasonable to suppose that Fukuyama and those following him didn’t want to leave capitalism’s final triumph in Europe to the mere logic of (Hegelian) history. Post-Cold War capitalism needed a helping hand in the form of reinforcing a new message: that while some kind of social democratic ‘Third Way’ between capitalism and socialism, the European Welfare State (EWS), was tolerable during the Cold War, now that it was over, such projects were no longer desirable, or even possible.

As a consequence, following the Japan-bashing that was so popular in the US in the 1980s, Euro-bashing came to prominence in the 1990s. A slew of research was produced by US authors claiming that in this new world of ‘globalization’, time was up for the ‘bloated welfare states’ of Europe. Unable to tax and spend without provoking capital flight, EWS’s faced the choice of fundamental reform (becoming just like the US) or withering and dying. Fundamental reform was, of course, some combination of privatization, inflation control, tight monetary policy, fiscal probity, more flexible labor markets and tax cuts. Some EWS’s embraced these measures during the 1990s, some did not, but interestingly, none of them died. In contrast to the dire predictions of the Euro-bashers, the ‘bloated old welfare states of Europe’ continued on their way. Such claims for the ‘end of the EWS’ have been made consistently, in fact almost endlessly, from 1991 until today, with apparently no ill effects.

Imagine then the sheer joy of the Euro-bashers upon finding the French (the bête noire of all things American and efficient) rioting in the streets to protect their right not to be fired, in the face of unemployment rates of almost 20 percent for those under 25. Was this not proof that the EWS had finally gone off the rails? John Tierney in the New York Times obviously thought so, arguing that “when French young adults were asked what globalization meant to them, half replied, ‘Fear.’” Likewise Washington Post columnist Robert Samuelson opined, “Europe is history’s has-been. …Unwilling to address their genuine problems, Europeans become more reflexively critical of America. This gives the impression that they’re active on the world stage, even as they’re quietly acquiescing in their own decline.” Strong claims, but how the French employment law debacle was reported in the US was as enlightening as the fact that it was given such coverage at all; not as the final proof of the EWS’s impending collapse, but as evidence of the strange myths, falsehoods, and half-assed reporting about Europe that are consistently passed off as fact in US commentary.

Consider the article by Richard Bernstein in the New York Times entitled “Political Paralysis: Europe Stalls on the Road to Economic Change”. In this piece Bernstein argues that Scandinavian states have managed to cut back social protections and thereby step up growth, and that Germany under Schroeder managed to push through “a sharp reduction of unemployment benefits” that “have now made a difference.” Note the causal logic in both statements: if you cut benefits, you get growth and employment. The problem is that both statements are flatly incorrect. Scandinavian countries have in many cases increased, rather than decreased, employment protections in recent years, and the German labor market reforms have indeed “made a difference”: German unemployment is now higher than ever, and the German government could cut benefits to zero and it probably would not make much of a difference to the unemployment rate. But reporting things this way wouldn’t signal the impending death of Europe. It wouldn’t fit the script. In fact, an awful lot of things about European economies are misreported in the US. The following are my particular favorites.

  • Europe is drowning in joblessness
  • Europe has much lower growth than the US
  • European productivity is much lower than that of the US

Let’s take each of these in turn:

Unemployment: It is certainly true that some European states currently have higher unemployment than the US – Germany, France and Italy being the prime examples – and it is commonly held that this is the result of inflexible labor markets. The story is, however, a bit more complex than this. First of all, ‘European unemployment’, if you think about it, is an empty category. Seen across a twenty-year period, US unemployment is sometimes lower and sometimes higher than averaged-out European unemployment, and varies most with overall macroeconomic conditions. Consider that modern Europe contains oil-rich Norwegians, poor Italian peasants, and unemployable post-communist Poles. The UK was deregulating its financial sector at the same time as Spain and Ireland were shedding agricultural labor. As such, not only is the category of ‘Europe’ empty; to speak of European unemployment is misleading at best.

Moreover, contrary to what Euro-bashers argue, the relationship between labor market flexibility and employment performance appears to run in exactly the opposite direction to that maintained. As David Howell notes, historically, “lower skilled workers in the United States have had…far higher unemployment rates relative to skilled workers than has been the case in…Northern European nations.” If so, one can hardly blame European unemployment on labor market rigidities since no such rigidities applied to these unemployed low-skilled Americans.

Indeed, the US’s superior unemployment performance may have less to do with the ‘flexibility’ and efficiency of its labor markets than is usually admitted. Bruce Western and Katherine Beckett argue that “criminal justice policy [in the US] provides a significant state intervention with profound effects on employment trends.” Specifically, with $91 billion spent on courts, police and prisons, in contrast to $41 billion on unemployment benefits, since the early 1990s, the United States government distorts the labor market as much as any European state.

Western and Beckett used Bureau of Justice Statistics data to recalculate US adult male employment performance by including the incarcerated in the total labor pool. Taking 1995 as a typical year, the official unemployment rate was 7.1 percent for Germany and 5.6 percent for the US. Once recalculated to include inmates in both countries, however, German unemployment rises to 7.4 percent while US unemployment rises to 7.5 percent. If one adds to this equation the post-9/11 effects of a half-trillion-dollar annual defense budget and 1.6 million people of working age under arms, then it may well be that the US’s own labor markets are hardly as free and flexible as is often imagined, and that the causes of its low unemployment lie elsewhere than in labor market flexibility.
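For readers who want to see how such a recalculation works, here is a back-of-the-envelope sketch of Western and Beckett’s adjustment: count inmates as jobless members of an enlarged labor pool. The round numbers below are illustrative stand-ins chosen to roughly reproduce the 1995 rates quoted above, not the actual Bureau of Justice Statistics figures.

    # Sketch of the incarceration adjustment; the inputs are illustrative
    # round numbers (millions, adult men), not Western and Beckett's data.
    def adjusted_unemployment(unemployed, labor_force, inmates):
        """Treat inmates as jobless members of an enlarged labor pool."""
        return (unemployed + inmates) / (labor_force + inmates)

    us_labor, us_unemployed, us_inmates = 70.0, 3.9, 1.45
    de_labor, de_unemployed, de_inmates = 22.0, 1.56, 0.07

    print(f"US: official {us_unemployed / us_labor:.1%}, "
          f"adjusted {adjusted_unemployment(us_unemployed, us_labor, us_inmates):.1%}")
    print(f"Germany: official {de_unemployed / de_labor:.1%}, "
          f"adjusted {adjusted_unemployment(de_unemployed, de_labor, de_inmates):.1%}")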

Growth: Germany and France in particular do have very real problems with unemployment, but these have very little to do with the flexibility of labor markets and a lot to do with the lack of growth. Take the case of Germany, the unemployment showcase of Euroland. From the mid-1990s until today its unemployment performance was certainly worse than the US’s, but it had also just bought, at a hopelessly inflated price, a redundant country of 17 million people (East Germany). It then integrated these folks into the West German economy, mortgaging the costs of doing so all over the rest of Europe via super-high interest rates that flattened Continental growth. Add to this the further contractions caused by Germany’s adherence to the sado-monetarist EMU convergence criteria, and follow this up with the establishment of a central bank for all of Europe determined to fight an inflation that had died 15 years previously, and yes, you will have low growth, and this will impact employment. And yes, it is a self-inflicted wound. And no, it has nothing to do with labor markets and welfare states. Germany is not Europe, however, and should not be confused with it. The Scandinavian countries have all posted solid growth performances over the past several years, as have many of the new accession states.

Productivity: It is worth noting that a high employment participation rate and long working hours are seen in the US as a good thing. This is strange, however, when one considers that according to economic theory, the richer a country gets, the less it is supposed to work. This is called the labor-leisure trade-off, which the US seems determined to ignore. That Americans work many more hours than Europeans is pretty much all that explains the US’s superior productivity. As Brian Burgoon and Phineas Baxandall note, “in 1960 employed Americans worked 35 hours a year less than their counterparts in the Netherlands, but by 2000 were on the job 342 hours more.” By the year 2000, liberal regime hours [the US and the UK] were 13 percent more than in the social democratic countries [Denmark, Sweden and Norway], and 30 percent more than in the Christian Democratic countries [Germany, France, Italy]. Indeed, thirteen percent of American firms no longer give their employees any vacation time apart from statutory holidays. The conclusion? Europe trades off time against income. The US gets more plasma TVs and Europeans get to pick up their kids from school before 7pm. But the US is still more productive – right? Not quite.

Taking 1992 as a baseline year (index 100) and comparing the classic productivity measure – output per employed person in manufacturing – the US posts impressive figures, moving from an index of 100 to 185.6 in 2004. Countries that beat this include Sweden, the ‘bloated welfare state’ par excellence, with an index value of 242.6. France’s figure of 150.1 is about 20 percent less than the US’s, but considering that the average Frenchman works 30 percent less than the average American, the bad news for the Euro-bashers is that France is arguably just as productive; it simply trades off output against time.
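A quick back-of-the-envelope calculation, taking the index figures and the rough 30 percent hours gap as the article uses them, shows why: France’s lower output per employed person is more than accounted for by shorter hours, so output per hour comes out at least as high.

    # Rough per-hour comparison from the figures quoted above; the 0.70
    # hours ratio is the article's round number, not an exact statistic.
    us_index, fr_index = 185.6, 150.1    # output per employed person, 2004 (1992 = 100)
    fr_hours_ratio = 0.70                # French hours as a share of US hours (approx.)

    per_person = fr_index / us_index             # ~0.81, i.e. ~19% lower per person
    per_hour = per_person / fr_hours_ratio       # ~1.16, i.e. higher per hour

    print(f"France vs US, output per person: {per_person:.2f}")
    print(f"France vs US, output per hour:   {per_hour:.2f}")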

Equality and Efficiency: Most importantly, such comparatively decent economic performance has been achieved without the rise in inequality seen in the US. To use a summary measure of inequality, the GINI coefficient, which runs from 0 (perfect equality) to 1 (perfect inequality), the US went from a GINI of 0.301 in 1979 to 0.372 in 1997, an increase of almost 24 percent. Among developed states, only the UK beats the US in achieving a greater growth in inequality over the same period. While the US and the UK have seen large increases in income inequality, much of Europe has not. France, for example, held inequality essentially flat, with a GINI of 0.293 in 1979 and 0.298 in 1994. Germany reduced its GINI from 0.271 to 0.264 between 1973 and 2000, as did the Netherlands, which went from 0.260 to 0.248 between 1983 and 1999. Moreover, the enormous increase in wealth inequality seen in the US has not been matched in Europe. While wealth inequality has increased in some countries such as Sweden, it has done so from such a low baseline that such states are still far more egalitarian today than the US was at the end of the 1970s. Today, the concentration of wealth in the US looks like that of pre-war Europe, while contemporary Europe looks more like the post-war United States.
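For readers unfamiliar with the measure, the GINI coefficient has a simple definition: the average absolute income gap between every pair of people, scaled by twice the mean income. A small sketch with an invented income distribution (only the formula is standard; the numbers are made up):

    # GINI coefficient from scratch: sum of |x_i - x_j| over all ordered
    # pairs, divided by 2 * n^2 * mean income. Incomes below are invented.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        total_gap = sum(abs(x - y) for x in incomes for y in incomes)
        return total_gap / (2 * n * n * mean)

    print(gini([10, 10, 10, 10]))      # 0.0  -- perfect equality
    print(gini([0, 0, 0, 40]))         # 0.75 -- one person holds everything
    print(gini([12, 18, 25, 40, 80]))  # ~0.36, in the ballpark of the figures above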

Given all this, why then is Europe given such a bad press? Given space constraints I can only hazard some guesses. The intellectual laziness and lack of curiosity of the US media plays a part, as does the sheer fun of saying “we’re number one!” over and over again, I guess. What is also important is what John R. MacArthur of Harper’s Magazine noted in his response to the Tierney column discussed above; “As Tierney’s ideological predecessor (and former Republican press agent) William Safire well understood, when things get rough for your side, it’s useful to change the subject.”

Given this analysis, Euro-bashing, like Japan-bashing before it, contains two lessons. The first is that the desire to engage in such practices probably says more about the state of the US economy than it does about the economy being bashed. The second is that while Europe does indeed have some serious economic problems, the usual suspects accused of causing those problems are really quite removed from the scene of the crime.

Sojourns: True Crime 2

Rape is unique among crimes because its investigation so often turns on the question of whether a crime actually happened. Was there or was there not a rape, did she or did she not consent, was she or was she not even able to consent? These sorts of questions are rarely asked about burglary or murder. And rarely do those accused of burglary or murder respond that such crimes didn’t happen (OJ didn’t say Nicole wasn’t killed, just that he didn’t do it). Most criminal investigations accordingly turn up a culprit who then defends him or herself by saying that he or she did not commit the crime. In contrast, most widely publicized rape cases involve culprits denying that a crime took place. She was not raped; we had consensual sex. Or, she was not raped; we didn’t have sex at all. And so, despite the best intentions of state legislatures and women’s advocacy groups, the prosecution of rape cases still often turns its attention to the subjective state of the victim. She consented at the time and now has changed her mind. She has made the entire story up out of malice or revenge or insanity.

The ongoing story at Duke University exacerbates these basic features of rape law in several respects. Most obviously, it places the ordinary uncertainty of such a case in a whirlwind of publicity. As is often so in rape cases, the story is at root about whether or not a crime happened. Every detail of the incessant reporting has circulated around and over a core piece of unknown data: not whether the woman consented to have sex, but whether anyone actually had sex at all. All of the attention paid to DNA testing in the early stages of the investigation was in the hope that this question might be answered. Human testimony is fallible. Science is not. Or so we are told by shows like CSI, with their virtuoso forensic detectives. And so we are led to believe by the well-publicized use of DNA testing in recent years to exonerate and incriminate defendants past and present. As it turns out, however, the DNA testing in this case only adds to the uncertainty. According to the District Attorney, most rape cases involve no DNA evidence at all. We thus await the evidence of her body itself, the sort of specific damage wrought by forcible sex. Her body will tell us the truth, and so get us out of the back and forth of merely verbal accusations and denials.

Even this highly pitched sense of mystery and uncertainty is ultimately the ordinary stuff of well-publicized rape cases. Were the story only about crime or no-crime, the American desire for closure and distaste for the open-ended or the unsure would eventually kill off interest. What is distinctive about the Duke story is the particularly delicate politics of race and the specific context of college athletics. About the former, little more need be said than the obvious. The story is about a twenty-seven-year-old African American mother working her way through a historically black college who has accused three white students from the nearby elite university of rape. On this accusation rest several hundred years of history. Were this not so highly charged an accusation, the defendants’ strategy would surely be more corrosive than it has thus far been. Rape law places unusual and often unpleasant (and unfair) burdens on the subjective position of the victim, on her sense of her own consent or her reliability as a witness. Thus Kobe Bryant was exonerated because his accuser was traduced in public. We have (thankfully) seen little of this so far in the Duke case, even though the accuser is an exotic dancer with a criminal record who worked for an escort service. The predictable course of events would be for the defendants to claim the accuser is deranged and unreliable and, as far as possible under state law, to bring in the shadier aspects of the woman’s employment and criminal history to do so. That this hasn’t happened, or hasn’t happened yet, is revealing about the way in which race works in public discourse.

Of course the story drew the kind of attention that it did at first not because the accuser was black but because the accused were lacrosse players at a major university. What has emerged is something like the dirty secret of athletics at an elite institution. Like Stanford or Michigan, Duke has always maintained a double image as at once an extremely selective, prestigious institution of higher learning and a powerhouse in several key sports (especially basketball). Unlike the Ivy League schools, Duke and Stanford actively recruit athletes and provide them with full athletic scholarships. They also maintain a vigorous booster culture of fans and alumni. The result is a separate culture for “student” athletes, who don’t really have to take the same classes as everyone else, and who are apparently coddled in lifestyles of abuse and debauchery.

The agreed-upon facts of the case ought to be seen in this light. The lacrosse team threw a party for themselves and hired two exotic dancers from a local escort service. The dancers arrived and performed their routine surrounded by a ring of taunting and beer-drinking men. Alone and without security, they complained of their treatment and left. They were coaxed back inside. One claims to have been raped. Whatever sexual assault may or may not have taken place, the facts of the case are set against the backdrop of an aggressive Neanderthalism that is precisely the sort of thing a university should be designed to counter.

As with most accusations of rape, the legal case is certain to revolve around the question of whether a crime happened. The coverage will most likely turn to a predictable discussion of credibility combined with new revelations about the accuser and defendants’ relative truthfulness. One shouldn’t forget what this case has already revealed.

Temporary Columns: Nationalism and Democracy

I was invited by Dr. Luis Rodriguez-Pineiro to give a lecture to his class on the History of Law at the Universidad de Sevilla. Dr. Pineiro works on indigenous rights and has just published a book, “Indigenous Peoples, Postcolonialism and International Law”. He asked me to speak on issues related to national identity and political democracy. I have, like many of us, struggled with these issues, intellectually and politically. Why is it hard to avoid discussions of ethnonational identity when we talk about political democracy? Why do those who advocate nationalism, particularly nationalism of the ethnic variety, tend politically to persist, if not to out-maneuver those who advocate a more neutral form of political community, when it comes to defining the state? Or, more simply, why is it so hard for us to avoid some allusion to national culture in our discussions of political community?

Democracy is a theory about how we ought to treat each other if we live in the same political community. It describes the rules through which we may engage with each other, i.e., the powers our rulers may have over us, and the rights we may have against them. These are well developed and argued in democratic theory, and they are familiar to most of us – the basic rights of expression, association, and conscience, and the right to vote and elect representatives of our choice who may form a government. Political thinkers have given these issues much thought. They have described and argued in great detail how we ought to regulate ourselves politically and what claims we may make against each other or the state. We may indeed differ about the nature of these powers – libertarians might think all that is required is to protect some basic liberties, while social democrats may argue that what we need is a state that taxes the rich and transfers money to the poor. Whatever their disagreements – which are indeed plenty – libertarians and social democrats do not disagree that what they are talking about is the political regulation of the relationship among citizens within a political community.

While they have well developed theories and debates about internal regulation of a political community, neither social democrats nor libertarians have anything close to a theory about the boundaries of a political community. Their theories developed over hundreds of years fail to tell us what the limits of a political community are. For example, if Sri Lanka and India are indeed democracies, why shouldn’t they be one country ruled from Colombo? This is where nationalism comes in.

Nationalism is a theory about the boundaries of the political community, i.e., who is in and who is out. Nationalism argues that the political community, if it is not to be simply an accident of history or an agglomeration of unconnected social groups, needs to be based on something more. That something more is the way of life of a group of people, defined by language, religion, region or culture. This way of life or culture precedes the political community that is founded upon it. Of course nationalist theories differ on what ought to form the basis of the political community. The Zionists, the Wahhabis, and the Hindutvas believe that it should be religion. The Catalans, the Tamils, and the French believe it should be language, and so on. Whatever the problems with these efforts at constructing a political community, they do have some theory about the boundaries of such a community. But nationalism has no theory about the rules and regulations that govern the interaction among members of a political community. These members could live in a dictatorship, a democracy or even a monarchy.

As social democrats who believe in combining social equality and political freedom, we have an inadequate answer to the question of whom we should share this freedom and equality with. One answer – the world – is insufficient. It is too vague and abstruse, because it allows us to get away from the actual concrete commitments – such as taxing and redistributing – that are required by such sharing. The other answer – that we should share with those who are like us or close to us – seems both too concrete and too narrow. Should it be with those who speak like us, live near us and look like us? We are uncomfortable with this response because the instinct animating it seems to foster intolerance and inequality.

So whether we like it or not, nationalism finds a way to creep into our theories of political democracy because of the silence of political theory about the boundaries of a political community. As a political theorist, I am troubled by this silence intellectually and may look for answers to it. As a political activist I am sympathetic to this silence, wish to nurture it, and maybe even require it of my fellow citizens. I am wary that probing it too much may lead to the kind of answers that make it harder for me to make the case for sharing power, wealth, and space with those who happen to live together with me in the same political community as citizens, even if they do not look like me, speak my language, or pray to my gods.

monday musing: minor thoughts on cicero

Cicero may very well have been the first genuine asshole. He wasn’t always appreciated as such. During more noble and naïve times, people seem to have accepted his rather moralistic tracts like ‘On Duties’ and ‘On Old Age’ as untainted wisdom handed down through the eons. This, supposedly, was a gentle man doing his best in a corrupted age. It was easier to swallow that kind of interpretation during Medieval and early Renaissance times because many of his letters had been lost and forgotten. But Petrarch found some of them again around 1345 and the illusion of Cicero’s detached nobility became distinctly more difficult to pass off. Reading his letters, you can’t help but feel that Cicero really was a top-notch asshole. He schemed and plotted with the best of them. His hands were never anything but soiled.

Now, I think it may be clear that I come to praise Cicero, not to bury him. Even in calling him an asshole I’m handing him a kind of laurel. Because it is the particular and specific way that he was an asshole that picks him up out of history and plunks him down as a contemporary, as someone even more accessible after over two thousand years than many figures of the much more recent past. Perhaps this is a function of the way that history warps and folds. The period of the end of the Roman Republic in the last century BC speaks to us in ways that even more recent historical periods do not. Something about its mix of corruption and verve, cosmopolitanism and rank greed, self-destructiveness and high-minded idealism causes the whole period to leap over itself. And that is Cicero to a ‘T’. He is vain and impetuous, self-serving and conniving. He lies and cheats and he puffs himself up in tedious speech after tedious speech. It’s pretty remarkable. But he loved the Republic for what he thought it represented and he dedicated his life, literally, to upholding that idea in thought and in practice.

In what may be the meanest and most self-aggrandizing public address of all time, the Second Philippic Against Antony, Cicero finds himself (as usual) utterly blameless and finds Antony (as usual) guilty of almost every crime imaginable. It’s a hell of a speech, called a ‘Philippic’ because it was modeled after Demosthenes’ speeches against King Philip of Macedon, which were themselves no negligible feat in nasty rhetoric.

One can only imagine the electric atmosphere around Rome as Cicero spilled his vitriol. Caesar had only recently been murdered. Sedition and civil war were in the air. Antony was in the process of making a bold play for dictatorial power. Cicero, true to his lifelong inclinations, opposes Antony in the name of the restoration of the Republic and a free society. In his first Philippic, Cicero aims for a mild rebuke against Antony. Antony responds with a scathing attack. This unleashes the Second Philippic. “Unscrupulousness is not what prompts these shameless statements of yours,” he writes of Antony, “you make them because you entirely fail to grasp how you are contradicting yourself. In fact, you must be an imbecile. How could a sane person first take up arms to destroy his country, and then protest because someone else had armed himself to save it?”

Cicero’s condescension is wicked. “Concentrate, please—just for a little. Try to make your brain work for a moment as if you were sober.” Then he gets nasty. Of Antony’s past: “At first you were just a public prostitute, with a fixed price—quite a high one too. But very soon Curio intervened and took you off the streets, promoting you, you might say, to wifely status, and making a sound, steady, married woman of you. No boy bought for sensual purposes was ever so completely in his master’s powers as you were in Curio’s.”

Cicero finishes the speech off with a bit of high-minded verbal self-sacrifice:

Consider, I beg you, Marcus Antonius, do some time or other consider the republic: think of the family of which you are born, not of the men with whom you are living. Be reconciled to the republic. However, do you decide on your conduct. As to mine, I myself will declare what that shall be. I defended the republic as a young man, I will not abandon it now that I am old. I scorned the sword of Catiline, I will not quail before yours. No, I will rather cheerfully expose my own person, if the liberty of the city can be restored by my death.

May the indignation of the Roman people at last bring forth what it has been so long laboring with. In truth, if twenty years ago in this very temple I asserted that death could not come prematurely upon a man of consular rank, with how much more truth must I now say the same of an old man? To me, indeed, O conscript fathers, death is now even desirable, after all the honors which I have gained, and the deeds which I have done. I only pray for these two things: one, that dying I may leave the Roman people free. No greater boon than this can be granted me by the immortal gods. The other, that every one may meet with a fate suitable to his deserts and conduct toward the republic.

If the lines are a bit much, remember that Cicero was to be decapitated by Antony’s men not long afterward, and, for good measure, to have his tongue ripped out of his severed head by Antony’s wife, so that she might get final revenge on his powers of speech. It’s not every asshole that garners such tributes.

***

Around the time that he re-discovered some of Cicero’s letters, Petrarch started writing his own letters to his erstwhile hero. In the first, Petrarch writes,

Of Dionysius I forbear to speak; of your brother and nephew, too; of Dolabella even, if you like. At one moment you praise them all to the skies; at the next fall upon them with sudden maledictions. This, however, could perhaps be pardoned. I will pass by Julius Caesar, too, whose well-approved clemency was a harbour of refuge for the very men who were warring against him. Great Pompey, likewise, I refrain from mentioning. His affection for you was such that you could do with him what you would. But what insanity led you to hurl yourself upon Antony? Love of the republic, you would probably say. But the republic had fallen before this into irretrievable ruin, as you had yourself admitted. Still, it is possible that a lofty sense of duty, and love of liberty, constrained you to do as you did, hopeless though the effort was. That we can easily believe of so great a man. But why, then, were you so friendly with Augustus? What answer can you give to Brutus? If you accept Octavius, said he, we must conclude that you are not so anxious to be rid of all tyrants as to find a tyrant who will be well-disposed toward yourself. Now, unhappy man, you were to take the last false step, the last and most deplorable. You began to speak ill of the very friend whom you had so lauded, although he was not doing any ill to you, but merely refusing to prevent others who were. I grieve, dear friend at such fickleness. These shortcomings fill me with pity and shame. Like Brutus, I feel no confidence in the arts in which you are so proficient.

Indeed, it seems that Cicero was just a fickle man looking out for Number One, and maybe he’d stumble across a little glory in the process. Still, even that isn’t entirely fair. As Petrarch admits in his disappointed letter, some concept of the Republic and human freedom was driving Cicero all along. But the Republic was always a sullied thing, even from the beginning. The concept of freedom was always mixed up with self-interest and the less-than-pure motivations of human creatures. Cicero got himself tangled up in the compromised world of political praxis precisely because he was uninterested in a concept of freedom that hovered above the actual world with practiced distaste and a permanent scowl. I like to think of him as an asshole because I like to think of him as one of us, neck-deep in a river of shit and trying his best to find a foothold, one way or another. Dum vita est, spes est (‘While there’s life, there’s hope’).

Monday, April 17, 2006

Lunar Refractions: “Our Biggest Competitor is Silence”

I really wish I had the name of the Muzak marketer who provided this quote as it appeared in the 10 April issue of the New Yorker magazine. Silence is one of my dearest, rarest, companions, and this marketer unexpectedly emphasized its power by crediting it as the corporation’s chief competitor—no small role for such a subtle thing.

My initial, instinctual, and naturally negative reply was that, though this claim might be comforting to some, it’s also dead wrong. In most places, silence lost the battle long ago. A common strain that now unites what were once very disparate places and cultures seems to be the increasing endangerment—and in some cases extinction—of silence. I think about this a lot, especially living in a place where for much of the day loud trucks idle at length below my apartment, providing an aggravating background hum that I’ve never quite managed to relegate to the background. I lost fifteen minutes the other day fuming about the cacophonous chorus of car alarm, cement truck, and blaring car radio that overpowered whatever muffling defense my thin windows could lamely offer, not to mention the work I was trying to concentrate on. I’d buy earplugs, but noise of this caliber is also a physical, pounding presence. I admit that this sensitivity is my own to deal with, but something makes me doubt I’m alone in New York; in certain neighborhoods, often outside hospitals, there are signs posted along the street: “Unnecessary Noise Prohibited.” I wonder who defines the term unnecessary, and how. Other signs warn drivers that honking the car horn in certain areas can be punished with hefty fines. A couple of years ago the same magazine cited above ran a piece—I believe it was in the Talk of the Town section—covering a local activist working to ban loud car alarms. Since silent alarms are now readily available, and have proven more effective, there really is no need for these shrill alarms. My absolute favorites are those set off by the noise of a passing truck, just as one apartment-dweller might crank up the volume on the stereo to drown out a neighbor’s noise. Aural inflation runs rampant.

But the comment of the Muzak marketer wasn’t enough to get me to set fingers to keyboard; what finally did it was a day-hike I took in the hills of the upper Hudson valley on Easter Sunday. I almost thought twice about escaping the city on this holiday, since—no matter how agnostic, multicultural, or 24/7 this city might be—such days always bring a rare calm. For just a few precious hours we’re spared the sound of garbage trucks carrying our trash away from us while replacing it with a different sort of pollution, and spared many other noisy byproducts of our so-called progress. As I was walking through the woods, a wind kicked up, rustling the leaves packed down by winter snow, and I was reminded of just how loud the sound of wind through bare tree branches overhead can be. Most people would probably say that wind in trees is quieter, and less disturbing, than more urban sounds, but I was reminded yesterday that that isn’t always the case.

So I set out to briefly investigate silence—why some people can’t seem to find any, why so many do everything in their power to rid themselves of it, and why many just don’t seem to give it any thought, unobtrusive as it is. It has played a major role in many religions, from the tower of silence of Persian Zoroastrianism to the Trappist monks’ vows of silence; one could speculate, in a cursory way, that the rise of secular culture was accompanied by a rise in volume. I came across a curious coincidence while checking out the etchings of Manet recently that would support such a conclusion. While the painter of Olympia has often been called the least religious of painters, an etching of his done around 1860 (in the print collection of the New York Public Library) portrays a monk, tablet or book in hand and finger held to lips, with the word Silentium scrawled below. Given the connotative relationship between silence and omission, oblivion, and death, Manet’s etching has interesting implications for both silence and religion as they were seen in nineteenth-century Paris. If not secularization, perhaps industrialization ratcheted everything up a few decibels.

Silence—of both good and bad sorts—runs through everything, leaving traces throughout many languages. There are silent films, which exist only thanks to a former lack of technology, and were usually accompanied by live music. Some people’s ideal mate is a classic man of the strong, silent type—adjectives never jointly applied to a woman. A silentiary is (well, was, since I doubt many people go into such a line of work nowadays) a confidant, counselor, or official who maintains silence and order. Cones of silence appear in politics, radar technology, nineteen-fifties and sixties television shows, and science fiction novels. After twenty years of creating marvelous music out of what could be derogatively deemed noise, the band Einstürzende Neubauten came out with both a song and album titled “Silence is Sexy.” Early on the band’s drummer, Andrew Chudy, adopted the name N. U. Unruh—a wild play on words that can be connected to a German expressionist poet and playwright, a piece of timekeeping equipment, and, aptly, a riff on the theme of disquiet or unrest.

Getting back to my stroll in the woods, when considering the peace and quiet of a holiday I inevitably turn to poet Giacomo Leopardi’s songs in verse. His thirteenth canto (“La sera del dì di festa,” “The Evening of the Holiday”) laments the sad, weighty quietness left after a highly anticipated holiday. The falling into silence of a street song at the end is a death knell for the past festivities. In keeping with this, his twenty-fifth canto (“Il sabato del villaggio,” “Saturday Night in the Village”) praises Saturday’s energetic sounds of labor in preparation for the Sunday holiday, saving only melancholy words for the day of rest itself and its accompanying quiet. I don’t wish to summarize his rich and very specific work, so I encourage you to have a look at it for yourself. That these were written across an ocean and over a century ago attests that silence is not golden for everyone. Were he to live today, Leopardi might well be one of the iPod-equipped masses.

When I found that Leopardi’s opinion differed from my own, I looked to another trustworthy poet for a little support in favor of my own exasperation. Rainer Maria Rilke, in his famous fifth letter to the less famous young poet, written in the autumn of 1903, is evidently dependent on silence:

“… I don’t like to write letters while I am traveling, because for letter writing I need more than the most necessary tools: some silence and solitude and a not too familiar hour…. I am still living in the city… but in a few weeks I will move into a quiet, simple room, an old summerhouse, which lies lost deep in a large park, hidden from the city, from its noises and incidents. There I will live all winter and enjoy the great silence, from which I expect the gift of happy, work-filled hours….”

To break the tie set by Leopardi and Rilke, I turned to another old friend for comfort, and was surprised to find none. Seneca, in his fifty-sixth letter to Lucilius, asserts that it is the placation of one’s passions, not external silence, that gives true quiet:

“May I die if silence is as necessary as it would seem for concentration and study. Look, I am surrounded on every side by a beastly ruckus…. ‘You’re a man of steel, or you’re deaf,’ you will tell me, ‘if you don’t go crazy among so many different, dissonant noises…’. Everything outside of me might just as well be in an uproar, as long as there is no tumult within, and as long as desire and fear, greed and luxury don’t fight amongst themselves. The idea that the entire neighborhood be silent is useless if passions quake within us.”

In this letter he lists the noises that accompany him on a daily basis: the din of passing horse-drawn carriages, port sounds, industrial sounds (albeit those of the first century), neighborhood ball players, singing barbers, numerous shouting street vendors, and even people “who like to hear their own voices as they bathe.” It sounds as though he’s writing from the average non-luxury apartment of today’s cities. His point that what’s important is interior calm, not exterior quiet, exposed my foolishness.

À propos of Seneca and serenity, a friend of mine recently bought an iPod. A year ago we had a wonderful conversation where she offered up her usual, very insightful criticisms of North American culture: “What is wrong with this country? Everyone has a f****** iPod, but so few people have health insurance! Why doesn’t anyone rebel, or even seem to care?” As I walked up to meet her a couple of weeks ago I spotted from afar the trademark white wires running to each ear. “I love this thing. I mean, sure, I don’t think at all anymore, but it’s great!” To say that this brilliant woman doesn’t think anymore is crossing the line, but it’s the perfect hyperbole that nears the truth; if you can fill your ears with constant diversion, emptying the brain is indeed easier. The question, then, is what companies like Muzak and their clients can then proceed to fill our minds with if we’re subject to their sounds.

This relates to the ancient sense of otium as well—Seneca’s idea that creativity and thought need space, room, or an empty place and time in which to truly develop. Simply defining it as leisure time or idleness neglects its constructive nature. The idea that, when left at rest, the mind finds or creates inspiration for itself, and from that develops critical thought, is key to why I take issue with all this constructed, mass-marketed sound and “audio architecture.” While it might seem that an atmosphere filled with different stimuli and sounds would spark greater movement, both mental and physical, I think we’ve reached the point where that seeming activity is just that—an appearance, and one that sometimes hides a great void.

In closing, for those interested, we may finally be able to give credit to the Muzak marketer who inspired me. On Tuesday, 18 April, John Schaefer will discuss Muzak on WNYC’s Soundcheck. In the meantime, I’ll leave you with a gem from the September 1969 issue of Poppin magazine. In music critic Mike Quigley’s interview with Alice Cooper, the latter discussed what he’s looking for between himself and the audience: “If it’s total freedom, I guess the ultimate thing you can go into is total silence between the audience and performer, with the performer projecting something he doesn’t even have to play. A total silence trip is the ultimate.” Even Muzak can’t counter that.

Selected Minor Works: Of the Proper Names of Peoples, Places, Fishes, &c.

Justin E. H. Smith

When I was an undergraduate in the early 1990s, an outraged student activist of Chinese descent announced to a reporter for the campus newspaper: “Look at me! Do I look ‘Oriental’? Do you see anything ‘Oriental’ about me?  No. I’m Asian.”  The problem, however, is that he didn’t look particularly ‘Asian’ either, in the sense that there is nothing about the sound one makes in uttering that word that would have some natural correspondence to the lad’s physiognomy.  Now I’m happy to call anyone whatever they want to be called, even if personally I prefer the suggestion of sunrises and sunsets in “Orient” and “Occident” to the arbitrary extension of an ancient (and Occidental) term for Anatolia all the way to the Sea of Japan.  But let us be honest: the 1990s were a dark period in the West to the extent that many who lived then were content to displace the blame for xenophobia from the beliefs of the xenophobes to the words the xenophobes happened to use.  Even Stalin saw that to purge bourgeois-sounding terms from Soviet language would be as wasteful as blowing up the railroad system built under the Tsar.

In some cases, of course, even an arbitrary sound may take on grim connotations in the course of history, and it can be a liberating thing to cast an old name off and start afresh.  I am certainly as happy as anyone to see former Dzerzhinsky Streets changed into Avenues of Liberty or Promenades of Multiparty Elections.  The project of pereimenovanie, or re-naming, was as important a cathartic in the collapsed Soviet Union as perestroika, or rebuilding, had been a few years earlier.  If the darkest period of political correctness is behind us, though, this is in part because most of us have realized that name-changes alone will not cut it, and that a real concern for social justice and equality that leaves the old bad names intact is preferable to a cosmetic alteration of language that allows entrenched injustice to go on as before – pereimenovanie without perestroika.

But evidently the PC coffin could use a few more nails yet, for the naive theory of language that guided the demands of its vanguard continues to inform popular reasoning as to how we ought to go about calling things.  Often, it manifests itself in what might be called pereimenovanie from the outside, which turns Moslems into Muslims, Farsi into Persian, and Bombay into Mumbai, as a result of the mistaken belief on the part of the outsiders that they are thereby, somehow, getting it right.  This phenomenon, I want to say, involves not just misplaced moral sensitivity, but also a fundamental misunderstanding of how peoples and places come by their names. 

Let me pursue these and a few other examples in detail.  These days, you’ll be out on your ear at a conference of Western Sinologists if you say “Peking” instead of “Beijing.”  Yet every time I hear a Chinese person say the name of China’s capital city, to my ear it comes out sounding perfectly intermediate between these two.  Westerners have been struggling for centuries to come up with an adequate system of transliteration for Chinese, but there simply is no wholly verisimilar way to capture Chinese phonology in the Latin alphabet, an alphabet that was not devised with Chinese in mind, indeed that had no inkling of the work it would someday be asked to do all around the world.  As Atatürk showed with his Latinization of Turkish, and Stalin with his failed scheme for the Cyrillicization of the Baltic languages, alphabets are political as hell. But decrees from the US Library of Congress concerning transliteration of foreign alphabets are not of the same caliber as the forced adoption of the Latin or Cyrillic scripts.  Standardization of transliteration has more to do with practical questions of footnoting and cataloguing than with the politics of identity and recognition.

Another example.  In Arabic, the vowel between the “m” and the “s” in the word describing an adherent of Islam is a damma.  According to Al-Ani and Shammas’s Arabic Phonology and Script (Iman Publishing, 1999), the damma is “[a] high back rounded short vowel which is similar to the English ‘o’ in the words ‘to’ and ‘do’.”  So then, “Moslem” or “Muslim”?  It seems Arabic itself gives us no answer to this question, and indeed the most authentic way to capture the spirit of the original would probably be to leave the vowel out altogether, since it is short and therefore, as is the convention of Arabic orthography, unwritten.

And another example.  Russians refer to Russia in two different ways: on the one hand, it is Rus’, which has the connotation of deep rootedness in history, Glagolitic tablets and the like, and is often modified by the adjective “old”; on the other hand it is Rossiia, which has the connotation of empire and expanse, engulfing the hunter-gatherers of Kamchatka along with the Slavs at the empire’s core.  Greater Russia, as Solzhenitsyn never tires of telling us, consists in Russia proper, as well as Ukraine (the home of the original “Kievan Rus'”), and that now-independent country whose capital is Minsk.  Minsk’s dominion is called in German “Weissrussland,” and in Russian “Belorussiia.”  In other words, whether it is called “Belarus” or “Belorussia” what is meant is “White Russia,” taxonomically speaking a species of the genus “Russia.”  (Wikipedia tells us that the “-rus” in “Belarus” comes from “Ruthenia,” but what this leaves out is that “Ruth-” itself is a variation on “Rus’,” which, again, is one of the names for Muscovite Russia as well as the local name for White Russia.)

During the Soviet period, Americans happily called the place “Belorussia,” yet in the past fifteen years or so, the local variant, “Belarus,” has become de rigueur for anyone who might pretend to know about the region.  Of course, it is admirable to respect local naming practices, and symbolically preferring “Belarus” over “Belorussia” may seem a good way to show one’s pleasure at the nation’s newfound independence from Soviet domination. 

However (and here, mutatis mutandis, the same point goes for Mumbai), I have heard both Americans and Belarusans say the word “Belarus,” and I daresay that when Americans pronounce it, they are not saying the same word as the natives.  Rather, they are speaking English, just as they were when they used to say “Belorussia.”  Moreover, there are plenty of perfectly innocuous cases of inaccurate naming.  No one has demanded (not yet, anyway) that we start calling Egypt “Misr,” or Greece “Hellas.”  Yet this is what we would be obligated to do if we were to consistently employ the same logic that forces us to say “Belarus.”  Indeed, even the word we use to refer to the Germans is a borrowing from a former imperial occupier – namely, the Romans – and has nothing to do with the Germans’ own description of themselves as Deutsche.

In some cases, such as the recent demand that one say “Persian” instead of “Farsi,” we see an opposing tendency: rather than saying the word in some approximation of the local form, we are expected to say it in a wholly Anglicized way.  I have seen reasoned arguments from (polyglot and Western-educated) natives for the correctness and sensitivity of “Mumbai,” “Persian,” “Belarus,” and “Muslim,” but these all have struck me as rather ad hoc, and, as I’ve said, the reasoning for “Persian” was just the reverse of the reasoning for “Mumbai.”  In any case, monolingual Persian speakers and residents of Mumbai themselves could not care less. 

Perhaps the oddest example of false sensitivity of this sort comes not in connection with any modern ethnic group, but with a race of hominids that inhabited Europe prior to the arrival of Homo sapiens and were wiped out by the newcomers about 29,000 years ago.  In the 17th century, one Joachim Neumann adopted the Hellenized form of his last name, “Neander,” and proceeded to die in a valley that subsequently bore his name: the Neanderthal, or “the valley of the new man.”  A new man, of sorts, was found in that very valley two centuries later, to wit, the Homo neanderthalensis.

Now, as it so happens, “Thal” is the archaic version of the German word “Tal.”  Up until the very recent spelling reforms imposed at the federal level in Germany, vestigial “h”s from earlier days were tolerated in words, such as “Neanderthal,” that had an established record of use.  If the Schreibreform had been slightly more severe, we would have been forced to start writing “Göte” instead of the more familiar “Goethe.”  But Johann Wolfgang was a property the Bundesrepublik knew it dared not touch. The “h” in “Neanderthal” was, however, axed, but the spelling reform was conducted precisely to make German writing match up with German speech: there never was a “th” sound in German, as there is in English, and so the change from “Thal” to “Tal” makes no phonetic difference.

We have many proper names in North America that retain the archaic spelling “Thal”, such as “Morgenthal” (valley of the morning), “Rosenthal” (valley of the roses), etc., and we happily pronounce the “th” in these words as we do our own English “thaw.”  Yet, somehow over the past ten years or so Americans have got it into their heads that they absolutely must say Neander-TAL, sans voiceless interdental fricative, as though this new standard of correctness had anything to do with knowledge of prehistoric European hominids, as though the Neanderthals themselves had a vested interest in the matter.  I’ve even been reproached myself, by a haughty, know-it-all twelve-year-old, no less, for refusing to drop the “th”. 

The Neanderthals, I should not have to point out, were illiterate, and the presence or absence of an “h” in the word for “valley” in a language that would not exist until several thousand years after their extinction was a matter of utter indifference to them.  Yet doesn’t the case of the Neanderthal serve as a vivid reductio ad absurdum of the naive belief that we can set things right with the Other if only we can get the name for them, in our own language, right?  The names foreigners use for any group of people (or prehuman hominids, for that matter) can only ever be a matter of indifference for that group itself, and it is nothing less than magical thinking to believe that if we just get the name right we can somehow tap into that group’s essence and refer to them not by some arbitrary string of phonemes, but as they really are in their deepest and truest essence. 

This magical thinking informs the scriptural tradition of thinking about animals, according to which the prelapsarian Adam named all the different biological kinds not with arbitrary sounds, but in keeping with their true natures.  Hence, the task of many European naturalists prior to the 18th century was to rediscover this uncorrupted knowledge of nature by recovering the lost language of Adam, and thus, oddly enough, zoology and Semitic philology constituted two different domains of the same general project of inquiry.

Some very insightful thinkers, such as Gottfried Leibniz, noticed that ancient Hebrew too, just like modern German, is riddled with corrupt verb forms and senseless exceptions to rules, and sharply inferred from this that Hebrew was no more divine than any vulgate.  Every vocabulary human beings have ever come up with to refer to the world around them has been nothing more than an arbitrary, exception-ridden, haphazard set of sounds, and in any case the way meanings are produced seems to have much more to do with syntax – the rules governing the order in which the sounds are put together – than with semantics – the correspondence between the sounds and the things in the world they are supposed to pick out.

This hypercorrectness, then, is ultimately not just political, but metaphysical as well.  It betrays a belief in essences, and in the power of language to pick these out.  As John Dupré has compellingly argued, science educators often end up defending a supercilious sort of taxonomical correctness when they declaim that whales are not fish, in spite of the centuries of usage of the word “fish” to refer, among other things, to milk-producing fish such as whales.  The next thing you know, smart-ass 12-year-olds are lecturing their parents about the ignorance of those who think whales are fish, and another generation of blunt-minded realists begins its takeover.  Such realism betrays too much faith in the ability of authorities –whether marine biologists, or the oddly prissy postmodern language police in the English departments– to pick out essences by their true names.  It is doubtful that this faith ever did much to protect anyone’s feelings, while it is certain that it has done much to weaken our descriptive powers, and to take the joy out of language. 

Negotiations 7: Channeling Britney

(Note: Jane Renaud wrote a great piece on this subject last week. I hope the following can add to the conversation she initiated.)

When I first heard of Daniel Edwards’ Britney sculpture (Monument to Pro-Life), I was fascinated. What a rich stew: a pop star whose stock-in-trade has been to play the innocent/slut (with rather more emphasis on the latter) gets sculpted by a male artist as a pro-life icon and displayed in a Williamsburg gallery! Gimmicky, to be sure; nonetheless, the overlapping currents of Sensationalism, Irony and Politics were irresistible, so I took myself out to the Capla Kesting Fine Art Gallery on Thursday to have a look.

I am not a fan of pop culture. My attitude toward it might best be characterized as Swiss. In conversation, I tend to sniff at it. “Well,” I have been known to say, “it may be popular, but it’s not culture.” I do admit to a lingering fondness for Britney, but that has less to do with her abilities as chanteuse than it does with the fact that, as a sixteen-year-old boy, I moved from the WASPy northeast to Nashville, Tennessee and found myself studying in a seraglio of golden-haired, pig-tailed, Catholic schoolgirls, each one of them a replica of early Britney and each one of them, like her, as common and as unattainable as a species of bird. What can I say? I was sixteen. Despise the sin, not the sinner.

I was curious to know the extent to which this sculpture would be a monument to pop culture—did the artist, Daniel Edwards, fancy himself the next Jeff Koons?—and surprised to discover that, having satisfied my puerile urges (a surreptitious glance at the breasts, a disguised study of the money shot), my experience of the piece was in no way mediated by my awareness that its model was a pop star. “Britney Spears” is not present in the piece, and its precursor is not Koons’ Michael Jackson and Bubbles or Warhol’s silk-screens of Marilyn Monroe. One has to go much further back than that. Its precursor is actually Michelangelo’s Pietà.

In both cases, the spectacular back story (Mary with dead Christ on her lap, Britney with Sean’s head in her cooch) is overwhelmed by the temporal event that grounds it; so that the Pietà is nothing more (nor less) than Mother and Dead Son, and Monument to Pro-Life becomes simply Woman Giving Birth. Where Koons and Warhol empty the role of the artist as creative genius and replace it with artist as mirror to consumer society, Edwards (and Michelangelo well before him) empties the divine (the divinity of Christ, the divinity of the star) and replaces it with the human. Edwards, then, is doing something very tricky here, and if one can stomach the nausea-inducing gimmickry of the work, there’s a lot worth considering.

First of all is the composition of the work. The subject is on all fours, in a position that, as Jane Renaud wryly observed in these pages last week, might be more appropriate for getting pregnant than for giving birth. She is on a bear-skin rug; her eyes are heavily lidded, her lips slightly parted, as though she might be about to moan or to sing. And yet the sculpture is in no way pornographic or even titillating. There is nothing on her face to suggest either pain or ecstasy. The person seems to be elsewhere, even if her body is present, and the agony we associate with childbirth is elsewhere. In fact, with her fingers laid gently into the ears of the bear, not clutching or tearing at them, she seems to be channeling all her emotions into its head. Its eyes are wide open, its mouth agape and roaring. The subject is emptying herself, channeling at both ends, serenely so, a Buddha giving birth, without tension at the front end and without blood or tearing at the rear. The child’s head emerges as cleanly, and as improbably, as a perfect sphere from a perfect diamond. This is a revolution in birthing. Is that the reward for being pro-life? Which brings us to the conceptual component of Monument to Pro-Life.

To one side of the sculpture stands a display of pro-life literature. You cannot touch it; you cannot pick it up; you cannot read it even if you wanted to because it is in a case, under glass. This is not, I think, because there is not enough pro-life literature to go around, and it hints at the possibility that the artist is being deliberately disingenuous, that he is commenting both on the pro-life movement and on its monumental aspirations. The sculpture is out there in the air, naked and exposed, while the precious literature is encased and protected. Shouldn’t it be the other way around? It’s almost as if the artist is saying, “This is the pro-life movement’s relationship to women: It is self-interested and self-preserving; and in its glassed-in, easy righteousness it turns them into nothing more than vessels, emptying machines. It prefers monuments to mothers, literature to life.”

Now lest you think that I am calling Daniel Edwards the next Michelangelo, let me assure you that I most definitely am not. As conceptually compelling as I found Monument to Pro-Life to be, I also found it aesthetically repugnant. Opinions are like assholes—everybody has one—but this sculpture is hideous to look at. It’s made of fiberglass, for god’s sake, which gives it a reddish, resiny cast, as though the subject had been poached, and a texture which made me feel, just by looking at it, that I had splinters under my fingernails. I know we all live in a post-Danto age of art criticism, that ideas are everything now, and that the only criterion for judging a work of art is its success in embodying its own ideas; but as I left the gallery I couldn’t help thinking of Plato and Diogenes. When Plato defined man as a “featherless biped,” the Cynic philosopher is said to have flung a plucked chicken into the classroom, crying “Here is Plato’s man.” Well, here is Danto’s art. With a price tag of $70,000, which it will surely fetch, he can have it.

Monday Musing: The Palm Pilot and the Human Brain, Part II

Part II: How Brains Might Work

Two weeks ago I wrote the first part of this column in which I made an attempt to explain how it is that we are able to design very complex machines like computers: we do it by employing a hierarchy of concepts, each layer of which builds upon the layer below it, ultimately allowing computers to perform seemingly miraculous tasks like beating Garry Kasparov at chess at the highest levels of the hierarchy, while all the way down at the lowest layers, the only thing going on is that some electrons are moving about on a tiny wafer of silicon according to simple physical rules. [Photo shows Kasparov in Game 2 of the match.] I also tried to explain what gives computers their programmable flexibility. (Did you know, for example, that Deep Blue, the computer which drove Kasparov to hair-pulling frustration and humiliation in chess, now takes reservations for United Airlines?)

But while there is a difference between understanding something that we ourselves have built (we know what the conceptual layers are because we designed them, one at a time, after all) and trying to understand something like the human brain, designed not by humans but by natural selection, there is also a similarity: brains also do seemingly miraculous things, like the writing of symphonies and sonnets, at the highest levels, while near the bottom we just have a bunch of neurons connected together, digitally firing (action potentials) away, again, according to fairly simple physical rules. (Neuron firings are digital because they either fire or they don’t–like a 0 or a 1–there is no such thing as half of a firing or a quarter of one.) And like computers, brains are also very flexible at the highest levels: though they were not designed by natural selection specifically to do so, they can learn to do long-division, drive cars, read the National Enquirer, write cookbooks, and even build and operate computers, in addition to a million other things. They can even turn “you” off, as if you were a battery operated toy, if they feel they are not getting enough oxygen, thereby making you collapse to the ground so that gravity can help feed them more of the oxygen-rich blood that they crave (you know this well, if you have ever fainted).

To understand how brains do all this, this time we must attempt to impose a conceptual framework on them from the outside, as it were; a kind of reverse-engineering. This is what neuroscience attempts to do, and as I promised last time, today I would like to present a recent and interesting attempt to construct just such a scaffolding of theory on which we might stand while trying to peer inside the brain. This particular model of how the brain works is due to Jeff Hawkins, the inventor of the Palm Pilot and the Treo Smartphone, and a well-respected neuroscientist. It was presented by him in detail in his excellent book On Intelligence, which I highly recommend. What follows here is really just a very simplified account of the book.

Let’s jump right into it then: Hawkins calls his model the “Memory-Prediction” framework, and its core idea is summed up by him in the following four sentences:

The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. (On Intelligence, p. 6)

Hawkins focuses mainly on the neocortex, which is the part of the brain responsible for most higher level functions such as vision, hearing, mathematics, music, and language. The neocortex is so densely packed with neurons that no one is exactly sure how many there are, though some neuroscientists estimate the number at about thirty billion. What is astonishing is to realize that:

Those thirty billion cells are you. They contain almost all your memories, knowledge, skills, and accumulated life experience… The warmth of a summer day and the dreams we have for a better world are somehow the creation of these cells… There is nothing else, no magic, no special sauce, only neurons and a dance of information… We need to understand what these thirty billion cells do and how they do it. Fortunately, the cortex is not just an amorphous blob of cells. We can take a deeper look at its structure for ideas about how it gives rise to the human mind. (Ibid., p. 43)

The neocortex is a thin sheet consisting of six layers which envelops the rest of the brain and is folded up in a crumpled way. This is what gives the brain its walnutty appearance. (If completely unfolded, it would be quite thin–only a couple of millimeters–and would cover an area about the size of a large dinner napkin.) Now, while the neocortex looks pretty much the same everywhere with its six layers, different regions of it are functionally specialized. For example, Broca’s area handles the rules of linguistic grammar. Other areas of the neocortex have also been mapped out functionally in quite some detail by techniques such as looking at brains with localized damage (due to stroke or injury) and seeing what functions are lost in the patient. (Antonio Damasio presents many fascinating cases in his groundbreaking book Descartes’ Error.) But while everyone else was looking for differences in the various functional areas of the cortex, a very interesting observation was made by a neurophysiologist named Vernon Mountcastle (I was fortunate enough to attend a brilliant series of lectures by him on basic physiology while I was an undergraduate!) at Johns Hopkins University in 1978: he noticed that all the different regions of the neocortex look pretty much exactly the same, and have the same structure, whether they process language or handle touch. And he proposed that since they have the same structure, maybe they are all performing the same basic operation, and that maybe the neocortex uses the same computational tool to do everything. Mountcastle suggested that the only difference among the various areas is how they are connected to each other and to other parts of the nervous system. Now Hawkins says:

Scientists and engineers have for the most part been ignorant of, or have chosen to ignore, Mountcastle’s proposal. When they try to understand vision or make a computer that can “see,” they devise vocabulary and techniques specific to vision. They talk about edges, textures, and three-dimensional representations. If they want to understand spoken language, they build algorithms based on rules of grammar, syntax, and semantics. But if Mountcastle is correct, these approaches are not how the brain solves these problems, and are therefore likely to fail. If Mountcastle is correct, the algorithm of the cortex must be expressed independently of any particular function or sense. The brain uses the same process to see as to hear. The cortex does something universal that can be applied to any type of sensory or motor system. (Ibid., p. 51)

The rest of Hawkins’s project now becomes laying out in detail what this universal algorithm of the cortex is, how it functions in different functional areas, and how the brain implements it. First he tells us that the inputs to various areas of the brain are essentially similar and consist basically of spatial and temporal patterns. For example, the visual cortex receives a bundle of inputs from the optic nerve, which is connected to the retina in your eye. These inputs in raw form represent the image that is being projected onto the retina in terms of a spatial pattern of light frequencies and amplitudes, and how this image (pattern) is changing over time. Similarly the auditory nerves carry input from the ear in terms of a spatial pattern of sound frequencies and amplitudes which also varies with time, to the auditory areas of the cortex. The main point is that in the brain, input from different senses is treated the same way: as a spatio-temporal pattern. And it is upon these patterns that the cortical algorithm goes to work. This is why spoken and written language are perceived in a remarkably similar way, even though they are presented to us completely differently in simple sensory terms. (You almost hear the words “simple sensory terms” as you read them, don’t you?)
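One way to picture this claim concretely (a sketch of my own, not anything from Hawkins’s book; the function names and the numbers are invented purely for illustration) is to notice that a snippet of “retinal” input and a snippet of “cochlear” input can be handed to the very same piece of code, because each is just an array of activity indexed by channel and time:

    import numpy as np

    # Two different senses, one data type: a patch of retinal activity over time
    # and a bank of cochlear frequency channels over time are both just 2-D
    # arrays of shape (channels, time steps). Whatever operates on that shape
    # never needs to know which sense the pattern came from.

    time_steps = 50
    visual_input = np.random.rand(32, time_steps)     # 32 "retinal" channels x time
    auditory_input = np.random.rand(32, time_steps)   # 32 frequency bands x time

    def cortical_region(spatio_temporal_pattern):
        """Stand-in for the single cortical algorithm: it sees only channels
        and time steps, never 'vision' or 'hearing'."""
        channels, steps = spatio_temporal_pattern.shape
        return f"pattern of {channels} channels over {steps} time steps"

    print(cortical_region(visual_input))
    print(cortical_region(auditory_input))   # the same treatment for both senses

The point of the sketch is only that, once everything has been reduced to a spatio-temporal pattern, nothing in the data itself tells you which sense it came from.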

Now we get to one of Hawkins’s key ideas: unlike a computer (whether sequential or parallel), the brain does not compute solutions to problems; it retrieves them from memory: “The entire cortex is a memory system. It isn’t a computer at all.” (Ibid., p. 68) To illustrate what he means by this, Hawkins provides an example: imagine, he says, catching a ball thrown at you. If a computer were to try to do this, it would attempt to estimate its initial trajectory and speed and then use some equations to calculate its path, how long it will take to reach you, etc. This is not anything like what your brain does. So how does your brain do it?

When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons. (Ibid., p. 69)

At first blush it may seem that Hawkins is getting away with some kind of sleight of hand here. What does he mean that the memories are just retrieved and adjusted for the particulars of the situation? Wouldn’t that mean that you would need millions of memories for every single scenario like catching a ball, because every situation of ball-catching can vary from another in a million little ways? Well, no. Hawkins now introduces a way of getting around this problem, and it is called invariant representation, which we will get to soon. Cortical memories are different from computer memory in four ways, Hawkins tells us:

  1. The neocortex stores sequences of patterns.
  2. The neocortex recalls patterns auto-associatively.
  3. The neocortex stores patterns in an invariant form.
  4. The neocortex stores patterns in a hierarchy.

Let’s go through these one at a time. The first feature is why, when you are telling a story about something that happened to you, you must go in sequence (and why people often include boring details in their stories!) or you may not remember what happened; like only being able to remember a song if you sing it to yourself in sequence, one note at a time. (You couldn’t recite the notes backward–or even the alphabet backward very fast–while a computer could.) Even very low-level sensory memories work this way: the feel of velvet as you run your hand over it is just the pattern of very quick sequential nerve firings that occurs as your fingers run over the fibers. The pattern is a different sequence if you are running your hand over gravel, say, and that is how you recognize it. Computers can be made to store memories sequentially, such as a song, but they do not do this automatically, the way the cortex does.
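To make the one-way nature of sequence recall concrete, here is a minimal sketch (my own toy illustration, not anything from Hawkins’s book; the names and the toy “alphabet” are made up) of a memory that stores only the transition from each pattern to the one that follows it, so it can replay a sequence forward from a cue but has no cheap way to run it backward:

    # A toy sequence memory: each pattern is stored only as a pointer to the
    # pattern that follows it, so recall runs forward from a cue, one step at
    # a time; reversing the sequence would require inverting the whole store.

    def learn_sequence(patterns):
        """Store a sequence as first-order transitions (pattern -> next pattern)."""
        return {current: nxt for current, nxt in zip(patterns, patterns[1:])}

    def recall_forward(transitions, cue):
        """Replay the stored sequence starting from a cue."""
        recalled = [cue]
        while recalled[-1] in transitions:
            recalled.append(transitions[recalled[-1]])
        return recalled

    memory = learn_sequence(list("ABCDEFG"))
    print(recall_forward(memory, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
    print(recall_forward(memory, "D"))   # ['D', 'E', 'F', 'G'] -- a mid-sequence cue works too

The storage format itself biases recall toward playback in order, which is the flavor of the claim being made here about cortical memory.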

Auto-associativity is the second feature of cortical memory and what it means is that patterns are associated with themselves. This makes it possible to retrieve a whole pattern when only a part of it is presented to the system.

…imagine you see a person waiting for a bus but can only see part of her because she is standing partially behind a bush. Your brain is not confused. Your eyes only see parts of a body, but your brain fills in the rest, creating a perception of a whole person that’s so strong you may not even realize you’re only inferring. (Ibid., p. 74)

Temporal patterns are also similarly retrieved and completed. In a noisy environment we often don’t hear every single word that someone is saying to us, but our brain fills in with what it expects to have heard. (If Robin calls me on Sunday night on his terrible cell phone and says, “Did you …crackle-pop… your Monday column yet?” my brain will automatically fill in the word “write.”) Sequences of memory patterns recalled auto-associatively essentially constitute thought.
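The pattern completion described here is roughly what a classic auto-associative network does, and a minimal sketch of that general idea follows (a standard Hopfield-style network, offered as my own illustration; it is not the circuitry Hawkins describes, and the sizes and names are arbitrary). Patterns are stored in a single weight matrix, and a partially corrupted cue settles back onto the nearest stored pattern:

    import numpy as np

    # A toy auto-associative memory (Hopfield-style). Patterns of +1/-1 values
    # are stored via Hebbian outer products; recall starts from a noisy or
    # partial cue and repeatedly pulls the state toward a stored pattern.

    def store(patterns):
        """Build the weight matrix from a stack of +/-1 patterns."""
        n = patterns.shape[1]
        weights = np.zeros((n, n))
        for p in patterns:
            weights += np.outer(p, p)
        np.fill_diagonal(weights, 0)          # no self-connections
        return weights / len(patterns)

    def recall(weights, cue, steps=10):
        """Complete a partial pattern by letting the network settle."""
        state = cue.copy()
        for _ in range(steps):
            state = np.sign(weights @ state)
            state[state == 0] = 1             # break ties arbitrarily
        return state

    rng = np.random.default_rng(0)
    stored = rng.choice([-1, 1], size=(3, 100))      # three random 100-unit patterns
    W = store(stored)

    cue = stored[0].copy()
    cue[:30] = rng.choice([-1, 1], size=30)          # scramble roughly 30% of the pattern
    completed = recall(W, cue)
    print((completed == stored[0]).mean())           # typically 1.0: the memory fills in the gap

The person behind the bush is, on this picture, the same move performed on a far richer representation: a fragment of a familiar pattern is enough to reinstate the whole of it.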

Now we get to invariant representations, the third feature of cortical memory. Notice that while computer memories are designed for 100% fidelity (every bit of every byte is reproduced flawlessly), our brains do not store information this way. Instead, they abstract out important relationships in the world and store those, leaving out most of the details. Imagine talking to a friend who is sitting right in front of you. As you talk to her, the exact pattern of pixels coming over the optic nerve from your retina to your visual cortex is never the same from one moment to another. In fact, if you sat there for hours, no pattern would ever repeat because both of you are moving slightly, the light is changing, etc. Nevertheless you have a continuous sense of your friend’s face being in front of you. How does that happen? Because your brain’s internal pattern of representation of your friend’s face does not change, even though the raw sensory information coming in over the optic nerve is always changing. That’s invariant representation. And it is implemented in the brain using a hierarchy of processing. Just to give a taste of what that means, every time your friend’s face or your eyes move, a new pattern comes over the optic nerve. In the visual input area of your cortex, called V1, the pattern of activity is also different each time anything in your visual field moves, but several levels up in the hierarchy of the visual system, in your facial recognition area, there are neurons which remain active as long as your friend’s face is in your visual field, at any angle, in any light, and no matter what makeup she’s wearing. And this type of invariant representation is not limited to the visual system but is a property of every sensory and cortical system. So how is this invariant representation accomplished?
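A crude way to see how stacking stages can buy this kind of invariance (only a schematic of the general idea, not Hawkins’s actual cortical mechanism; the toy “images” and function names are invented) is to let a low level respond to a feature at each specific position while a higher level pools over all of those detectors. The low-level pattern changes every time the input shifts; the high-level response does not:

    # Low level: one detector per position, so its activity pattern changes
    # whenever the input shifts. High level: a unit that pools over all the
    # detectors, so its response is the same wherever the feature appears.

    def low_level(image, feature):
        """Position-specific detectors: 1 wherever the feature matches exactly."""
        return [1 if image[i:i + len(feature)] == feature else 0
                for i in range(len(image) - len(feature) + 1)]

    def high_level(detector_outputs):
        """Pools over positions: active if the feature is present anywhere."""
        return int(any(detector_outputs))

    feature = "XX"
    for image in ["XX......", "...XX...", "......XX"]:
        lows = low_level(image, feature)
        print(image, lows, high_level(lows))
    # The low-level pattern differs for each shifted input, but the high-level
    # response is identical every time: a very poor man's version of the
    # face-recognition neurons that keep firing as a face moves and tilts.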

———————–

I’m sorry, but unfortunately, I have once again run out of time and space and must continue this column next time. Despite my attempts at presenting Hawkins’s theory as concisely as possible, it is not possible to condense it further without losing essential parts of it and there’s still quite a bit left, and so I must (reluctantly) write a Part III to this column in which I will present Hawkins’s account of how invariant representations are implemented, how memories are used to make predictions (the essence of intelligence), and how all this is implemented in hierarchical layers in the actual cortex of the brain. Look for it on May 8th. Happy Monday, and have a good week!

NOTE: Part III is here. My other Monday Musing columns can be found here.

Monday, April 10, 2006

Old Bev: POP! Culture

The cover of this week’s STAR Magazine features photos of Katie Holmes, Gwyneth Paltrow, Brooke Shields, Angelina Jolie, and Gwen Stefani (all heavily pregnant) and the yellow headline “Ready to POP!”  Each pregnancy, according to Star, is in some way catastrophic – Katie’s dreading her silent Scientology birth, Gwyneth drank a beer the other night, Brooke fears suffering a second bout of depression, Angelina’s daring to dump her partner, and Gwen’s thinking of leaving show business.  They seem infected, confused, in danger of combustion.  “I can’t believe they’re all pregnant all at the same time!” exclaimed the cashier at Walgreen’s as she rang up my purchases, as if these women were actually in the same family, or linked by something other than fame and success.  The cover of Star suggests that these ladies have literally swollen too big for their own good.

Britney Spears’ pregnancy last summer kicked off this particular craze of the celebrity glossy.  Each move she made, potato chip she ate, insult tossed toward Kevin, all of it was front page pregnancy news for Star and its competitors.  “TWINS?!” screamed one cover, referencing her ballooning weight. It was coverage like this that inspired Daniel Edwards’ latest sculpture, “Monument to Pro-Life: The Birth of Sean Preston,” though from his perspective the media’s take on the pregnancy was uniformly positive.  When asked why it was Britney Spears whom he chose to depict giving birth naked and on all fours on a bear skin rug, he replied, “It had to be Britney.  She was the one.  I’d never seen such a celebrated pregnancy…and I wanted to explore why the public was so interested.”

Predictably, the sculpture has attracted a fair amount of coverage in the last few weeks, most of it in the “news of the weird” category. The owners of the Capla Kesting Fine Art Gallery have made much of the title of the piece, taking the opportunity to include in the exhibit a collection of Pro-Life materials, announcing plans for tight security at the opening,  and publicizing their goal of finding an appropriate permanent display for the work by Mother’s Day.  Edwards states that he’s undecided on the abortion issue, Britney has yet to comment on the work, and the Pro-Lifers aren’t exactly welcoming the statue into their canon.  For all of the media flap, I was expecting more of a crowd at Friday’s opening (we numbered only about 30 when the exhibit opened), and a much less compelling sculpture.

My initial reaction to photos of “Monument to Pro-Life” was that Britney’s in a position that most would sooner associate with getting pregnant than with giving birth.  Edwards, I thought, was invoking the pro-life movement as a way to protest the divorce of the sex act from reproduction. But in person, in three dimensions and life-size, the sculpture demands that the trite interpretations be dropped.  It’s a curious and exploratory work, and I urge you to go and see it if you can, rather than depend on the photos.  Unlike the pregnant women of STAR, the woman in “Monument to Pro-Life” isn’t in crisis.  She easily dominated the Capla Kesting gallery (really a garage), and made silly the hokey blue “It’s a Boy!” balloons hovering around the ceiling.  To photograph the case of pro-life materials in the corner I had to ask about five people to move – they were standing with their backs to it, staring at the sculpture.  The case’s connection to the work was flimsy, sloppy, more meaningful in print than in person.

Yes, Edwards called the piece “Monument to Pro-Life: The Birth of Sean Preston,” but I think the title aims less to signal a political allegiance than to explore the rhetoric of the abortion debate.  Birth isn’t among the usual images associated with the pro-life movement. Teeny babies, smiling children, bloody fetuses are usual, but I’ve never seen a birth depicted on the side of a van.  Pro-life propaganda is meant to emphasize the life in jeopardy – put a smiling toddler on a pro-life poster, and you’re saying to the viewer, you would kill this girl?  The bloody fetus screams, you killed this girl.  The images are meant to locate personal responsibility in the viewer.  But a birth image involves a mother, allows a displacement of that responsibility.  A birth image invokes contexts outside of the viewer’s frame of reference (but maybe she was raped! Maybe she already has four kids and no job!  Maybe she’s thirteen!), and forces the viewer to pass judgment on the mother in question.  Not all pro-lifers, not by any means, wish to punish or humiliate those women who abort their pregnancies. The preemies and toddlers and fetuses serve to inspire a protection impulse, and the more isolated those figures are from their mothers (who demand protection), the simpler the argument. Standard pro-life propaganda avoids birth images in order to isolate that protective impulse, and narrow the guilt.

Of course, the mother in this birth image has a prescribed context.  Britney Spears, according to Edwards, has made the unusual and brave choice to start a family at the height of her career, at the young age of 24.  For him, the recontextualization of “Pro-Life” seems to be not just about childbirth, but about childbirth’s relationship to ‘anti-family’ concepts of female career.  Edwards celebrates the birth of Sean Preston because of when Sean Preston was born, and to whom.  Unlike STAR, which depicts the pregnancies of successful women as dangerous grabs for more, Edwards depicts Britney’s pregnancy as a venerable retreat back to womanhood.  The image/argument would be more convincing, however, if the sculpture looked more like Britney, and if Britney were a better representative of the 24-year-old career woman. It doesn’t (the photos don’t conceal an in-person resemblance), and she isn’t (already the woman has released a greatest hits album).  Edwards would have been better served had Capla Kesting displayed a case of Britney iconography alongside the statue if he wished his audience to contemplate her decision.  But the sculpture is perfectly compelling even outside of the Britney context.

Standard pro-life rhetoric is preoccupied by transition, the magic moment of conception when ‘life begins.’  Edwards too focuses on transition, but at the other end of the pregnancy.  Sean Preston, qualified as male only by the title, is frozen just as he crowns.  He has yet to open his eyes to the world, but the viewer, unlike his mother, can see him. Many midwives and caregivers discourage childbirth in this position (hands and knees) because, though it is easy on the mother’s back and protects against perineal tearing, it is difficult to anticipate the baby’s arrival.  It’s a method of delivery that a mother should not attempt alone. The viewer of “Monument to Pro-Life” is necessarily implicated in the birth, assigned responsibility for the safe delivery of Sean Preston.

You’ve got to be up close to see this, though.  As I left the gallery, walked up North 5th to Roebling, a 60-something woman in a chic black coat stopped me.  “Who’s the artist?” she asked.  “Who is it that’s getting all the attention?”  I told her it was Daniel Edwards, but that the news trucks were there because it was a sculpture of Britney Spears giving birth on all fours.  Her eyebrows raised.  “You know, I thought it was very pornographic,” she offered, and I glanced back at Capla Kesting.  And from across the street, it did look like a sex show.

It’s a tricky game Daniel Edwards is playing.  On the one hand, “Monument to Pro-Life” is a fairly complicated (and exploitive) work; on the other, it’s a fairly boring (and exploitive) conduit of interest cultivated by STAR and the pro-life movement.  Unfortunately for Edwards, the media machine that inspired his work doesn’t quite convey it in full – the AP photograph of the sculpture doesn’t show her raised hips, and forget about Sean Preston crowning. However, the STAR website does have a mention of the sculpture, and a poll beneath the article for readers to express their opinions.  The questions: “Is it a smart thing for pregnant-again Britney Spears, who gave birth to son Sean Preston just 6 months ago, to have another child so soon after giving birth?” and “Can Britney make a successful comeback as a singer?”

Philip Larkin: Hull-Haven

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

For Gerard Manley Hopkins there was Heaven-haven, when a nun takes the veil, and perhaps a poet-priest seeks refuge, but for Philip Larkin there is no heaven. There is Hull, and that is where Larkin, largely free of metropolitan London’s seductions, finds his poetry and his poetics. Old chum Kingsley, it seems, can do his living for him there. But Larkin has more than two strings to his bow too, which awkward last meetings around the death bed show only too plainly.

Now that the usual attempts at deconstruction have almost run their course, the time has come to look at the work left. Pulling people off their plinth is a lifetime task for some who never get around to understanding that some writers say more, and more memorably, than they can ever do. Also, they don’t seem to understand that writers are just like everyone else, only with the inexplicable gift, which the said writer understands least of all, knowing that the gift, bestowed by the Muse, can depart in high dudgeon without notice. Larkin knew this, and lamented the silences of his later years.

Silence does seem to wait through his poems. They bleakly open to morning light, discover the world’s apparent heartlessness, then close with a dying fall. Occasionally ‘long lion days’ blaze, but the usual note is meditative, and sometimes grubby. What mum and dad do to you has to be lived out in extenso. Diary entries are too terrible to be seen and must be shredded. Bonfires and shreddings have a noble tradition in the history of literature. What would we have done if we had Byron’s memoirs and we were Murray and the fireplace waited?

Strange harmonies of contrasts are the usual thing in art. So if Larkin proclaimed racist sentiments in letters yet spent a lifetime in awe of jazz greats, or ran an effective university library whilst thinking ‘Beyond all this, the wish to be alone’ (‘Wants’), that is the doubleness we are all prone to. For artists there always seems to be the finger pointing, whereby perfection is expected of the artist but never required by the critic. Larkin is seen as squalid, not modern, provincial, by some. For others there are no problems. He says what they feel, and says it plainly.

If Larkin doesn’t have a mind like Emily Dickinson’s—who does—or scorns the Europeans, these are not, in themselves, things that limit the reach of his poetic. Larkin’s modest Collected Poems stands in distinct contrast to silverfish-squashing tomes groaning with overwriting. Larkin is a little like Roethke in that way. Every poem is precise, musical, clear. How infuriating it is that people do not follow artists’ wishes and publish poems never meant to see the light of day. There is a great virtue in Larkin’s kind of selectivity. Capitalism seems to require overproduction of product, and many poets have been happy to oblige. But this surfeit does the poet no long-term favours and usually ensures a partial, or total, oblivion. Tennyson and Wordsworth are great poets who clearly have survived oblivion, but who now reads through all of ‘Idylls of the King’ or ‘The Prelude’.

Larkin’s version of pastoral has its rubbish and cancer, sometimes its beautiful, clear light, its faltering perception of bliss, usually in others. Doubts about the whole poetic project surface occasionally, and what poet doesn’t empathise with that. How easy jazz improvisation seems in comparison to getting poems out and about. No doubt the improvisation comes only after mastery, control. Then comes the apparently spontaneous letting go. But the poet doesn’t see that. He/she is left with the rough bagging of words to get the music through. Larkin’s music is sedate, in the minor key. Wonder amongst daffodils or joy amongst skylarks are pleasures that always seem just over the hill, or flowing round a bend in the Humber as one gets to the embankment. Street lights seem like talismans of death, obelisks marking out seconds, hours, days, years, eternity. Work is a toad crushing you.

A great poet? The comparison with Hopkins is instructive. Hopkins makes us feel the beauty of nature, he makes us confront God’s apparent absence in the dark, or “terrible”, sonnets. It is committed writing in the best sense. The language heaves into dense music, sometimes too dense, but you always feel engaged by his best poetry. Larkin is dubious about the whole life show. The world is seen from behind glass, whiskey to hand, or in empty churches, or from windswept plains, sediment, frost or fog lapping at footfall. Hopkins loves his poplar trees; his kingfishers catch fire; weeds shoot long and lovely and lush. Grief and joy bring the great moments of insight and expression, and thus the memorability.

The case of Larkin does raise a fundamental concern regarding art and its place in society. When the upward trudge of aesthetic idealism meets the downward avalanche of political and social reality, what is the aesthetic and political fallout. With Larkin it appears to be a stoic acceptance of status quo nihilism—waiting for the doctor, then oblivion. With Celan, one cannot get further than the Holocaust. For others, a crow is an image of violence, or tulips are weighted with lead. No longer are these images of natural beauty. No doubt, for those who have just seen a contemporary exhibition at Gagosian or been reading about the latest horrors in Darfur, Larkin could seem hopelessly out of touch, and self-pitying to boot. That is not a sensible way of looking at culture. Looking for political correctness in art always leads to disappointment.

Larkin seems to fill the expectations required by late-twentieth century English aesthetics, but I wonder. When younger, I thought Stravinsky the greatest composer of the century I was born into. Now it is Rachmaninov and Prokofiev who give me more pleasure. And I find them no less ‘great’. Robert Lowell seemed the representative poet of his generation when I was at university. Now some of the work reads to me like a bad lithium trip. Does this signify cultural sclerosis on my part? We can’t have a bar of Wagner’s anti-Semitism, but that still leaves the fact of Wagner’s greatness to be confronted. The achievement is so enormous. To use a somewhat dangerous and controversial term of the moment, it shows more than intelligent design. Appeals to the Zeitgeist, a somewhat unreliable indicator of artistic excellence, are last resorts for those who like to give their critiques an apparently incontrovertible seal of approval. In the interim, culture remains dynamic and reputations sink or swim depending on factors having very little to do with intrinsic value.

In Hull Larkin found his haven, the world held warily at bay. However, the world cannot be held at bay for long. The general public want their pound of flesh, and they will take it. Hopkins’ divided soul has passed through mercy, and mercilessness, to a Parnassian plateau. Larkin has entered upon his interregnum, where an uncertain reckoning now takes shape.

The following is the first part of a two-part poem, ‘Larkin Land’, written in 1993.

           Larkin Letters

Perhaps this sifted life is right—
The best of him was poetry
Bearing acid vowels
In catalogued soliloquy,

Where art’s unspent revisions
Would liberate, restore;
Trapped in a bone enigma
Ideals could still creep through.

A fifty-dollar lettered life
Can’t give you all the facts.
When one has got a poem just right
Awkward prose seems second-best.

Judgment is mute
When words come from pain—
Beside fierce Glenlivet
These civilised spines

Stare past the face
Of a thousand-year spite;
Annexed by form,
Poems survive the killing night.

So, at end, the cost of verse
Is paid for with this strife;—
Though not asked for, given,
This England mirrored into life.

Written 1993

Monday Musing: Al Andalus and Hapsburg Austria

One probably apocryphal story of the Alhambra tells of how Emir Al Hamar of Gharnatah (Granada) decides to begin the undertaking. One night in the early 13th century, Al Hamar has a dream that the Muslims would be forced to leave Spain. He takes the dream to be prophetic and, more importantly, to be the will of God. But he decides that if the Muslims are to leave Spain, then they would leave a testament to their presence in Spain, to al Andalus. So Al Hamar begins the project (finished by his descendants) that would result in one of the world’s most beautiful palaces, the Alhambra. Muslim Spain was still in its Golden Age at this point, but also just two and a half centuries before the expulsion/reconquest. The peak of the Golden Age had probably passed, with its most commonly suggested moment coinciding with the life of the philosopher ibn Rushd, or Averroes (1126-1198 C.E.).


Muslim Spain plays an interesting role in different contemporary political imaginations. For Muslim reformers, it is an image of a progressive, forward looking and tolerant period in Islam, where thinkers such as ibn Rushd could assert the primacy of reason over revelation. For radical Islamists, it’s a symbol of Islam at the peak of its geopolitical power. For conservatives in the West it is a chapter in an off-again, on-again clash of civilizations. For Western progressives, it is an image of a noble, pre-modern multiculturalism tolerant of Christians and Jews. That is, for the contemporary imagination, it has become the political equivalent of a Rorschach.


I see no reason why I should be different in my treatment of Al Andalus. (In all honesty, I react fairly badly, I cringe, when people speak of past cultures and civilizations as idyllic, free of conflict, and held together by honor, duty, and understanding. The only thing I’ve ever been nostalgic for is futurism.) Morgan’s post last Monday on Joseph Roth reminded me of Andalusian Spain, of all things.

The Hapsburg Empire is the other Rorschach for the imagination of political history. The Austro-Hungarian Empire carries far less baggage from its involvement with the present than Andalusia does, but it certainly suffered its fair share. The breakup of the Soviet Empire and the unleashing of “pent up” or “frustrated” national aspirations had many looking to the Hapsburgs as a model of a noble, pre-modern multiculturalism.

My projection onto these inkblots of history is something altogether different. In the changing borders and bibliographies of Andalusian and Austrian history, I see societies that reach a cultural and intellectual peak as (or is it because?) they are overcome with panic about the end of their world. A “merry” or “gay apocalypse” is how Hermann Broch, the author of the not so merry but apocalyptic Death of Virgil, described the period. This sentiment echoes not just in literature but even in a book as systematic as Karl Polanyi’s The Great Transformation. Somehow it is clear that Karl Kraus’ Grumbler, the pessimistic commentator of The Last Days of Mankind who watches the world go mad and then be annihilated by the cosmos as punishment for the world war, was lying in wait long before the catastrophe, that is, during the Golden Age itself.

The early 13th century was hardly a trough for the Moors in Spain, just as the period before World War I was not a cultural malaise for the Austrians, or the rest of Europe for that matter. Quite the contrary. If there is an image that these societies evoke, it is feverish activity, even if it’s not the image that, say, comes across in Robert Musil’s endless description of the society, The Man Without Qualities. Broch would write himself to death in some bizarre twist on Scheherazade.


The inscriptions on the Alhambra, such as “Wa la ghalib illa Allah” (“There is no conqueror but God”), are written in soft stone. They have to be replaced, and thereby they require the engagement of the civilization that is to succeed the Moors. Quite an act of faith. While it may be the case that some, such as Kraus (or Stefan Zweig), expected the end of all civilization, Austrian thought and writing of the era show a similar faith despite the Anschluss. Admittedly, you have to really look for it. And it certainly did export some of the better minds of the time—including Broch, Polanyi, Karl Popper, and Friedrich von Hayek, albeit for reasons of horror that are to its shame.

It is harder to know what to make of these civilizations, for which an awareness or expectation of their end spurs many of their greatest achievements. There aren’t too many of them. They have in common the fact that they are remembered for relative tolerance, but that could just be a prerequisite for flourishing in the first place. Their appeal is, however, clear—as close to an image as a society can have of creating, thinking and engaging, even through despair, some way to survive the apocalypse.

Happy Monday.

Random Walks: Past Perfect

I’m a huge fan of the Japanese anime series Fullmetal Alchemist, a bizarre, multi-faceted mix of screwball comedy, heartfelt pathos, and gut-wrenching tragedy — not to mention rich metaphorical textures. It’s the story of two brothers, Edward and Alphonse Elric, who lead an idyllic existence, despite their alchemist/father’s prolonged absence because of an ongoing war. Then their mother unexpectedly dies. Devastated by their loss, with no word from their father and no idea of where he might be, the two brothers take matters into their own hands. They attempt an alchemical resurrection spell — the greatest taboo in their fictional world — to raise her from the dead. They pay an enormous price for their folly: Edward loses an arm and a leg, while Alphonse loses his entire body; his soul only remains because Edward managed to attach it to a suit of armor. The story arc of the series follows the brothers as they roam the countryside, searching for a mythical Philosopher’s Stone with the power to undo the damage and restore their physical bodies.

The series touches on so many universal human themes, but for me the most poignant is the fact that the brothers’ lives are destroyed in a single shattering event over which they have no control: the death of their beloved mother. I’ve been ruminating on this notion of world-shattering of late because this month marks the 100th anniversary of the great earthquake of 1906 that essentially leveled the city of San Francisco, which had the misfortune of being located right at the quake’s epicenter. The shocks were felt from southern Oregon down to just south of Los Angeles, and as far inland as central Nevada, but most of the structural damage and the death — perhaps as many as 3000 lives lost — occurred in the Bay Area. The carefully constructed worlds of tens of thousands of people were literally shattered in just under a minute.

Like the Brothers Elric, until that fateful morning, San Francisco basked in the glow of its successful transition from tiny frontier town to a thriving, culturally diverse metropolis. The city benefited greatly from the California Gold Rush, as miners flocked there in search of (ahem) “entertainment,” and to stock up on basic supplies before returning to their prospecting. A few lucky ones struck it rich and opted to settle there permanently. The population exploded, so much so that by the 1850s, the earlier rough-and-tumble atmosphere was limited to certain lower-income areas. Elsewhere, theaters, shops and restaurants flourished, earning San Francisco the moniker, “the Paris of the West.”

In 1906, big-name stars like the actress Sarah Bernhardt and famed tenor Enrico Caruso performed regularly in the city’s theaters. A local restaurant called Coppa’s was the preferred hangout for a new breed of young Bohemians: intellectuals, artists, and writers like Frank Norris and Jack London. A recent NPR tribute to the thriving arts scene of that time revealed a fascinating historical tidbit: one of the (apparently depressed) regulars at Coppa’s had scrawled a warning on the wall: “Something terrible is going to happen.”

On April 18th, something terrible did happen: the city was rocked by violent tremors in the wee hours of the morning. Emma Burke, the wife of a prominent attorney, recalled in a memoir (part of a fascinating online collection of documents at the Virtual Museum of the City of San Francisco), “The floor moved like short choppy waves of the sea, criss-crossed by a tide as mighty as themselves. The ceiling responded to all the angles of the floor…. How a building could stand such motion and keep its frame intact is still a mystery to me.” Not all buildings remained intact; roofs caved in, and chimneys collapsed. People ran into the streets, fearing to remain in their unstable homes, and thousands camped out in Golden Gate Park. Making the best of a bad situation, some people adorned their crude tents and shelters with handmade signs: “Excelsior Hotel,” “The Ritz,” or “The Little St. Francis.” The Mechanics’ Pavilion became a makeshift hospital, with some 200 patients lying on rows of mattresses on the floor, awaiting transport to Harbor Emergency Hospital.

Despite the devastation, the city might yet have survived, structurally, were it not for the fires that broke out. In Fullmetal Alchemist, the Elric brothers make their situation worse by attempting a taboo resurrection, ignorant of the price that would be exacted. Similarly, some quake survivors attempted to start morning fires, not realizing the danger of their ruined chimneys. Worse, the quake had destroyed the water mains, making it difficult to douse the flames. The fires raged out of control for days; the firefighters had to resort to dynamiting entire blocks in advance of the flames, hoping to create a breach over which the fires couldn’t leap. It wasn’t the most effective method, and by the time the fires were quenched, most people had lost everything, and very few structures remained standing. Many accounts of those who survived speak of the flames burning so brightly that night seemed almost like day. Portrait photographer Arnold Genthe recalled in his own memoir, “All along the skyline, as far as the eye could see, clouds of smoke and flames were bursting forth.”

We owe a great historical debt to Genthe, who provided a photographic record of the events for posterity. Within a few hours, he had snagged a small 3A Kodak Special camera from a local dealer whose shop had been seriously damaged by the quake, stuffed as many rolls of film into his pockets as he could manage, and spent the entire day photographing various scenes of the disaster, blissfully unaware that the fires would soon destroy all his material possessions.

Among Genthe’s more amusing anecdotes is his recollection of bumping into Caruso — who had performed in Carmen the night before at the Mission Opera House — outside the St. Francis Hotel, one of the few structures that had not been severely damaged by the quake. The proprietors were generously handing out free coffee, bread and butter to the assembled refugees. The great tenor had been forced to abandon his luxury suite clad only in his pajamas, with a fur coat thrown over for warmth. He was smoking agitatedly and muttering to himself, “‘Ell of a place! ‘Ell of a place! I never come back here!” (Genthe wryly observes, “And he never did.”)

Caruso’s loyal valet eventually secured a horse and cart to transport his master out of the disaster area. Others soon followed suit in a mass exodus to escape the flames; thousands streamed toward the ferries waiting to take them across the bay to safety. They fled on foot, carrying whatever salvaged belongings they could manage, or transporting them on various makeshift vehicles: baby carriages, toy wagons, boxes mounted on wheels, trunks placed on roller skates. Genthe recalled seeing two men pushing a sofa on casters, their possessions piled on top of the furniture. He claimed to never forget “the rumbling noise of the trunks drawn along the sidewalks, a sound to which the detonations of the blasting furnished a fitting contrapuntal accompaniment.”

For all the tragic plot points in Fullmetal Alchemist, as much as the Elric brothers continue to suffer, there are still moments of humor, sweetness, and evidence of the elasticity of the human spirit. The residents of San Francisco were no exception. “I never saw one person crying,” Emma Burke recalled. Indeed, the disaster seemed to bring out the best in people, with rich and poor standing on line at relief stations to receive daily rations, and people sharing the few resources they had with those around them, regardless of race or class. Anyone with a car used their vehicle to transport the wounded and dead to hospitals and morgues, respectively. Emma Burke recalled one chauffeur who “ran his auto for 48 hours without rest,” and George Blumer, a local doctor, ran himself ragged for more than a week tending to the sick and wounded all over town. There was also a distinct lack of self-pity; most people seemed resigned to their plight, accepting the hand Nature had unexpectedly dealt them. Nobody ever said the world was perfect.

That’s not just an aphorism; current scientific thought bears it out. The universe isn’t perfect, although some string theorists believe in the concept of “supersymmetry”: a very brief period of time in which our cosmos was a perfectly symmetrical ten-dimensional universe, with all four fundamental forces unified at unimaginably high energies. But that universe was also highly unstable and cracked in two, sending an immense shock wave reverberating through the fabric of space-time. There may be two separate space-times: the one we know and love, with three dimensions of space and one dimension of time, and another with six dimensions, too small to be detected even with our most cutting-edge instruments. And as our four-dimensional universe expanded and cooled, the four fundamental forces split off one by one, starting with gravity. Everything we see around us today is a mere shard of that original ten-dimensional perfection. Supersymmetry is broken.

Physicists aren’t sure why it happened, but they suspect it might be due to the incredible tension and high energy required to maintain a supersymmetric state. And on a less cosmic scale, symmetry breaking appears to be a crucial component in many basic physical processes, including simple phase transitions: for instance, the critical temperature/pressure point where water turns into ice. It seems that some kind of symmetry breaking is woven into every aspect of our existence.

Paradoxically, shattered symmetries may have made our material world possible. In the earliest days of our universe, there were constant high-energy collisions between particles and antiparticles (matter and antimatter). When a particle met its antiparticle, the two would annihilate each other and produce a burst of radiation. There should have been equal numbers of each — except there weren’t. At some point, matter gained the upper hand. All the great, beautiful, awe-inspiring structures we see in our universe today are the remnants of those early collisions — the few surviving particles of matter.

The same is true of time. Theoretically, time should flow in both directions. But on our macroscopic level, time runs in one direction: forward. Drop a glass so that it shatters on the floor, and that glass won’t magically reassemble. What’s done cannot be undone. We can’t freeze a perfect moment, but the very impermanence of that perfection is what makes it meaningful.

For all the devastation it wreaks, shattered symmetry also gives the opportunity for rebuilding. Merely a month after the San Francisco earthquake, Sarah Bernhardt performed Phaedre, free of charge, for more than 5000 survivors at the Hearst Greek Theater at the University of California, Berkeley. Other performers followed suit (except for the traumatized Caruso), and within four years, many of the theaters had been rebuilt. In 1910, opera star Louisa Tetrazzini gave a free concert downtown to celebrate the city’s revival. The disaster also laid the foundation for modern seismology, specifically the elastic rebound theory developed by H.F. Reid, a professor at Johns Hopkins University. He attributed the cause of earthquakes to sliding tectonic plates located around fault lines; before then, scientists thought that fault lines were caused by quakes.

One of the Major Arcana cards in the traditional tarot deck is the Tower, depicting sudden, violent devastation that causes the once-impressive edifice to crumble, its symmetry utterly destroyed as it is reduced to rubble. It wouldn’t be described as an especially fortuitous card. But out of the Tower’s rubble comes an opportunity to rebuild everything from scratch, just like the violent environment of our baby universe eventually produced breathtaking celestial beauty. Change is built into the very mechanisms of the cosmos. Like the early supersymmetric universe, perfection is a static and unnatural state that cannot — and probably should not — be maintained. Observes Edward’s mentor, Roy Mustang (a.k.a. the Flame Alchemist), “There is no such thing as perfection. The world itself is imperfect. That’s what makes it so beautiful.”

Below the Fold: Collapsing General Motors and the Dying American Dream, or Washington Fizzles while Detroit Burns

The Leviathan of American capitalism is dying. And what is bad for General Motors is bad for America. But few, aside from Wall Street arbitrageurs casting lots over the firm’s remains, seem to care.

General Motors from almost every vantage point was the instrument of the post-World War II American Dream. Peter Drucker’s rigorous analysis of Alfred P. Sloan’s GM empire fueled the development of modern business management theory. Walter Reuther and the United Auto Workers played the part of the exemplary progressive union, driving General Motors into becoming the national sponsor of a business-based welfare capitalism for workers. Guaranteed annual incomes, annual productivity raises, cost of living allowances, health insurance, and corporate-guaranteed pensions, in addition to good wages, were the fruits of forty years of conflict and cooperation between union and the great Goliath. Each side, it can be said in retrospect, exceeded expectations in moving forward the frontiers of collective bargaining to include an American dream for all. Reuther even dared try to negotiate car prices to make cheap transport available to American workers. Charles Wilson, the GM head famous for the “what’s good for General Motors” phrase, took progressive business unionism to its heights, sponsoring the first cost of living wage increase clause in the belief that workers needed protection against the wage erosions of inflation.

Good wages, welfare state, and a Chevy under the carport were all made possible for millions of American workers by the unlikely alliance of General Motors and the United Auto Workers. In 1948, only half of all American households owned a car; by 1968, 80% did. Several million other American workers got roughly the same deal pioneered in Detroit.

And there were piles of profits. According to the labor historian John Barnard, Detroit automakers between 1947 and 1967 were getting a 17% annual return on their capital, twice as great as that of any other manufacturing sector. Between 1947 and 1969, automakers earned $35 billion in profits, an astonishing sum in yesterday’s dollars. From the end of the war to the end of American industry’s “golden age” in 1972, the Big Three made over 200 million cars and trucks.

And then the wheels began to come off. Oil crises, recessions, inflation, and the corporate inability to copy Japanese innovations in total quality control started the downward spiral in which General Motors, and to a lesser extent, Ford, find themselves caught up today. Toyota will surpass GM as the world’s largest car producer this year, while Toyota and Honda combined now out-produce GM in America. General Motors now loses $2300 per vehicle; Toyota makes $1500 per vehicle. Each GM vehicle carries $1500 in health care costs, $1300 more per vehicle than a US-made Toyota.

GM lost over $10 billion last year, and has offered to buy out 30,000 of its 113,000 blue-collar workers in the coming year. Thousands of white-collar workers are being severed without any generous terms attached. The firm is selling off a majority interest in its lucrative finance arm, General Motors Acceptance Corporation, as well as much of its holdings in several Japanese vehicle manufacturers.

There are two basic causes of the decline. First, GM runs less efficient production lines, taking 34 hours to make a vehicle to Toyota’s 28. Instead of closing the gap, GM is falling further behind, as Toyota is making faster efficiency gains than GM. GM operates at 85% capacity, while Toyota runs at 107% capacity. Coupled with this management failing, second, is the fact that while US Toyota’s hourly wages are only 13% lower than GM’s, Toyota’s labor force is smaller, younger, and healthier. Toyota, having only begun producing vehicles in the United States in 1986, also has but a handful of retirees – 1600 to be precise. In contrast, GM has 460,000 retirees whose needs, along with those of their families, raise the total hourly labor cost for a GM worker to $73, an amount 52% more than an hourly worker costs US Toyota.

The road to car hell for GM is no doubt paved with bad decisions like buying SAAB, which continues to go its own way (down); investing in FIAT, and then having to bribe FIAT to avoid having to buy the all-but bankrupt firm; pushing gas-guzzling SUVs right into the face of a predictable oil price rise; and missing the hybrid mini-boom. These are just the highlights. It is also hard to understand how management’s plans to shrink its American operations will enable it to raise more needed capital for investment and to support the pension and retiree health care costs that figure importantly in its unprofitability. One wonders whether the newly announced downsizing is the first step in a business plan that includes eventual bankruptcy, whose proceedings might offer the company the opportunity to shed retirees and their costs, and perhaps much of its employee liabilities altogether.

GM, or for that matter Ford or the UAW, cannot be held responsible for the national indifference to their fate. In 1979, the federal government bailed out Chrysler, floating bonds that allowed the firm to invest in new products, plants, and technology. No hint of a repeat thus far. Nothing more at this point than a letter from two members of Congress to Delphi, parts manufacturer, former GM subsidiary, and key contributor to the GM fiscal mess, urging the firm to engage in good faith bargaining with its unions. Another two members of Congress have filed a bill to prevent a firm like Delphi from dumping labor agreements in bankruptcy court while providing bonuses for bosses and shifting corporate money into offshore accounts.

Why no more than a muffle from Congress? Why silence from the Executive? In part, because saving General Motors and securing its workers would run against the prevailing economic orthodoxy of our time. If General Motors cannot be competitive, whisper the market-mentalists, then to the others should go the spoils of the American auto economy. If Toyota workers in Tennessee are more productive than General Motors workers in Detroit, then, according to dogma, our economy will function more efficiently with more Toyota workers and fewer General Motors workers. We, that elusive we, will be better off, market enthusiasts would tell us, however painful handling these externalities, those expensive retirees, their medical costs, and the medical costs of current workers turns out to be.

Why is bankruptcy the only tool in the kit today, and particularly a bankruptcy process increasingly adept at dispossessing workers and retirees of anything more than lower wages, benefits, and pensions? One reason among the many that is relevant here is that our ruling elite, blindly committed to a concept of free trade that is injurious to the workers in rich and poor countries alike, has kicked away the only real escape ladder for a massive economic and social problem of the sort faced by GM. Under the rules of the World Trade Organization (WTO), a bail-out of the firm would be considered an illegal subsidy violating the terms of the agreement. If the American political elite is going to fight to advance Boeing’s interests against the European Airbus by arguing that the Europeans are subsidizing Airbus, it would be indelicate, indeed embarrassing and compromising, to be subsidizing American car firms in their battles with Japanese, European, and Korean competitors at home.

Ah, and then another reason is adduced for Washington’s silence in the face of Detroit’s agony. Our elite has moved on: cars are so last century. The capacity of American firms to produce them is seen as of marginal significance in the desire to achieve economic mastery of the world. Information, banking, finance, drugs, and biotechnology are tomorrow’s American advantage, and the WTO rules were fixed so that these industries could expand relatively easily world-wide. Aside from succumbing to quadrennial blackmail by farm bill and being held hostage by agro-industrial interests, the only manufacturing industry the US elite aids is the military/defense sector that the government now supports to the tune of half a trillion dollars a year. In addition, the US government provides the military industry with a staff of uniformed sales representatives from the Pentagon and an overseas finance bank that supports its sales. As C. Wright Mills observed fifty years ago, the unholy mixture of the military, politicians, and corporations producing the weapons of war is the basis for the modern American power elite’s political regimes.

Perhaps the Big Three should have stayed in tanks and planes after World War II. Think of the profit margins and political protection they would be enjoying now. Think how an Abrams tank production line would have clarified the elite mind on the matter of saving General Motors.

Also exposed by Washington’s silence is that the elite wants to avoid picking up the tab for the corporate welfare state that General Motors, the United Auto Workers, Ford and Chrysler have built. General Motors has a single-payer health care system: why not simply federalize it? And those of the others? Why not assume the pension systems of the Big Three, ensuring that workers would be paid dollar for dollar what they expected, while using the government’s bonding authority to stretch out the companies’ liabilities?

Of course, our gang’s problem is how the government could help GM, the Auto Workers, Ford, and Chrysler without extending protections to the rest of us. They could be caught in a tricky game, because equity issues could trigger an avalanche of resentment on the part of the rest of us just as easily as such help could salve the wounds of a sick corporation and its workers, past and present.

Readers, please take notice. First, this is no plea for economic nationalism. If Toyota USA faced the same problems, the same remedies would apply. It is about workers and maintaining a decent way of life. Second, this is no plea for trade protection. No barriers to trade are recommended. However, a state that ignores the nation’s economy, fails to regulate firms of whatever national origin in the interests of working people, and refuses to pick up the tab for the basic needs of its citizens contemplates both misery and revolt.

Monday, April 3, 2006

Monday Musing: The Palm Pilot and the Human Brain

Today I would like to explain something scientists know well: how computers work, and then use some of the conceptual insights from that discussion to present an interesting recent model of something relatively unknown: how human brains might work. (This model is due to Jeff Hawkins, the inventor of the Palm Pilot–a type of computer, hence the title of this essay.) This may well be rather too-ambitious a task, but oh, well, let’s see how it goes…

Part I: How Computers Work

Far too few people understand how computers operate. Many professional computer programmers, even, would be hard-pressed to explain the workings of the actual hardware, and may well never have heard of an NPN junction, while the average computer user certainly rarely bothers to wonder what goes on inside the CPU (Central Processing Unit, like Intel’s Pentium chip, for example) of her machine when she highlights a paragraph in MS Word and clicks on the “justify” button on the tool bar, and the right margin of the text is instantly and seemingly magically aligned. This lack of curiosity about an amazing technological achievement is inexplicable to me, and it is a shame because computers are extremely beautiful in their complex multi-layered structure. How is it that a bunch of electrons moving around on a tiny silicon wafer deep inside your machine manages to right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather-system, and a million other things?

What’s equally weird to me is that I haven’t ever seen a short, comprehensive, and comprehensible explanation of how computers work, so I’m going to give you one. This isn’t going to be easy for me or for you, because computers are not trivial things, but I am hoping to provide a fairly detailed description, not just a bunch of confusing analogies. In other words, this is going to take some strenuous mental effort, and I encourage you to click on the links that I will try to provide, for further details and discussion of some of the topics I bring up. (The beginning part may be tedious for some of you who already know something about computers, but please bear with me.) Last preliminary comment: I will try to make this as simple as possible, but for those of you who don’t know extremely basic things like what electrons are, I really don’t know what to tell you, except that you should. (The humanities equivalent of this scientific ignorance might be someone who doesn’t know what, say, a sonnet is.) Oh, go ahead, click on “electrons.” I’ll wait.

——————–

Computers are organized hierarchically with layers of conceptual complexity built one on top of the other. This is similar to how our brains work. What I mean is the following: suppose my wife Margit asks me to go buy some bread. It is a simple enough instruction, and she can be fairly certain that a few minutes later I will return with the bread. Here’s what happens in my brain when I hear her request: I break it down into a series of smaller steps something like

Get bread: START

  1. Get money and apartment keys.
  2. Go to supermarket.
  3. Find bread.
  4. Pay for bread.
  5. Return with bread.
  6. END.

Each of these steps is then broken down into smaller steps. For example, “Go to supermarket” may be broken down as follows:

Go to supermarket: START

  1. Exit apartment.
  2. Walk downstairs.
  3. Turn left outside the building and walk until Broadway is reached.
  4. Make right on Broadway and walk one and a half blocks to supermarket.
  5. Make right into supermarket entrance.
  6. END.

Similarly, “Exit apartment” is broken down into:

Exit apartment: START

  1. Get up off couch.
  2. Walk forward three steps.
  3. Turn right and go down hallway until the front door.
  4. If door chain is on, undo it.
  5. Undo deadbolt lock on door.
  6. Open door.
  7. Step outside.
  8. END.

Well, you get the idea. Of course, “Get up off couch” translates into things like “Bend forward” and “Push down with legs to straighten body,” etc. “Bend forward” itself translates into a whole sequence of coordinated muscular contractions. Each muscle contraction is actually a series of biochemical events that take place in the nerve and muscle fibres, and you can continue breaking each step down in this manner to the molecular or atomic level. Notice that most of the action occurs below the threshold of consciousness, with only the top couple of levels normally available to our conscious minds. Also, I have simplified the example in significant ways, most importantly by neglecting the role of memory retrieval and storage. (There are many retrievals involved here, such as remembering where the store is, where my apartment door is, and even how to walk!) Each subset of instructions in this example is what has come to be known as a subroutine. The beauty of this scheme is that once you have worked out the sequence of smaller steps needed to accomplish a repetitive task which is one level higher in the hierarchy, you can just store that sequence in memory, and you don’t ever need to work it out again. In other words, you can combine subroutines from a given layer into a subroutine which accomplishes some more general task in a higher layer. For example, one could combine the “Get bread” subroutine with the “Get newspaper” and “Get eggs” and “Get coffee” and “Drop off dry-cleaning” subroutines into a “Sunday morning chores” subroutine, which I might then do with little thought every Sunday morning.
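
For readers who like to see the idea spelled out in code, here is a minimal sketch in C of the same hierarchy. The function names are hypothetical and each stub merely prints what it would do, but the shape mirrors the scheme above: higher-level subroutines are assembled by calling lower-level ones, and once written they can be reused without rethinking any of the details.

    #include <stdio.h>

    /* Lowest level shown here; in reality these too would decompose further,
       all the way down to individual muscle commands. */
    static void exit_apartment(void)      { printf("  exit the apartment\n"); }
    static void walk_to_supermarket(void) { printf("  walk to the supermarket\n"); }

    static void go_to_supermarket(void)
    {
        exit_apartment();
        walk_to_supermarket();
    }

    static void get_bread(void)
    {
        printf("get money and apartment keys\n");
        go_to_supermarket();
        printf("  find and pay for the bread\n");
        printf("return home with the bread\n");
    }

    static void get_newspaper(void) { printf("get the newspaper (details omitted)\n"); }

    /* A higher-level subroutine reuses the ones below it without re-deriving
       any of their steps. */
    static void sunday_morning_chores(void)
    {
        get_bread();
        get_newspaper();
    }

    int main(void)
    {
        sunday_morning_chores();
        return 0;
    }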

This is how computers are able to do such extraordinary things. But I would like to explain some of the detail to you, and the best way to explain it is, I think, again by example. When a user highlights a paragraph of text and clicks on the justify button, here is some of what happens: a subroutine perhaps called “Justify right-hand margin” kicks in. (What that means is that control of the CPU is turned over to this subroutine.) This is what a primitive form of the subroutine might look like (in actual fact, many other things are taken into account) in what programmers call pseudo-code (an informal preliminary way of writing instructions–or laying out an algorithm–which are later carefully translated by the programmer into a higher-level computer language such as BASIC, FORTRAN, Pascal, or C):

Justify right-hand margin: START

  1. First determine the printed width of the text by subtracting the left margin position from the right.
  2. Build a line of text by getting the input (paragraph) text a word at a time. Test to see that the length of the text is less than the printed width.
  3. Output the first word with no following space.
  4. Determine the length of the remaining text, the available space, and the number of word spaces (the same as the remaining words). Divide to get the target word space. (Be sure to take into account the spaces in the string.)
  5. Output the word space, and the next word.
  6. Return to STEP 4 if there is more to print.
  7. Return to STEP 2 for the next line, until no more lines are left.
  8. END.

Of course, FORTRAN or Pascal, or C programmers don’t spend a lot of time actually writing the code (the actual “text” of higher level computer languages is called “code” and the part of programming which takes an algorithm such as the one given above and translates it into the particular syntax of a given language such as C, is called “coding”) for such things, because once they have been written by someone (anyone), they are put into libraries of subroutines and can subsequently be used by anyone needing to (in this case) justify text. Such libraries of useful subroutines are widely available to programmers.
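
To give a flavor of what such a library routine might look like, here is a rough sketch in C of the justification algorithm given above in pseudo-code. It is deliberately simplified, with a made-up fixed line width of 40 characters and a hard-coded list of words, and the extra spaces are simply pushed toward the left-hand gaps, so treat it as an illustration rather than anything a real word processor would ship.

    #include <stdio.h>
    #include <string.h>

    #define WIDTH 40   /* a made-up printed width, in characters */

    /* Print words[start..end-1] as one line; pad the gaps to fill WIDTH
       columns when justify is non-zero, otherwise use single spaces. */
    static void print_line(const char *words[], int start, int end, int justify)
    {
        int chars = 0;
        for (int i = start; i < end; i++)
            chars += (int)strlen(words[i]);

        int gaps   = end - start - 1;
        int spaces = WIDTH - chars;          /* total spaces to hand out */

        for (int i = start; i < end; i++) {
            printf("%s", words[i]);
            if (i == end - 1)
                break;                       /* no space after the last word */
            int pad = (justify && gaps > 0) ? (spaces + gaps - 1) / gaps : 1;
            for (int k = 0; k < pad; k++)
                putchar(' ');
            spaces -= pad;
            gaps--;
        }
        putchar('\n');
    }

    int main(void)
    {
        const char *words[] = { "This", "is", "only", "a", "toy", "illustration",
            "of", "how", "a", "word", "processor", "might", "justify", "the",
            "right-hand", "margin", "of", "a", "paragraph", "of", "text." };
        int n = (int)(sizeof words / sizeof words[0]);

        int start = 0;
        while (start < n) {
            int end = start, len = 0;
            while (end < n) {                /* greedily fill the line */
                int add = (int)strlen(words[end]) + (end > start ? 1 : 0);
                if (len + add > WIDTH)
                    break;
                len += add;
                end++;
            }
            if (end == start)                /* a word longer than WIDTH */
                end = start + 1;
            print_line(words, start, end, end < n);  /* don't justify last line */
            start = end;
        }
        return 0;
    }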

Suppose this subroutine above were written in C. Now what happens to the C code? Who reads that? Well, the way it works is this: there is a program called a compiler, which takes the C code and each of its instructions, and breaks them down into a simpler language called assembly language. Assembly language is a limited set of instructions which can be understood by the hardware (the CPU) itself. It consists of instructions like ADD (a, b, c) which, on a given CPU might mean, “add the content of the memory location a to the content of memory location b and store the result in memory location c”. Different CPUs have different instruction sets (and therefore different assembly languages) but the same higher level language can be used on all of them. This is because a compiler for that type of CPU will translate the higher level language into the appropriate assembly language for that CPU. In this way, a program I have written in C to justify text can easily be ported over to a different computer (from a PC to a Mac, say) in the higher level language, without having to worry about how the lower levels accomplish their task. Are you with me? Reread this paragraph if you need to.
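
As a concrete, and deliberately simplified, illustration of what a compiler does, consider the single C statement c = a + b; in the tiny program below. The assembly shown in the comment is entirely made up, in the spirit of the ADD (a, b, c) example above, and does not correspond to any particular real instruction set.

    #include <stdio.h>

    int main(void)
    {
        int a = 2, b = 3, c;

        /* A compiler for some hypothetical CPU might translate the next line
           into assembly roughly like this (instruction names invented here):
               LOAD  R1, a       ; copy memory location a into register R1
               LOAD  R2, b       ; copy memory location b into register R2
               ADD   R3, R1, R2  ; add the two registers, result into R3
               STORE R3, c       ; write the sum back to memory location c  */
        c = a + b;

        printf("c = %d\n", c);
        return 0;
    }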

Actually, assembly language is itself translated by a program called an assembler into what is called machine language, which is simply a series of zeroes and ones that the hardware can “understand” and operate on. Now we get to the hardware itself. How does the hardware actually perform the instructions given to it? This time, let us start at the bottom of the hierarchy, the silicon itself, and build up from there. Stay with me now!

——————–

P-N Junctions

If certain types of impurities (boron, aluminum or gallium, for example) are added (called doping) to the semiconductor material, in this case silicon, it turns into what is known as P-type silicon. This type of silicon has deficiencies of valence electrons called “holes.” Another type of silicon which can be produced by doping it with different impurities (antimony, arsenic or phosphorous, for example) is known as N-type silicon, and this has an excess of free electrons, greatly increasing the intrinsic conductivity of the semiconductor material. The interesting thing about this is that one can place these two materials in contact with one another, and this “junction” behaves differently than either of the two types of silicon by itself. A P-N junction allows an electric current to flow in one direction, but not in the other. This device is known as a diode.

Transistors

A transistor is a device with three terminals (a triode–like the glass vacuum tubes of old): the base, the collector, and the emitter. In this device, the current flowing at the collector is controlled by the current between the base and the emitter. Transistors can be used as amplifiers, but more importantly in the case of computers, as switches. If you take two P-N junctions and combine them, creating a kind of sandwich, you get either an NPN junction or a PNP junction. These both then function as types of transistors, specifically bipolar junction transistors (BJT). In the case of the NPN junction type transistor, the N on one side acts as the emitter, the P is the base, and the other N is the collector. Refer to the diagram above and click here for more info about how exactly these work in terms of the underlying electronics.

Digital Logic Gates

Once we have transistors, we can do something very neat with them: we can combine them into what are known as logic gates. These are best explained by example. Imagine a device with two inputs and one output which behaves in the following way: if a voltage is applied to both inputs, the voltage is also present at the output, otherwise, the output remains at zero. (So if neither or only one of the inputs has a voltage present, the output is zero.) This is known as an AND gate, because its output is positive if and only if the first input AND the second input are “on.” (This “on” state of high voltage usually is used to represent the number 1, while no voltage, or low voltage, is used to represent the number 0.) Similarly, the output of an OR gate is “1” if either the first input OR the second input OR both of the inputs are “1”. (Still with me? Good. All kinds of exciting stuff is coming up.) A NOT gate simply reverses a 1 to a 0 and vice versa. There are other logic gates, but we won’t bother with them because they can all be simulated by combinations of something called a NAND gate. This is just an AND gate followed by a NOT gate. In other words its output is 0 only if both inputs are 1, otherwise its output is always 1. (See the “truth table” at the right. A and B are the inputs and X is the output.)

The really cool thing here is that one can combine two of the transistors discussed in the previous section to form a NAND gate. (See the diagram at right for how they are connected together.) And as I mentioned before, NAND gates can then be connected together in ways that can simulate any other kind of logic gate.
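Here is the same idea sketched one level lower, again in Python and again only as a hedged illustration: I am assuming the common arrangement in which two NPN transistors sit in series between the output node and ground, with a pull-up resistor to the supply (the actual diagram may differ in detail), and I am treating each transistor as an ideal on/off switch rather than a real analog device:

```python
def npn_conducts(base):
    """Idealized NPN transistor used as a switch: it conducts between
    collector and emitter only while its base is driven high."""
    return base == 1

def nand_from_transistors(a, b):
    """Two idealized NPN transistors stacked in series between the output
    node and ground, with a pull-up to the supply voltage. The output is
    pulled low (0) only when BOTH transistors conduct, i.e. when both
    inputs are high; otherwise the pull-up holds it high (1)."""
    path_to_ground = npn_conducts(a) and npn_conducts(b)
    return 0 if path_to_ground else 1

# Sanity check: this behaves exactly like the NAND truth table.
for a in (0, 1):
    for b in (0, 1):
        assert nand_from_transistors(a, b) == (0 if (a and b) else 1)
```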

Not only that, there are ways of connecting NAND gates together to implement any binary digital function for which we can supply a truth table, with as many inputs and outputs as needed. This is known as digital logic, and we can use it, for example, to add two binary numbers, each consisting of some fixed number of zeroes and ones. As I am sure you know, any number we ordinarily write in decimal (or in any other base) can equally well be written in binary, so this is a very powerful way of manipulating numbers. In fact we can do many amazing things with these gates, including evaluating any statement of propositional logic. This is really the conceptual heart of computing.
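To make the claim about the universality of NAND gates concrete, here is a short Python sketch, with names and structure of my own choosing, that builds NOT, AND, OR and XOR out of nothing but a NAND function and then wires them into a full adder and a ripple-carry adder for two binary numbers (this is one standard gate-level construction, not the only possible one):

```python
def nand(a, b):
    return 0 if (a and b) else 1

# Every other gate can be simulated with NANDs alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    partial = xor_(a, b)
    sum_bit = xor_(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least
    significant bit first."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 + 3 = 9: binary 110 + 011 = 1001 (written least significant bit first).
print(add_binary([0, 1, 1], [1, 1, 0]))  # -> [1, 0, 0, 1]
```

Notice that every operation in the sketch ultimately bottoms out in calls to nand, which is the whole point: supply enough NAND gates and the wiring between them, and any truth table can be realized.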

By the way, the standard digital logic symbol for a NAND gate is shown here on the right. (The two inputs are on the left, the output is on the right.)

Now, you should have at least a rough idea of how we can use bits of silicon to do things like add and subtract binary numbers by using voltages to represent zeroes and ones. But what do we do with the result? In other words, where do we store things? This brings us to the other major component of computing: memory.

Flip-Flops

A flip-flop is a device which can be used to store one bit (a zero or a one) of information. Can you guess how flip-flops are made? Yep, you got it: following our procedure of building each layer out of the one below it, we combine NAND gates in ingenious ways to construct them.

There are various types of flip-flops. A flip-flop usually has one or two inputs, an output, and an input from a clock signal. (This is why computers must have clocks, and it is the speed of these clocks that is quoted when you are told that your laptop runs at, say, 1.9 gigahertz, which means the clock signal ticks back and forth between 0 and 1 about 1.9 billion times per second.) I will describe here a simple type of flip-flop called an SR (or Set/Reset) flip-flop. This is how Wikipedia describes it:

The "set/reset" flip-flop sets (i.e., changes its output to logic 1, or retains it if it's already 1) if both the S ("set") input is 1 and the R ("reset") input is 0 when the clock is strobed. The flip-flop resets (i.e., changes its output to logic 0, or retains it if it's already 0) if both the R ("clear") input is 1 and the S ("set") input is 0 when the clock is strobed. If both S and R are 0 when the clock is strobed, the output does not change. If, however, both S and R are 1 when the clock is strobed, no particular behavior is guaranteed. This is often written in the form of a truth table. [See the table at right.]

So, for example, if I want to store a 1 as the output of the flip-flop, I would put 1 on the S input and 0 on the R input. When the clock strobes (flips up to 1), the flip-flop will set its output to 1. I know this sounds confusing, but just reread it until you are convinced it works. Similarly, I can reset it to the zero state by putting 1 on the R input and 0 on the S input. So how are these things constructed out of NAND gates?

I hereby present to you, the SR flip-flop in all its immense digital logic beauty:

[Diagram: an SR flip-flop built from NAND gates]
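If it is easier to trace the behavior in code than in the diagram, here is a minimal Python sketch of a clocked SR flip-flop built from nothing but a NAND function. To keep it short I am using the simpler four-gate "gated latch" arrangement (two NANDs letting the clock through, feeding a cross-coupled NAND pair), whereas the diagram above uses eight gates; the input/output behavior matches the description quoted earlier:

```python
def nand(a, b):
    return 0 if (a and b) else 1

class GatedSRFlipFlop:
    """Level-triggered SR flip-flop made of four NAND gates: a logical
    simulation only, with no modeling of real propagation delays."""

    def __init__(self):
        self.q, self.q_bar = 0, 1  # start out holding a 0

    def strobe(self, s, r, clk=1):
        # Input stage: the clock gates S and R through two NAND gates.
        s_n = nand(s, clk)
        r_n = nand(r, clk)
        # Output stage: cross-coupled NAND latch. Re-evaluate the
        # feedback loop until it settles.
        while True:
            new_q = nand(s_n, self.q_bar)
            new_q_bar = nand(r_n, new_q)
            if (new_q, new_q_bar) == (self.q, self.q_bar):
                break
            self.q, self.q_bar = new_q, new_q_bar
        return self.q

ff = GatedSRFlipFlop()
print(ff.strobe(s=1, r=0))  # set   -> output becomes 1
print(ff.strobe(s=0, r=0))  # hold  -> output stays 1
print(ff.strobe(s=0, r=1))  # reset -> output becomes 0
```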

If you bought a computer recently, it may well have come with a billion bytes of internal RAM. It takes eight flip-flops to hold one byte of information, and, as you can see, it takes eight NAND gates to make this one basic flip-flop, which holds one bit in memory. That's 128 transistors for a byte of data! Now you know why the original ENIAC computer, which used vacuum tubes instead of transistors, filled a large hall. These days, we can put billions of transistors on a single small silicon chip. (There are ways to make this far more efficient; my example is only for rough illustrative purposes.)

There are other flip-flops (such as the JK flip-flop) which eliminate the uncertain state of the SR flip-flop when both inputs are 1. There are other improvements and efficiencies which I won't get into here. (That would be like getting into the choice of materials for head gaskets while trying to explain how an internal combustion engine works.)

——————–

So, starting with just simple bits of silicon, we have seen how we build layer upon conceptual layer (there are armies of engineers and scientists who specialize in each) until we have a processor which can perform arithmetic and logical functions, as well as a memory which can hold the results (or anything else). This is pretty much it! These are the elements which are used to design a machine language for a particular CPU (the Pentium 4, say). And I have already described the software layers which sit on top of the hardware. I am sure it is obvious that there is much more (libraries-full) to every part of this (for example, I have said nothing about what an operating system does as part of the software layers), but broadly and conceptually speaking, this is about it. If you have followed what I have laid out, you now know how electrons zipping about on little pieces of silicon can right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather system, and do the million other things computers can do.

I am running out of time and space in this column, so I will continue with Part II next time, on April 17th. Look out for it then, and have a great week!

NOTE: Part II is here. My other Monday Musing columns can be found here.

Rx: Thalidomide and Cancer

Rock Brynner, 54, historian, writer, former road manager for The Band and for Bob Dylan, and son of the late actor Yul Brynner, knows both sides of the story of the drug thalidomide. In 1998, after suffering for five years from a rare immune disorder, pyoderma gangrenosum, Rock Brynner took thalidomide and went into remission. With Dr. Trent Stephens, he wrote "Dark Remedy," a history of thalidomide. "I didn't write the book because I had taken thalidomide," Mr. Brynner said. He looks and sounds very much like his famous father. "I did it as a historian because this was a story that needed telling."

The story of thalidomide is not only worth telling, it has gotten substantially more exciting even since 2001, when the interview with Rock Brynner (RB) was reported in the New York Times by Claudia Dreifus (CD). The unique anti-inflammatory properties of thalidomide have been harnessed for treating diseases as varied as multiple sclerosis, arthritis, leprosy, a variety of cancers, AIDS, and many other chronic and debilitating illnesses such as the pyoderma gangrenosum from which Mr. Brynner suffers. The story began in the 1950s. Since pathogens were considered to be the underlying cause of most human diseases, scientists were racing to find new antibiotics. At this time an ex-Nazi officer, Heinrich Mückter, became head of the research program at the company Chemie Grünenthal and, working with Wilhelm Kunz, obtained what looked like a promising compound. Unfortunately, the new drug, which they named thalidomide, had no effect as an antibiotic, anti-histamine or anti-tumor agent in rats and mice; in fact, they could not find a dose large enough to kill the animals. A logical conclusion would have been that the drug had no effect. The investigators, however, concluded that the drug had no side effects. The question, of course, was what the drug could be used for. Because its structure resembled that of the barbiturates, thalidomide was tried as a sleeping pill and was indeed found to be effective, eventually being sold in 46 European countries as the safest sedative.

Reassured by its apparent safety record in animals, and because of its anti-emetic effects, pregnant women began to take thalidomide freely as a cure for morning sickness. This is when its catastrophic effects began to surface: infants were born with flipper-like limbs, their hands and feet attached to the body without arms or legs. These children later became known as thalidomiders.

RB: As a historian, I look at thalidomide in its context. The 1950’s were a time of unquestioning infatuation with science. Science and technology had defeated the fascist threat. In the cold war, science was seen as protecting our lives. The thalidomide scandal exposed us for the first time to the idea that powerful medicines can destroy lives and deform babies. Before that, medical folklore held that nothing injurious could cross the placenta.

As many as 20% of adults taking thalidomide began to experience tingling and burning in their fingers and toes, with signs of nerve damage. Tragically, 40,000 individuals suffered from peripheral neuritis, and 12,000 infants were deformed by thalidomide, 5,000 of them surviving past childhood, before the drug was finally withdrawn. It was later shown that the reason the animal studies did not reveal any side effects is that thalidomide is not absorbed in rats and mice. Thanks to the heroic stance taken by Dr. Frances Kelsey at the FDA, thalidomide was never approved for use in the US, a stance for which she won the President's Award for Distinguished Federal Civilian Service in 1962.

CD: Why did you take thalidomide?

RB: I was fighting for my life, as almost everyone who comes to thalidomide is. Everything else paled beside that. In the film version of Dostoyevsky's "Brothers Karamazov," Dmitri Karamazov wakes a pawnbroker, who says to him, "It's late." To which Dmitri answers, "For one who comes to a pawnbroker, it is always late." Well, I was at the pawnbroker's, and it was late. For five years, I had battled a mysterious, rare disease, pyoderma gangrenosum, where huge wounds on my legs kept growing larger and wouldn't heal. I had taken, at different times, cortisone, methotrexate, cyclosporine; none worked for long. My immune system was tearing up my skin anywhere I had a wound. Thinking practically, I was planning to end my life because, if we couldn't stop this, all my skin would be eaten away. Then my dermatologist mentioned anecdotal reports from Europe that thalidomide had been effective with pyoderma. I went to the medical library and read all I could. The rationale made sense: I had this autoimmune condition, in which one immune element, T.N.F.-alpha, was running amok in me for reasons unknown. Thalidomide represses that T.N.F.-alpha response. Fortunately, thalidomide did work for me.

In 1964, Dr. Jacob Sheskin, a Lithuanian Jew, was working with lepers in Jerusalem when he saw an extremely debilitated patient suffering from erythema nodosum leprosum (ENL), a painful complication of leprosy, who had been unable to sleep because of severe pain. Dr. Sheskin found an old bottle of thalidomide in his medicine cabinet and gave two tablets to the patient, who then slept better than he had in months. After another two tablets the following night, the patient's lesions began to heal, and it is to Dr. Sheskin's credit that he made the association between this dramatic improvement and thalidomide. He had to contact Mückter to obtain thalidomide for a larger study. Eventually, the World Health Organization (WHO) confirmed total remission of ENL in 99% of the thousands of lepers treated in 52 countries. This is how and why, despite the sickening medical catastrophe associated with thalidomide, the drug never disappeared completely and was approved by the FDA for the treatment of ENL in the USA in 1998.

CD: A personal question. You are the son of Yul Brynner. As I was reading your book, I wondered if it was difficult to form an identity that was clearly your own.

RB: Well, I've had a separate identity for some time now. At one time or another I've written and starred in a one-man show on Broadway, earned an M.A. in philosophy and a Ph.D. in history, was bodyguard to Muhammad Ali, road manager for The Band and Bob Dylan and computer programmer for Bank of America. I've also written six books. My latest, about the subjective experience of time, is going out to a handful of publishers next month. These interests were all driven by my voracious curiosity more than a search for identity. Yes, it's difficult for the children of iconic figures to establish independent identities. But with all the suffering in this world, I wouldn't shed too many tears for those who had privileged youths. I had wonderful parents, especially through childhood. Later on, they both went a little crazy at times.

Thalidomide has now been tried in more than 130 human diseases, and at least 30 different mechanisms of action have been ascribed to the drug. Yet the precise manner in which it exerts its anti-neoplastic effect remains unknown. In 1991, Dr. Gilla Kaplan of Rockefeller University in New York showed that TNF levels were very high in the blood and lesions of leprosy patients and that thalidomide reduced these levels by as much as 70%. In addition, Dr. Judah Folkman at Harvard Medical School showed that thalidomide can arrest the formation of new blood vessels by shutting off some necessary growth factors. The teratogenic effects on the fetus, which can follow the ingestion of a single tablet of thalidomide at the wrong time (a 7-10 day window during the first trimester of pregnancy), turn out to be due to this same ability of thalidomide to stop the formation of new blood vessels, or neo-angiogenesis. Finally, the drug also has a variety of effects on the immune system.

I have written previously about how cancer cells alter their microenvironment in such a way that it supports their growth at the expense of that of their normal counterparts. Such alterations may involve angiogenesis, production of TNF, and abnormalities of immune regulatory cells, some of which are also the source of TNF. Thalidomide is capable of affecting all three of these abnormalities in the malignant microenvironment, as well as having a direct effect on the cancer cells themselves. True to form, however, the introduction of thalidomide into cancer therapy did not happen as a result of logical planning, but rather, dramatically, as the result of one woman's persistence. The wife of a 35-year-old patient suffering from multiple myeloma, a hematologic malignancy with evidence of increased blood vessels in the bone marrow, was frantically searching for ways to save her husband. During her research, she came across Dr. Folkman, who advised her to try thalidomide. She convinced her husband's oncologists in Little Rock to do so. Although the patient himself did not benefit from the drug because of the advanced stage of his disease, several other patients treated subsequently did.

At the same time, our group had been investigating another hematologic malignancy, the pre-leukemic disorders called myelodysplastic syndromes (MDS). We had demonstrated that the primary pathology underlying the low blood counts in this disease is an excessive death of bone marrow cells caused by high TNF levels. In addition, there is evidence of marrow neo-angiogenesis in MDS. We hypothesized that thalidomide could be a useful agent in this disease and, in 1998, treated 83 MDS patients, showing that a subset responded; the majority of responders went from being heavily transfusion dependent to being transfusion independent.

CD: Do you think there will ever be a time when thalidomide stops being such a charged word?

RB: No. Because of its threat, everyone is working hard to keep the threat of thalidomide well known, especially Randy Warren, a Canadian thalidomide victim. He was the one who insisted that a picture of a deformed baby be on every package, that patients be obliged to watch a tape of a victim speaking and that the name never be changed or disguised with a euphemism. First and foremost, thalidomide deforms babies. Second, remarkably, it can save lives and diminish suffering. But everyone is working to eliminate thalidomide. As long as it exists, there’s a threat.

Thankfully, a safer substitute has now been developed. This drug, called Revlimid (lenalidomide), is less toxic and more potent than the parent drug thalidomide and is proving highly beneficial to patients with MDS and multiple myeloma. Most importantly, there are no untoward effects on the growing embryo. In a surprising twist, MDS patients who have a specific abnormality affecting chromosome 5 appear to be especially responsive to Revlimid, and the drug has recently received FDA approval for use in this type of MDS. Maybe thalidomide can finally be retired forever. As Randy Warren said, "When that day comes, all those involved in the suffering can gather together for thalidomide's funeral."

Recommended reading:

  • Stephens T, Brynner R. Dark Remedy: The Impact of Thalidomide and Its Revival as a Vital Medicine. Perseus Publishing, Cambridge, MA, 2001.
  • Raza A et al. Thalidomide produces transfusion independence in long-standing refractory anemias of patients with myelodysplastic syndromes. Blood 98(4):958-965, 2001.
  • List AF et al. Hematologic and Cytogenetic (CTG) Response to Lenalidomide (CC-5013) in Patients with Transfusion-Dependent (TD) Myelodysplastic Syndrome (MDS) and Chromosome 5q31.1 Deletion: Results of the Multicenter MDS-003 Study. ASCO, May 7, 2005.
  • Raza A et al. Lenalidomide (CC-5013; Revlimid™)-Induced Red Blood Cell (RBC) Transfusion-Independence (TI) Responses in Low-/Int-1-Risk Patients with Myelodysplastic Syndromes (MDS): Results of the Multicenter MDS 002 Study. 8th International Symposium on Myelodysplastic Syndromes, May 12-15, 2005, Nagasaki, Japan.

All of my Rx columns can be seen here.

Monday Musing: The Radetzky March

It's been noticed by more than one person that Walter Benjamin had a melancholy streak. But Benjamin's melancholy has often been misunderstood as a form of nostalgia, a lament for things lost to the relentless march of history and time. It's true, of course, that some melancholics are nostalgic. Nothing prevents the two moods from going together. But Walter Benjamin's melancholy wasn't that kind at all. He happened to think, surprisingly enough, that melancholy is at the service of truth.

That’s quite a claim. It sounds both grand and unapproachable. For Benjamin, though, it was almost a matter-of-fact proposition; it was so intuitive to him, it came as second nature. Benjamin thought that melancholy is at the service of truth because he thought that things, especially complicated things like periods of history and social arrangements, are hard to understand until they’ve already started to fall apart. The shorthand formula might be: truth in ruins. The type of person who sifts through ruins is the melancholic by definition. Such a person is interested in the way that meaning is revealed in decay. In a way, the Benjaminian melancholic is darker even than the nostalgist because the nostalgist wants to bring something back, whereas the melancholic is best served by the ongoing, pitiless work of death.

Benjamin was always fond of Dürer's engraving Melencolia I. In it, a figure sits amidst discarded and unused tools and objects of daily life. It appears that the world in which those tools made sense, the world in which they had a purpose, use, or meaning, has somehow faded away. The objects lie there without a context, and the melancholic figure who gazes at them views them with an air of contemplation. The collapse of the world has become an opportunity for reflection. Truth in ruins.

It's impossible not to think that Walter Benjamin was so fascinated by melancholy, ruins, and truth because he himself had come of age in a period when a world was passing away. For Central Europeans (and, to a less extreme extent, the West in general), the end of the 19th century and the beginning of the 20th brought the collapse of an entire world. In one of the more moving and epic sentences ever written about that collapse as it culminated in the Great War, Benjamin once penned the following: "A generation that had gone to school in horse-drawn streetcars now stood in the open air, amid a landscape in which nothing was the same except the clouds and, at its center, in a force field of destructive torrents and explosions, the tiny, fragile, human body."

***

All of this is by way of a preface to the fact that I just finished reading Joseph Roth’s amazingly brilliant, beautiful, sad novel, The Radetzky March. The Radetzky March follows the fortunes of three generations of the Trotta family as the 19th century winds down into the seemingly inevitable, though nevertheless shocking, assassination at Sarajevo. As the critic James Wood notes in his typically powerful essay “Joseph Roth’s Empire of Signs,”

In at least half of Roth’s thirteen novels comes the inevitable, saber-like sentence, or a version of it, cutting the narrative in two: ‘One Sunday, a hot summer’s day, the Crown Prince was shot in Sarajevo’.

Roth is always writing through that moment, through the shot that cracked out on that hot summer's day. As Wood points out, Roth became the self-appointed elegist of the empire that had come to an end at Sarajevo. The men of the Trotta family are bound to that empire in a descending line of meaninglessness and helplessness that itself tracks the dissolution and collapse of the world within which they lived. The very song, "The Radetzky March," becomes a mournful ruin in sound. At the beginning of the novel it can still stand as a symbol for the ordering of life that holds the world of the Austro-Hungarian Empire together. By the end of the novel, it is a relic from a bygone age, the last thing that the youngest member of the Trotta family hears as he is gunned down ignominiously in the brutal and senseless fighting that opens the First World War.

What a profoundly and beautifully melancholic work. All the more so because it is melancholy in the service of Benjamin's truth and not in the service of nostalgia. The Radetzky March is about how meaning operates, about how human beings come to see themselves as functioning within a world that coheres precisely as a world. In the end, Roth is essentially indifferent as to whether that world was a good or a bad one. Like all worlds, it can only cohere for so long. Instead, he focuses on laying bare its nature and its functioning in the moments when it began to break apart. Here, he is like the melancholic figure in Dürer's engraving. The 'tools' of the Austro-Hungarian Empire lie around him, discarded by history, while Roth sifts through the ruins, contemplating what they were and how they worked.

That is something that Wood gets a little bit wrong, I think, in his otherwise brilliant essay. Wood takes the elegiac moments in Roth’s writing, which invariably come from the mouths of those serving the Empire, as words of longing and approval that are endorsed by Roth. But there is something more subtle and complicated going on. Wood touches on it briefly in his comment about Andreas, an organ grinder in Roth’s Rebellion. Wood writes, “It is the empire that gives him authority to exist, that tells him what to do and promises to look after him. In Roth’s novels, marching orders are more than merely figurative. They are everything.”

To put it in Kantian terms for a moment, that is exactly what Roth is doing, showing the Empire as ‘the everything’, the transcendental horizon within which human beings understand themselves and their relations to everyone else. Like Walter Benjamin, Roth has adopted this broad transcendental framework while jettisoning the strict a priori method that made Kant’s transcendental method a-historical and purportedly universal. Roth has come to see, indeed witnessed with his own eyes, that transcendental horizons of meaning are themselves historical; they fade away, they fall into ruins, and are reconstituted as something new.

It’s kind of interesting in this quasi-Kantian vein to reflect that Roth takes a marching song as his symbol for the coherence of the transcendental horizon of meaning. Kant himself started with space and time, noting rather reasonably that without space and time, you are without a framework for apprehending anything at all. Things have to be ‘in’ something, transcendentally speaking, and space and time are the broadest, most abstract categories of ‘inness’ that one is likely to find. Since Kant was after the broadest and most universally applicable set of rules that govern knowledge of the external world, it seemed a lovely place to start.

But Kant wasn’t much of a melancholic. The Sage of Königsberg thought that he could provide his set of categories for the understanding and that would be that. The content gets filled in later. History is always a posteriori, a matter of particulars. Roth’s transcendental ground, by contrast, is shot through with content and history. It’s a march, a specific song from a specific time and place. But it is no less transcendental for being so. For what is a march but a means for ordering space and time? The Radetzky March is thus more than a symbol for the ordering of the Austro-Hungarian world: it is part and parcel of that very ordering. It’s a transcendental object made palpable and tangible. And it’s one that gives up its transcendental secrets precisely as it fades into ruin. As Benjamin once wrote, “In the ruins of great buildings the idea of the plan speaks more impressively than in lesser buildings, however well preserved they are.” That’s the method of the melancholic, the historical transcendentalist. It’s fitting that it was put into practice at its highest level by a novelist chronicling the end of his world.