Monday Musing: Minor Thoughts on Cicero

Cicero may very well have been the first genuine asshole. He wasn’t always appreciated as such. During more noble and naïve times, people seem to have accepted his rather moralistic tracts like ‘On Duties’ and ‘On Old Age’ as untainted wisdom handed down through the eons. This, supposedly, was a gentle man doing his best in a corrupted age. That kind of interpretation was easier to swallow during medieval and early Renaissance times because many of his letters had been lost and forgotten. But Petrarch found some of them again around 1345, and the illusion of Cicero’s detached nobility became distinctly more difficult to pass off. Reading his letters, you can’t help but feel that Cicero really was a top-notch asshole. He schemed and plotted with the best of them. His hands were never anything but soiled.

Now, I think it may be clear that I come to praise Cicero, not to bury him. Even in calling him an asshole I’m handing him a kind of laurel. Because it is the particular and specific way that he was an asshole that picks him up out of history and plunks him down as a contemporary, as someone even more accessible after over two thousand years than many figures of the much more recent past. Perhaps this is a function of the way that history warps and folds. The period of the end of the Roman Republic in the last century BC speaks to us in ways that even more recent historical periods do not. Something about its mix of corruption and verve, cosmopolitanism and rank greed, self-destructiveness and high-minded idealism causes the whole period to leap over itself. And that is Cicero to a ‘T’. He is vain and impetuous, self-serving and conniving. He lies and cheats and he puffs himself up in tedious speech after tedious speech. It’s pretty remarkable. But he loved the Republic for what he thought it represented and he dedicated his life, literally, to upholding that idea in thought and in practice.

In what may be the meanest and most self-aggrandizing public address of all time, the Second Philippic Against Antony, Cicero finds himself (as usual) utterly blameless and finds Antony (as usual) guilty of almost every crime imaginable. It’s a hell of a speech, called a ‘Philippic’ because it was modeled after Demosthenes’ speeches against King Philip of Macedon, which were themselves no negligible feat of nasty rhetoric.

One can only imagine the electric atmosphere around Rome as Cicero spilled his vitriol. Caesar had only recently been murdered. Sedition and civil war were in the air. Antony was in the process of making a bold play for dictatorial power. Cicero, true to his lifelong inclinations, opposes Antony in the name of the restoration of the Republic and a free society. In his First Philippic, Cicero offers a comparatively mild rebuke of Antony. Antony responds with a scathing attack. This unleashes the Second Philippic. “Unscrupulousness is not what prompts these shameless statements of yours,” he writes of Antony, “you make them because you entirely fail to grasp how you are contradicting yourself. In fact, you must be an imbecile. How could a sane person first take up arms to destroy his country, and then protest because someone else had armed himself to save it?”

Cicero’s condescension is wicked. “Concentrate, please—just for a little. Try to make your brain work for a moment as if you were sober.” Then he gets nasty. Of Antony’s past: “At first you were just a public prostitute, with a fixed price—quite a high one too. But very soon Curio intervened and took you off the streets, promoting you, you might say, to wifely status, and making a sound, steady, married woman of you. No boy bought for sensual purposes was ever so completely in his master’s powers as you were in Curio’s.”

Cicero finishes the speech off with a bit of high-minded verbal self-sacrifice:

Consider, I beg you, Marcus Antonius, do some time or other consider the republic: think of the family of which you are born, not of the men with whom you are living. Be reconciled to the republic. However, do you decide on your conduct. As to mine, I myself will declare what that shall be. I defended the republic as a young man, I will not abandon it now that I am old. I scorned the sword of Catiline, I will not quail before yours. No, I will rather cheerfully expose my own person, if the liberty of the city can be restored by my death.

May the indignation of the Roman people at last bring forth what it has been so long laboring with. In truth, if twenty years ago in this very temple I asserted that death could not come prematurely upon a man of consular rank, with how much more truth must I now say the same of an old man? To me, indeed, O conscript fathers, death is now even desirable, after all the honors which I have gained, and the deeds which I have done. I only pray for these two things: one, that dying I may leave the Roman people free. No greater boon than this can be granted me by the immortal gods. The other, that every one may meet with a fate suitable to his deserts and conduct toward the republic.

If the lines are a bit much, remember that Cicero was to be decapitated by Antony’s men not long afterward, and, for good measure, to have his tongue ripped out of his severed head by Antony’s wife, so that she might get final revenge on his powers of speech. It’s not every asshole that garners such tributes.

***

Around the time that he re-discovered some of Cicero’s letters, Petrarch started writing his own letters to his erstwhile hero. In the first, Petrarch writes,

Of Dionysius I forbear to speak; of your brother and nephew, too; of Dolabella even, if you like. At one moment you praise them all to the skies; at the next fall upon them with sudden maledictions. This, however, could perhaps be pardoned. I will pass by Julius Caesar, too, whose well-approved clemency was a harbour of refuge for the very men who were warring against him. Great Pompey, likewise, I refrain from mentioning. His affection for you was such that you could do with him what you would. But what insanity led you to hurl yourself upon Antony? Love of the republic, you would probably say. But the republic had fallen before this into irretrievable ruin, as you had yourself admitted. Still, it is possible that a lofty sense of duty, and love of liberty, constrained you to do as you did, hopeless though the effort was. That we can easily believe of so great a man. But why, then, were you so friendly with Augustus? What answer can you give to Brutus? If you accept Octavius, said he, we must conclude that you are not so anxious to be rid of all tyrants as to find a tyrant who will be well-disposed toward yourself. Now, unhappy man, you were to take the last false step, the last and most deplorable. You began to speak ill of the very friend whom you had so lauded, although he was not doing any ill to you, but merely refusing to prevent others who were. I grieve, dear friend, at such fickleness. These shortcomings fill me with pity and shame. Like Brutus, I feel no confidence in the arts in which you are so proficient.

Indeed, it seems that Cicero was just a fickle man looking out for Number One, and maybe he’d stumble across a little glory in the process. Still, even that isn’t entirely fair. As Petrarch admits in his disappointed letter, some concept of the Republic and human freedom was driving Cicero all along. But the Republic was always a sullied thing, even from the beginning. The concept of freedom was always mixed up with self-interest and the less-than-pure motivations of human creatures. Cicero got himself tangled up in the compromised world of political praxis precisely because he was uninterested in a concept of freedom that hovered above the actual world with practiced distaste and a permanent scowl. I like to think of him as an asshole because I like to think of him as one of us, neck-deep in a river of shit and trying his best to find a foothold, one way or another. Dum vita est, spes est (‘While there’s life, there’s hope’).

Monday, April 17, 2006

Lunar Refractions: “Our Biggest Competitor is Silence”

I really wish I knew the name of the Muzak marketer who provided this quote, which appeared in the 10 April issue of the New Yorker. Silence is one of my dearest, rarest companions, and this marketer unexpectedly emphasized its power by crediting it as the corporation’s chief competitor—no small role for such a subtle thing.

My initial, instinctual, and naturally negative reply was that, though this claim might be comforting to some, it’s also dead wrong. In most places, silence lost the battle long ago. A common strain that now unites what were once very disparate places and cultures seems to be the increasing endangerment—and in some cases extinction—of silence. I think about this a lot, especially living in a place where for much of the day loud trucks idle at length below my apartment, providing an aggravating background hum that I’ve never quite managed to relegate to the background. I lost fifteen minutes the other day fuming about the cacophonous chorus of car alarm, cement truck, and blaring car radio that overpowered whatever muffling defense my thin windows could lamely offer, not to mention the work I was trying to concentrate on. I’d buy earplugs, but noise of this caliber is also a physical, pounding presence. I admit that this sensitivity is my own to deal with, but something makes me doubt I’m alone in New York; in certain neighborhoods, and often outside hospitals, signs are posted along the street: “Unnecessary Noise Prohibited.” I wonder who defines the term unnecessary, and how. Other signs warn drivers that honking the car horn in certain areas can be punished with hefty fines. A couple of years ago the same magazine cited above ran a piece—I believe it was in the Talk of the Town section—covering a local activist working to ban loud car alarms. Since silent alarms are now readily available, and have proven more effective, there really is no need for these shrill ones. My absolute favorites are those set off by the noise of a passing truck, just as one apartment-dweller might crank up the volume on the stereo to drown out a neighbor’s noise. Aural inflation runs rampant.

But the comment of the Muzak marketer wasn’t enough to get me to set fingers to keyboard; what finally did it was a day-hike I took in the hills of the upper Hudson valley on Easter Sunday. I thought twice about escaping the city on this holiday, since—no matter how agnostic, multicultural, or 24/7 this city might be—such days always bring a rare calm. For just a few precious hours we’re spared the sound of garbage trucks carrying our trash away from us while replacing it with a different sort of pollution, and spared many other noisy byproducts of our so-called progress. As I was walking through the woods, a wind kicked up, rustling the leaves packed down by winter snow, and I was reminded of just how loud the sound of wind through bare tree branches overhead can be. Most people would probably say that wind in trees is quieter, and less disturbing, than more urban sounds, but yesterday proved that isn’t always the case.

So I set out to briefly investigate silence—why some people can’t seem to find any, why so many do everything in their power to rid themselves of it, and why many just don’t seem to give it any thought, unobtrusive as it is. It has played a major role in many religions, from the tower of silence of Persian Zoroastrianism to the Trappist monks’ vows of silence; one could speculate, in a cursory way, that the rise of secular culture was accompanied by a rise in volume. While looking at Manet’s etchings recently, I came across a curious coincidence that would support such a conclusion. While the painter of Olympia has often been called the least religious of painters, an etching of his done around 1860 (in the print collection of the New York Public Library) portrays a monk, tablet or book in hand and finger held to lips, with the word Silentium scrawled below. Given the connotative relationship between silence and omission, oblivion, and death, Manet’s etching has interesting implications for both silence and religion as they were seen in nineteenth-century Paris. If not secularization, perhaps industrialization ratcheted everything up a few decibels.

Silence—of both good and bad sorts—runs through everything, leaving traces throughout many languages. There are silent films, which exist only thanks to a former lack of technology, and were usually accompanied by live music. Some people’s ideal mate is a classic man of the strong, silent type—adjectives never jointly applied to a woman. A silentiary is (well, was, since I doubt many people go into such a line of work nowadays) a confidant, counselor, or official who maintains silence and order. Cones of silence appear in politics, radar technology, nineteen-fifties and sixties television shows, and science fiction novels. After twenty years of creating marvelous music out of what could be derogatively deemed noise, the band Einstürzende Neubauten came out with both a song and album titled “Silence is Sexy.” Early on the band’s drummer, Andrew Chudy, adopted the name N. U. Unruh—a wild play on words that can be connected to a German expressionist poet and playwright, a piece of timekeeping equipment, and, aptly, a riff on the theme of disquiet or unrest.

Getting back to my stroll in the woods: when considering the peace and quiet of a holiday I inevitably turn to poet Giacomo Leopardi’s songs in verse. His thirteenth canto (“La sera del dì di festa,” “The Evening of the Holiday”) laments the sad, weighty quietness left after a highly anticipated holiday. The falling into silence of a street song at the end is a death knell for the past festivities. In keeping with this, his twenty-fifth canto (“Il sabato del villaggio,” “Saturday Night in the Village”) praises Saturday’s energetic sounds of labor in preparation for the Sunday holiday, saving only melancholy words for the day of rest itself and its accompanying quiet. I don’t wish to summarize his rich and very specific work, so I encourage you to have a look at it for yourself. That these poems were written across an ocean and over a century ago attests that silence is not golden for everyone. Were he to live today, Leopardi might well be one of the iPod-equipped masses.

When I found that Leopardi’s opinion differed from my own, I looked to another trustworthy poet for a little support in favor of my own exasperation. Rainer Maria Rilke, in his famous fifth letter to the less famous young poet, written in the autumn of 1903, is evidently dependent on silence:

“… I don’t like to write letters while I am traveling, because for letter writing I need more than the most necessary tools: some silence and solitude and a not too familiar hour…. I am still living in the city… but in a few weeks I will move into a quiet, simple room, an old summerhouse, which lies lost deep in a large park, hidden from the city, from its noises and incidents. There I will live all winter and enjoy the great silence, from which I expect the gift of happy, work-filled hours….”

To break the tie set by Leopardi and Rilke, I turned to another old friend for comfort, and was surprised to find none. Seneca, in his fifty-sixth letter to Lucilius, asserts that it is the placation of one’s passions, not external silence, that gives true quiet:

“May I die if silence is as necessary as it would seem for concentration and study. Look, I am surrounded on every side by a beastly ruckus…. ‘You’re a man of steel, or you’re deaf,’ you will tell me, ‘if you don’t go crazy among so many different, dissonant noises…’. Everything outside of me might just as well be in an uproar, as long as there is no tumult within, and as long as desire and fear, greed and luxury don’t fight amongst themselves. The idea that the entire neighborhood be silent is useless if passions quake within us.”

In this letter he lists the noises that accompany him on a daily basis: the din of passing horse-drawn carriages, port sounds, industrial sounds (albeit those of the first century), neighborhood ball players, singing barbers, numerous shouting street vendors, and even people “who like to hear their own voices as they bathe.” It sounds as though he’s writing from the average non-luxury apartment of today’s cities. His point that what’s important is interior calm, not exterior quiet, exposed my foolishness.

À propos of Seneca and serenity, a friend of mine recently bought an iPod. A year ago we had a wonderful conversation where she offered up her usual, very insightful criticisms of North American culture: “What is wrong with this country? Everyone has a f****** iPod, but so few people have health insurance! Why doesn’t anyone rebel, or even seem to care?” As I walked up to meet her a couple of weeks ago I spotted from afar the trademark white wires running to each ear. “I love this thing. I mean, sure, I don’t think at all anymore, but it’s great!” To say that this brilliant woman doesn’t think anymore is crossing the line, but it’s the kind of hyperbole that nears the truth; if you can fill your ears with constant diversion, emptying the brain is indeed easier. The question, then, is what companies like Muzak and their clients can proceed to fill our minds with once we’re subject to their sounds.

This relates to the ancient sense of otium as well—Seneca’s idea that creativity and thought need space, room, or an empty place and time in which to truly develop. Simply defining it as leisure time or idleness neglects its constructive nature. The idea that, when left at rest, the mind finds or creates inspiration for itself, and from that develops critical thought, is key to why I take issue with all this constructed, mass-marketed sound and “audio architecture.” While it might seem that an atmosphere filled with different stimuli and sounds would spark greater movement, both mental and physical, I think we’ve reached the point where that seeming activity is just that—an appearance, and one that sometimes hides a great void.

In closing, for those interested, we may finally be able to give credit to the Muzak marketer who inspired me. On Tuesday, 18 April, John Schaefer will discuss Muzak on WNYC’s Soundcheck. In the meantime, I’ll leave you with a gem from the September 1969 issue of Poppin magazine. In music critic Mike Quigley’s interview with Alice Cooper, the latter discusses what he looks for between himself and the audience: “If it’s total freedom, I guess the ultimate thing you can go into is total silence between the audience and performer, with the performer projecting something he doesn’t even have to play. A total silence trip is the ultimate.” Even Muzak can’t counter that.

Selected Minor Works: Of the Proper Names of Peoples, Places, Fishes, &c.

Justin E. H. Smith

When I was an undergraduate in the early 1990s, an outraged student activist of Chinese descent announced to a reporter for the campus newspaper: “Look at me! Do I look ‘Oriental’? Do you see anything ‘Oriental’ about me?  No. I’m Asian.”  The problem, however, is that he didn’t look particularly ‘Asian’ either, in the sense that there is nothing about the sound one makes in uttering that word that would have some natural correspondence to the lad’s physiognomy.  Now I’m happy to call anyone whatever they want to be called, even if personally I prefer the suggestion of sunrises and sunsets in “Orient” and “Occident” to the arbitrary extension of an ancient (and Occidental) term for Anatolia all the way to the Sea of Japan.  But let us be honest: the 1990s were a dark period in the West to the extent that many who lived then were content to displace the blame for xenophobia from the beliefs of the xenophobes to the words the xenophobes happened to use.  Even Stalin saw that to purge bourgeois-sounding terms from Soviet language would be as wasteful as blowing up the railroad system built under the Tsar.

In some cases, of course, even an arbitrary sound may take on grim connotations in the course of history, and it can be a liberating thing to cast an old name off and start afresh.  I am certainly as happy as anyone to see former Dzerzhinsky Streets changed into Avenues of Liberty or Promenades of Multiparty Elections.  The project of pereimenovanie, or re-naming, was as important a cathartic in the collapsed Soviet Union as perestroika, or rebuilding, had been a few years earlier.  If the darkest period of political correctness is behind us, though, this is in part because most of us have realized that name-changes alone will not cut it, and that a real concern for social justice and equality that leaves the old bad names intact is preferable to a cosmetic alteration of language that allows entrenched injustice to go on as before–pereimenovanie without perestroika.

But evidently the PC coffin could use a few more nails yet, for the naive theory of language that guided the demands of its vanguard continues to inform popular reasoning as to how we ought to go about calling things.  Often, it manifests itself in what might be called pereimenovanie from the outside, which turns Moslems into Muslims, Farsi into Persian, and Bombay into Mumbai, as a result of the mistaken belief on the part of the outsiders that they are thereby, somehow, getting it right.  This phenomenon, I want to say, involves not just misplaced moral sensitivity, but also a fundamental misunderstanding of how peoples and places come by their names. 

Let me pursue these and a few other examples in detail.  These days, you’ll be out on your ear at a conference of Western Sinologists if you say “Peking” instead of “Beijing.”  Yet every time I hear a Chinese person say the name of China’s capital city, to my ear it comes out sounding perfectly intermediate between these two.  Westerners have been struggling for centuries to come up with an adequate system of transliteration for Chinese, but there simply is no wholly verisimilar way to capture Chinese phonology in the Latin alphabet, an alphabet that was not devised with Chinese in mind, indeed that had no inkling of the work it would someday be asked to do all around the world.  As Atatürk showed with his Latinization of Turkish, and Stalin with his failed scheme for the Cyrillicization of the Baltic languages, alphabets are political as hell. But decrees from the US Library of Congress concerning transliteration of foreign alphabets are not of the same caliber as the forced adoption of the Latin or Cyrillic scripts.  Standardization of transliteration has more to do with practical questions of footnoting and cataloguing than with the politics of identity and recognition.

Another example.  In Arabic, the vowel between the “m” and the “s” in the word describing an adherent of Islam is a damma.  According to Al-Ani and Shammas’s Arabic Phonology and Script (Iman Publishing, 1999), the damma is “[a] high back rounded short vowel which is similar to the English ‘o’ in the words ‘to’ and ‘do’.”  So then, “Moslem” or “Muslim”?  It seems Arabic itself gives us no answer to this question, and indeed the most authentic way to capture the spirit of the original would probably be to leave the vowel out altogether, since it is short and therefore, as is the convention of Arabic orthography, unwritten.

And another example.  Russians refer to Russia in two different ways: on the one hand, it is Rus’, which has the connotation of deep rootedness in history, Glagolitic tablets and the like, and is often modified by the adjective “old”; on the other hand it is Rossiia, which has the connotation of empire and expanse, engulfing the hunter-gatherers of Kamchatka along with the Slavs at the empire’s core.  Greater Russia, as Solzhenitsyn never tires of telling us, consists in Russia proper, as well as Ukraine (the home of the original “Kievan Rus'”), and that now-independent country whose capital is Minsk.  Minsk’s dominion is called in German “Weissrussland,” and in Russian “Belorussiia.”  In other words, whether it is called “Belarus” or “Belorussia,” what is meant is “White Russia,” taxonomically speaking a species of the genus “Russia.”  (Wikipedia tells us that the “-rus” in “Belarus” comes from “Ruthenia,” but what this leaves out is that “Ruth-” itself is a variation on “Rus’,” which, again, is one of the names for Muscovite Russia as well as the local name for White Russia.)

During the Soviet period, Americans happily called the place “Belorussia,” yet in the past fifteen years or so, the local variant, “Belarus,” has become de rigueur for anyone who might pretend to know about the region.  Of course, it is admirable to respect local naming practices, and symbolically preferring “Belarus” over “Belorussia” may seem a good way to show one’s pleasure at the nation’s newfound independence from Soviet domination. 

However (and here, mutatis mutandis, the same point goes for Mumbai), I have heard both Americans and Belarusans say the word “Belarus,” and I daresay that when Americans pronounce it, they are not saying the same word as the natives.  Rather, they are speaking English, just as they were when they used to say “Belorussia.”  Moreover, there are plenty of perfectly innocuous cases of inaccurate naming.  No one has demanded (not yet, anyway) that we start calling Egypt “Misr,” or Greece “Hellas.”  Yet this is what we would be obligated to do if we were to consistently employ the same logic that forces us to say “Belarus.”  Indeed, even the word we use to refer to the Germans is a borrowing from a former imperial occupier –namely, the Romans– and has nothing to do with the Germans’ own description of themselves as Deutsche.

In some cases, such as the recent demand that one say “Persian” instead of “Farsi,” we see an opposing tendency: rather than saying the word in some approximation of the local form, we are expected to say it in a wholly Anglicized way.  I have seen reasoned arguments from (polyglot and Western-educated) natives for the correctness and sensitivity of “Mumbai,” “Persian,” “Belarus,” and “Muslim,” but these all have struck me as rather ad hoc, and, as I’ve said, the reasoning for “Persian” was just the reverse of the reasoning for “Mumbai.”  In any case, monolingual Persian speakers and residents of Mumbai themselves could not care less. 

Perhaps the oddest example of false sensitivity of this sort comes not in connection with any modern ethnic group, but with a race of hominids that inhabited Europe prior to the arrival of Homo sapiens and were wiped out by the newcomers about 29,000 years ago.  In the 17th century, one Joachim Neumann adopted the Hellenized form of his last name, “Neander,” and proceeded to die in a valley that subsequently bore his name: the Neanderthal, or “the valley of the new man.”  A new man, of sorts, was found in that very valley two centuries later, to wit, Homo neanderthalensis.

Now, as it so happens, “Thal” is the archaic version of the German word “Tal.”  Up until the very recent spelling reforms imposed at the federal level in Germany, vestigial “h”s from earlier days were tolerated in words, such as “Neanderthal,” that had an established record of use.  If the Schreibreform had been slightly more severe, we would have been forced to start writing “Göte” instead of the more familiar “Goethe.”  But Johann Wolfgang was a property the Bundesrepublik knew it dared not touch.  The “h” in “Neanderthal” was, however, axed, but the spelling reform was conducted precisely to make German writing match up with German speech: there never was a “th” sound in German, as there is in English, and so the change from “Thal” to “Tal” makes no phonetic difference.

We have many proper names in North America that retain the archaic spelling “Thal”, such as “Morgenthal” (valley of the morning), “Rosenthal” (valley of the roses), etc., and we happily pronounce the “th” in these words as we do our own English “thaw.”  Yet somehow, over the past ten years or so, Americans have got it into their heads that they absolutely must say Neander-TAL, sans voiceless interdental fricative, as though this new standard of correctness had anything to do with knowledge of prehistoric European hominids, as though the Neanderthals themselves had a vested interest in the matter.  I’ve even been reproached myself, by a haughty, know-it-all twelve-year-old, no less, for refusing to drop the “th”.

The Neanderthals, I should not have to point out, were illiterate, and the presence or absence of an “h” in the word for “valley” in a language that would not exist until several thousand years after their extinction was a matter of utter indifference to them.  Yet doesn’t the case of the Neanderthal serve as a vivid reductio ad absurdum of the naive belief that we can set things right with the Other if only we can get the name for them, in our own language, right?  The names foreigners use for any group of people (or prehuman hominids, for that matter) can only ever be a matter of indifference for that group itself, and it is nothing less than magical thinking to believe that if we just get the name right we can somehow tap into that group’s essence and refer to them not by some arbitrary string of phonemes, but as they really are in their deepest and truest essence. 

This magical thinking informs the scriptural tradition of thinking about animals, according to which the prelapsarian Adam named all the different biological kinds not with arbitrary sounds, but in keeping with their true natures.  Hence, the task of many European naturalists prior to the 18th century was to rediscover this uncorrupted knowledge of nature by recovering the lost language of Adam, and thus, oddly enough, zoology and Semitic philology constituted two different domains of the same general project of inquiry.

Some very insightful thinkers, such as Gottfried Leibniz, noticed that ancient Hebrew too, just like modern German, is riddled with corrupt verb forms and senseless exceptions to rules, and sharply inferred from this that Hebrew was no more divine than any vulgate.  Every vocabulary human beings have ever come up with to refer to the world around them has been nothing more than an arbitrary, exception-ridden, haphazard set of sounds, and in any case the way meanings are produced seems to have much more to do with syntax –the rules governing the order in which the sounds are put together– than with semantics –the correspondence between the sounds and the things in the world they are supposed to pick out.

This hypercorrectness, then, is ultimately not just political, but metaphysical as well.  It betrays a belief in essences, and in the power of language to pick these out.  As John Dupré has compellingly argued, science educators often end up defending a supercilious sort of taxonomical correctness when they declaim that whales are not fish, in spite of the centuries of usage of the word “fish” to refer, among other things, to milk-producing fish such as whales.  The next thing you know, smart-ass 12-year-olds are lecturing their parents about the ignorance of those who think whales are fish, and another generation of blunt-minded realists begins its takeover.  Such realism betrays too much faith in the ability of authorities –whether marine biologists, or the oddly prissy postmodern language police in the English departments– to pick out essences by their true names.  It is doubtful that this faith ever did much to protect anyone’s feelings, while it is certain that it has done much to weaken our descriptive powers, and to take the joy out of language. 

Negotiations 7: Channeling Britney

(Note: Jane Renaud wrote a great piece on this subject last week. I hope the following can add to the conversation she initiated.)

When I first heard of Daniel Edwards’ Britney sculpture (Monument to Pro-Life), I was fascinated. What a rich stew: a pop star whose stock-in-trade has been to play the innocent/slut (with rather more emphasis on the latter) gets sculpted by a male artist as a pro-life icon and displayed in a Williamsburg gallery! Gimmicky, to be sure; nonetheless, the overlapping currents of Sensationalism, Irony and Politics were irresistible, so I took myself out to the Capla Kesting Fine Art Gallery on Thursday to have a look.

I am not a fan of pop culture. My attitude toward it might best be characterized as Swiss. In conversation, I tend to sniff at it. “Well,” I have been known to say, “it may be popular, but it’s not culture.” I do admit to a lingering fondness for Britney, but that has less to do with her abilities as chanteuse than it does with the fact that, as a sixteen-year-old boy, I moved from the WASPy northeast to Nashville, Tennessee and found myself studying in a seraglio of golden-haired, pig-tailed Catholic schoolgirls, each one of them a replica of early Britney and each one of them, like her, as common and as unattainable as a species of bird. What can I say? I was sixteen. Despise the sin, not the sinner.

I was curious to know the extent to which this sculpture would be a monument to pop culture—did the artist, Daniel Edwards, fancy himself the next Jeff Koons?—and surprised to discover that, having satisfied my puerile urges (a surreptitious glance at the breasts, a disguised study of the money shot), my experience of the piece was in no way mediated by my awareness that its model was a pop star. “Britney Spears” is not present in the piece, and its precursor is not Koons’ Michael Jackson and Bubbles or Warhol’s silk-screens of Marilyn Monroe. One has to go much further back than that. Its precursor is actually Michelangelo’s Pietà.

In both cases, the spectacular back story (Mary with dead Christ on her lap, Britney with Sean’s head in her cooch) is overwhelmed by the temporal event that grounds it; so that the Pietà is nothing more (nor less) than Mother and Dead Son, and Monument to Pro-Life becomes simply Woman Giving Birth. Where Koons and Warhol empty the role of the artist as creative genius and replace it with artist as mirror to consumer society, Edwards (and Michelangelo well before him) empties the divine (the divinity of Christ, the divinity of the star) and replaces it with the human. Edwards, then, is doing something very tricky here, and if one can stomach the nausea-inducing gimmickry of the work, there’s a lot worth considering.

First of all is the composition of the work. The subject is on all fours, in a position that, as Jane Renaud wryly observed in these pages last week, might be more appropriate for getting pregnant than for giving birth. She is on a bear-skin rug; her eyes are heavily lidded, her lips slightly parted, as though she might be about to moan or to sing. And yet the sculpture is in no way pornographic or even titillating. There is nothing on her face to suggest either pain or ecstasy. The person seems to be elsewhere, even if her body is present, and the agony we associate with childbirth is elsewhere. In fact, with her fingers laid gently into the ears of the bear, not clutching or tearing at them, she seems to be channeling all her emotions into its head. Its eyes are wide open, its mouth agape and roaring. The subject is emptying herself, channeling at both ends, serenely so, a Buddha giving birth, without tension at the front end and without blood or tearing at the rear. The child’s head emerges as cleanly, and as improbably, as a perfect sphere from a perfect diamond. This is a revolution in birthing. Is that the reward for being pro-life? Which brings us to the conceptual component of Monument to Pro-Life.

To one side of the sculpture stands a display of pro-life literature. You cannot touch it; you cannot pick it up; you cannot read it even if you wanted to because it is in a case, under glass. This is not, I think, because there is not enough pro-life literature to go around, and it hints at the possibility that the artist is being deliberately disingenuous, that he is commenting both on the pro-life movement and on its monumental aspirations. The sculpture is out there in the air, naked and exposed, while the precious literature is encased and protected. Shouldn’t it be the other way around? It’s almost as if the artist is saying, “This is the pro-life movement’s relationship to women: It is self-interested and self-preserving; and in its glassed-in, easy righteousness it turns them into nothing more than vessels, emptying machines. It prefers monuments to mothers, literature to life.”

Now lest you think that I am calling Daniel Edwards the next Michelangelo, let me assure you that I most definitely am not. As conceptually compelling as I found Monument to Pro-Life to be, I also found it aesthetically repugnant. Opinions are like assholes—everybody has one—but this sculpture is hideous to look at. It’s made of fiberglass, for god’s sake, which gives it a reddish, resiny cast, as though the subject had been poached, and a texture which made me feel, just by looking at it, that I had splinters under my fingernails. I know we all live in a post-Danto age of art criticism, that ideas are everything now, and that the only criterion for judging a work of art is its success in embodying its own ideas; but as I left the gallery I couldn’t help thinking of Plato and Diogenes. When Plato defined man as a “featherless biped,” the Cynic philosopher is said to have flung a plucked chicken into the classroom, crying “Here is Plato’s man.” Well, here is Danto’s art. With a price tag of $70,000, which it will surely fetch, he can have it.

Monday Musing: The Palm Pilot and the Human Brain, Part II

Part II: How Brains Might Work

Two weeks ago I wrote the first part of this column in which I made an attempt to explain how it is that we are able to design very complex machines like computers: we do it by employing a hierarchy of concepts, each layer of which builds upon the layer below it, ultimately allowing computers to perform seemingly miraculous tasks like beating Garry Kasparov at chess at the highest levels of the hierarchy, while all the way down at the lowest layers, the only thing going on is that some electrons are moving about on a tiny wafer of silicon according to simple physical rules. [Photo shows Kasparov in Game 2 of the match.] I also tried to explain what gives computers their programmable flexibility. (Did you know, for example, that Deep Blue, the computer which drove Kasparov to hair-pulling frustration and humiliation in chess, now takes reservations for United Airlines?)

But while there is a difference between understanding something that we ourselves have built (we know what the conceptual layers are because we designed them, one at a time, after all) and trying to understand something like the human brain, designed not by humans but by natural selection, there is also a similarity: brains also do seemingly miraculous things, like the writing of symphonies and sonnets, at the highest levels, while near the bottom we just have a bunch of interconnected neurons digitally firing away (action potentials), again according to fairly simple physical rules. (Neuron firing is digital because a neuron either fires or it doesn’t–like a 0 or a 1–there is no such thing as half of a firing or a quarter of one.) And like computers, brains are also very flexible at the highest levels: though they were not designed by natural selection specifically to do so, they can learn to do long-division, drive cars, read the National Enquirer, write cookbooks, and even build and operate computers, in addition to a million other things. They can even turn “you” off, as if you were a battery-operated toy, if they feel they are not getting enough oxygen, thereby making you collapse to the ground so that gravity can help feed them more of the oxygen-rich blood that they crave (you know this well, if you have ever fainted).

To understand how brains do all this, this time we must attempt to impose a conceptual framework on them from the outside, as it were; a kind of reverse-engineering. This is what neuroscience attempts to do, and as I promised last time, today I would like to present a recent and interesting attempt to construct just such a scaffolding of theory on which we might stand while trying to peer inside the brain. This particular model of how the brain works is due to Jeff Hawkins, the inventor of the Palm Pilot and the Treo Smartphone, and a well-respected neuroscientist. It was presented by him in detail in his excellent book On Intelligence, which I highly recommend. What follows here is really just a very simplified account of the book.

Let’s jump right into it then: Hawkins calls his model the “Memory-Prediction” framework, and its core idea is summed up by him in the following four sentences:

The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. (On Intelligence, p. 6)

Hawkins focuses mainly on the neocortex, which is the part of the brain responsible for most higher level functions such as vision, hearing, mathematics, music, and language. The neocortex is so densely packed with neurons that no one is exactly sure how many there are, though some neuroscientists estimate the number at about thirty billion. What is astonishing is to realize that:

Those thirty billion cells are you. They contain almost all your memories, knowledge, skills, and accumulated life experience… The warmth of a summer day and the dreams we have for a better world are somehow the creation of these cells… There is nothing else, no magic, no special sauce, only neurons and a dance of information… We need to understand what these thirty billion cells do and how they do it. Fortunately, the cortex is not just an amorphous blob of cells. We can take a deeper look at its structure for ideas about how it gives rise to the human mind. (Ibid., p. 43)

The neocortex is a thin sheet consisting of six layers which envelops the rest of the brain and is folded up in a crumpled way. This is what gives the brain its walnutty appearance. (If completely unfolded, it would be quite thin–only a couple of millimeters–and would cover an area about the size of a large dinner napkin.) Now, while the neocortex looks pretty much the same everywhere with its six layers, different regions of it are functionally specialized. For example, Broca’s area handles the rules of linguistic grammar. Other areas of the neocortex have also been mapped out functionally in quite some detail by techniques such as looking at brains with localized damage (due to stroke or injury) and seeing what functions are lost in the patient. (Antonio Damasio presents many fascinating cases in his groundbreaking book Descartes’ Error.) But while everyone else was looking for differences in the various functional areas of the cortex, a very interesting observation was made by a neurophysiologist named Vernon Mountcastle (I was fortunate enough to attend a brilliant series of lectures by him on basic physiology while I was an undergraduate!) at Johns Hopkins University in 1978: he noticed that all the different regions of the neocortex look pretty much exactly the same, and have the same structure, whether they process language or handle touch. And he proposed that since they have the same structure, maybe they are all performing the same basic operation, and that maybe the neocortex uses the same computational tool to do everything. Mountcastle suggested that the only difference among the various areas is how they are connected to each other and to other parts of the nervous system. Now Hawkins says:

Scientists and engineers have for the most part been ignorant of, or have chosen to ignore, Mountcastle’s proposal. When they try to understand vision or make a computer that can “see,” they devise vocabulary and techniques specific to vision. They talk about edges, textures, and three-dimensional representations. If they want to understand spoken language, they build algorithms based on rules of grammar, syntax, and semantics. But if Mountcastle is correct, these approaches are not how the brain solves these problems, and are therefore likely to fail. If Mountcastle is correct, the algorithm of the cortex must be expressed independently of any particular function or sense. The brain uses the same process to see as to hear. The cortex does something universal that can be applied to any type of sensory or motor system. (Ibid., p. 51)

The rest of Hawkins’s project now becomes laying out in detail what this universal algorithm of the cortex is, how it functions in different functional areas, and how the brain implements it. First he tells us that the inputs to various areas of the brain are essentially similar and consist basically of spatial and temporal patterns. For example, the visual cortex receives a bundle of inputs from the optic nerve, which is connected to the retina in your eye. These inputs in raw form represent the image that is being projected onto the retina in terms of a spatial pattern of light frequencies and amplitudes, and how this image (pattern) is changing over time. Similarly the auditory nerves carry input from the ear in terms of a spatial pattern of sound frequencies and amplitudes which also varies with time, to the auditory areas of the cortex. The main point is that in the brain, input from different senses is treated the same way: as a spatio-temporal pattern. And it is upon these patterns that the cortical algorithm goes to work. This is why spoken and written language are perceived in a remarkably similar way, even though they are presented to us completely differently in simple sensory terms. (You almost hear the words “simple sensory terms” as you read them, don’t you?)

Now we get to one of Hawkins’s key ideas: unlike a computer (whether sequential or parallel), the brain does not compute solutions to problems; it retrieves them from memory: “The entire cortex is a memory system. It isn’t a computer at all.” (Ibid., p. 68) To illustrate what he means by this, Hawkins provides an example: imagine, he says, catching a ball thrown at you. If a computer were to try to do this, it would attempt to estimate its initial trajectory and speed and then use some equations to calculate its path, how long it will take to reach you, etc. This is not anything like what your brain does. So how does your brain do it?

When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons. (Ibid., p. 69)
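To make the contrast with computation concrete, here is a toy sketch in Python (my own illustration with made-up numbers, not anything from the book). A few practiced catches are stored as pairs of what the throw looked like and what the muscles did; a new throw recalls the nearest stored memory and nudges it toward the particulars of the moment, rather than solving any equations of motion:

    import numpy as np

    # A few practiced catches, stored as memories: what the throw looked
    # like at the start, and the motor command that worked for it.
    seen = np.array([[0.9, 1.2], [1.4, 0.7], [0.4, 1.9]])    # initial sightings
    moves = np.array([[0.5, 0.1], [0.8, -0.2], [0.1, 0.6]])  # commands that worked

    def catch(ball):
        # Sight of the ball automatically recalls the closest practiced memory...
        i = np.argmin(np.linalg.norm(seen - ball, axis=1))
        # ...and the recalled command is adjusted to the particulars of this
        # throw, rather than being calculated afresh from physics.
        return moves[i] + 0.3 * (ball - seen[i])

    print(catch(np.array([1.0, 1.1])))  # a slightly novel throw still works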

At first blush it may seem that Hawkins is getting away with some kind of sleight of hand here. What does he mean that the memories are just retrieved and adjusted for the particulars of the situation? Wouldn’t that mean that you would need millions of memories for every single scenario like catching a ball, because every instance of ball-catching can differ from the others in a million little ways? Well, no. Hawkins now introduces a way of getting around this problem, and it is called invariant representation, which we will get to soon. Cortical memories are different from computer memory in four ways, Hawkins tells us:

  1. The neocortex stores sequences of patterns.
  2. The neocortex recalls patterns auto-associatively.
  3. The neocortex stores patterns in an invariant form.
  4. The neocortex stores patterns in a hierarchy.

Let’s go through these one at a time. The first feature is why, when you are telling a story about something that happened to you, you must go in sequence (and why people often include boring details in their stories!) or you may not remember what happened. It is like only being able to remember a song if you sing it to yourself in sequence, one note at a time. (You couldn’t recite the notes backward–or even the alphabet backward very fast–while a computer could.) Even very low-level sensory memories work this way: the feel of velvet as you run your hand over it is just the pattern of very quick sequential nerve firings that occurs as your fingers run over the fibers. This pattern is a different sequence if you are running your hand over gravel, say, and that is how you recognize it. Computers can be made to store memories sequentially, such as a song, but they do not do this automatically, the way the cortex does.
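The forward-only character of such memory is easy to mimic in a few lines of Python (a toy of my own, not anything in the book). Store the alphabet purely as links from each letter to its successor, and forward recall is one cheap lookup per step, while backward recall turns into a search at every step, which is roughly what reciting the alphabet backward feels like:

    import string

    # Store the alphabet as forward links only: each letter cues its successor.
    succ = {a: b for a, b in zip(string.ascii_lowercase, string.ascii_lowercase[1:])}

    # Forward recall: follow the chain, one cheap lookup per step.
    forward, cur = ["a"], "a"
    while cur in succ:
        cur = succ[cur]
        forward.append(cur)
    print("".join(forward))   # abcdefghijklmnopqrstuvwxyz

    # Backward recall: there are no predecessor links, so every step is a
    # search through the whole chain.
    backward, cur = ["z"], "z"
    while cur != "a":
        cur = next(k for k, v in succ.items() if v == cur)
        backward.append(cur)
    print("".join(backward))  # zyxwvutsrqponmlkjihgfedcba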

Auto-associativity is the second feature of cortical memory and what it means is that patterns are associated with themselves. This makes it possible to retrieve a whole pattern when only a part of it is presented to the system.

…imagine you see a person waiting for a bus but can only see part of her because she is standing partially behind a bush. Your brain is not confused. Your eyes only see parts of a body, but your brain fills in the rest, creating a perception of a whole person that’s so strong you may not even realize you’re only inferring. (Ibid., p. 74)

Temporal patterns are also similarly retrieved and completed. In a noisy environment we often don’t hear every single word that someone is saying to us, but our brain fills in with what it expects to have heard. (If Robin calls me on Sunday night on his terrible cell phone and says, “Did you …crackle-pop… your Monday column yet?” My brain will automatically fill in the word “write.”) Sequences of memory patterns recalled auto-associatively essentially constitute thought.
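Auto-associative completion is easy to demonstrate with a classic Hopfield-style network, of which the following NumPy sketch is a minimal example (it illustrates the general idea, not Hawkins’s actual cortical circuitry). Two patterns are stored by associating each with itself via a Hebbian outer product; a cue with its first five units hidden, the part “behind the bush,” settles back into the complete stored pattern:

    import numpy as np

    # Two orthogonal 16-unit patterns (+1/-1), each stored by being
    # associated with itself through a Hebbian outer product.
    patterns = np.array([
        [1] * 8 + [-1] * 8,  # "person at the bus stop"
        [1, -1] * 8,         # some other memory
    ])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Present a partial cue: the first five units are hidden behind the bush.
    cue = patterns[0].astype(float)
    cue[:5] = 0

    # Recall: the visible units drive the hidden ones, and the network
    # settles into the nearest complete stored pattern.
    state = cue
    for _ in range(5):
        state = np.sign(W @ state)

    print(np.array_equal(state, patterns[0]))  # True: the whole person appears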

Now we get to invariant representations, the third feature of cortical memory. Notice that while computer memories are designed for 100% fidelity (every bit of every byte is reproduced flawlessly), our brains do not store information this way. Instead, they abstract out important relationships in the world and store those, leaving out most of the details. Imagine talking to a friend who is sitting right in front of you. As you talk to her, the exact pattern of pixels coming over the optic nerve from your retina to your visual cortex is never the same from one moment to another. In fact, if you sat there for hours, no pattern would ever repeat because both of you are moving slightly, the light is changing, etc. Nevertheless you have a continuous sense of your friend’s face being in front of you. How does that happen? Because your brain’s internal pattern of representation of your friend’s face does not change, even though the raw sensory information coming in over the optic nerve is always changing. That’s invariant representation. And it is implemented in the brain using a hierarchy of processing. Just to give a taste of what that means, every time your friend’s face or your eyes move, a new pattern comes over the optic nerve. In the visual input area of your cortex, called V1, the pattern of activity is also different each time anything in your visual field moves, but several levels up in the hierarchy of the visual system, in your facial recognition area, there are neurons which remain active as long as your friend’s face is in your visual field, at any angle, in any light, and no matter what makeup she’s wearing. And this type of invariant representation is not limited to the visual system but is a property of every sensory and cortical system. So how is this invariant representation accomplished?
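Part III will take up the actual mechanism, but the flavor of the idea can be conveyed with one more toy (again my own illustration, not the cortical algorithm). A low-level unit that demands an exact match at a fixed position loses the face the moment it shifts, while a higher-level unit that pools over all shifted copies of the same template keeps responding:

    import numpy as np

    face = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0])  # a tiny binary "face"

    def low_level(image):
        # V1-like unit: fires only on an exact match at a fixed position,
        # so its activity changes whenever the image moves.
        return np.array_equal(image, face)

    def face_cell(image):
        # Higher-level unit: pools over every shifted copy of the template,
        # so its response does not depend on where the face appears.
        return any(np.array_equal(np.roll(image, k), face)
                   for k in range(len(image)))

    shifted = np.roll(face, 4)   # your friend shifts in your visual field
    print(low_level(shifted))    # False: the raw input pattern has changed
    print(face_cell(shifted))    # True: the face is still recognized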

———————–

I’m sorry, but unfortunately, I have once again run out of time and space and must continue this column next time. Despite my attempts at presenting Hawkins’s theory as concisely as possible, it is not possible to condense it further without losing essential parts, and there is still quite a bit left. So I must (reluctantly) write a Part III to this column, in which I will present Hawkins’s account of how invariant representations are implemented, how memories are used to make predictions (the essence of intelligence), and how all this is implemented in hierarchical layers in the actual cortex of the brain. Look for it on May 8th. Happy Monday, and have a good week!

NOTE: Part III is here. My other Monday Musing columns can be found here.

Monday, April 10, 2006

Old Bev: POP! Culture

The cover of this week’s STAR Magazine features photos of Katie Holmes, Gwyneth Paltrow, Brooke Shields, Angelina Jolie, and Gwen Stefani (all heavily pregnant) and the yellow headline “Ready to POP!”  Each pregnancy, according to Star, is in some way catastrophic – Katie’s dreading her silent Scientology birth, Gwyneth drank a beer the other night, Brooke fears suffering a second bout of depression, Angelina’s daring to dump her partner, and Gwen’s thinking of leaving show business.  They seem infected, confused, in danger of combustion.  “I can’t believe they’re all pregnant all at the same time!” exclaimed the cashier at Walgreen’s as she rang up my purchases, as if these women were actually in the same family, or linked by something other than fame and success.  The cover of Star suggests that these ladies have literally swollen too big for their own good.

Britney Spears’ pregnancy last summer kicked off this particular craze of the celebrity glossy.  Each move she made, potato chip she ate, insult tossed toward Kevin, all of it was front-page pregnancy news for Star and its competitors.  “TWINS?!” screamed one cover, referencing her ballooning weight. It was coverage like this that inspired Daniel Edwards’ latest sculpture, “Monument to Pro-Life: The Birth of Sean Preston,” though from his perspective the media’s take on the pregnancy was uniformly positive.  When asked why it was Britney Spears whom he chose to depict giving birth naked and on all fours on a bear skin rug, he replied, “It had to be Britney.  She was the one.  I’d never seen such a celebrated pregnancy…and I wanted to explore why the public was so interested.”

Predictably, the sculpture has attracted a fair amount of coverage in the last few weeks, most of it in the “news of the weird” category. The owners of the Capla Kesting Fine Art Gallery have made much of the title of the piece, taking the opportunity to include in the exhibit a collection of Pro-Life materials, announcing plans for tight security at the opening,  and publicizing their goal of finding an appropriate permanent display for the work by Mother’s Day.  Edwards states that he’s undecided on the abortion issue, Britney has yet to comment on the work, and the Pro-Lifers aren’t exactly welcoming the statue into their canon.  For all of the media flap, I was expecting more of a crowd at Friday’s opening (we numbered only about 30 when the exhibit opened), and a much less compelling sculpture.

My initial reaction to photos of “Monument to Pro-Life” was that Britney’s in a position that most would sooner associate with getting pregnant than with giving birth.  Edwards, I thought, was invoking the pro-life movement as a way to protest the divorce of the sex act from reproduction. But in person, in three dimensions and life-size, the sculpture demands that the trite interpretations be dropped.  It’s a curious and exploratory work, and I urge you to go and see it if you can, rather than depend on the photos.  Unlike the pregnant women of STAR, the woman in “Monument to Pro-Life” isn’t in crisis.  She easily dominated the Capla Kesting gallery (really a garage), and made silly the hokey blue “It’s a Boy!” balloons hovering around the ceiling.  To photograph the case of pro-life materials in the corner I had to ask about five people to move – they were standing with their backs to it, staring at the sculpture.  The case’s connection to the work was flimsy, sloppy, more meaningful in print than in person.

Yes, Edwards called the piece “Monument to Pro-Life: The Birth of Sean Preston,” but I think the title aims less to signal a political allegiance than to explore the rhetoric of the abortion debate.  Birth isn’t among the usual images associated with the pro-life movement. Teeny babies, smiling children, bloody fetuses are usual, but I’ve never seen a birth depicted on the side of a van.  Pro-life propaganda is meant to emphasize the life in jeopardy – put a smiling toddler on a pro-life poster, and you’re saying to the viewer, you would kill this girl?  The bloody fetus screams, you killed this girl.  The images are meant to locate personal responsibility in the viewer.  But a birth image involves a mother, allows a displacement of that responsibility.  A birth image invokes contexts outside of the viewer’s frame of reference (but maybe she was raped! Maybe she already has four kids and no job!  Maybe she’s thirteen!), and forces the viewer to pass judgment on the mother in question.  Not all pro-lifers, not by any means, wish to punish or humiliate those women who abort their pregnancies. The preemies and toddlers and fetuses serve to inspire a protection impulse, and the more isolated those figures are from their mothers (who demand protection), the simpler the argument. Standard pro-life propaganda avoids birth images in order to isolate that protective impulse, and narrow the guilt.

Of course, the mother in this birth image has a prescribed context.  Britney Spears, according to Edwards, has made the unusual and brave choice to start a family at the height of her career, at the young age of 24.  For him, the recontextualization of “Pro-Life” seems to be not just about childbirth, but about childbirth’s relationship to ‘anti-family’ concepts of female career.  Edwards celebrates the birth of Sean Preston because of when Sean Preston was born, and to whom.  Unlike STAR, which depicts the pregnancies of successful women as dangerous grabs for more, Edwards depicts Britney’s pregnancy as a venerable retreat back to womanhood.  The image/argument would be more convincing, however, if the sculpture looked more like Britney, and if Britney were a better representative of the 24-year-old career woman. It doesn’t (the photos don’t conceal an in-person resemblance), and she isn’t (already the woman has released a greatest hits album).  If Edwards wished his audience to contemplate her decision, he would have been better served had Capla Kesting displayed a case of Britney iconography alongside the statue.  But the sculpture is perfectly compelling even outside of the Britney context.

Standard pro-life rhetoric is preoccupied with transition, the magic moment of conception when ‘life begins.’  Edwards too focuses on transition, but at the other end of the pregnancy.  Sean Preston, qualified as male only by the title, is frozen just as he crowns.  He has yet to open his eyes to the world, but the viewer, unlike his mother, can see him. Many midwives and caregivers discourage childbirth in this position (hands and knees) because, though it is easy on the mother’s back and protects against perineal tearing, it makes it difficult to anticipate the baby’s arrival.  It’s a method of delivery that a mother should not attempt alone. The viewer of “Monument to Pro-Life” is necessarily implicated in the birth, assigned responsibility for the safe delivery of Sean Preston.

You’ve got to be up close to see this, though.  As I left the gallery and walked up North 5th to Roebling, a 60-something woman in a chic black coat stopped me.  “Who’s the artist?” she asked.  “Who is it that’s getting all the attention?”  I told her it was Daniel Edwards, but that the news trucks were there because it was a sculpture of Britney Spears giving birth on all fours.  Her eyebrows rose.  “You know, I thought it was very pornographic,” she offered, and I glanced back at Capla Kesting.  And from across the street, it did look like a sex show.

It’s a tricky game Daniel Edwards is playing.  On the one hand, “Monument to Pro-Life” is a fairly complicated (and exploitive) work; on the other, it’s a fairly boring (and exploitive) conduit of interest cultivated by STAR and the pro-life movement.  Unfortunately for Edwards, the media machine that inspired his work doesn’t quite convey it in full – the AP photograph of the sculpture doesn’t show her raised hips, and forget about Sean Preston crowning. However, the STAR website does have a mention of the sculpture, and a poll beneath the article for readers to express their opinions.  The questions: “Is it a smart thing for pregnant-again Britney Spears, who gave birth to son Sean Preston just 6 months ago, to have another child so soon after giving birth?” and “Can Britney make a successful comeback as a singer?”

Philip Larkin: Hull-Haven

Australian poet and author Peter Nicholson writes 3QD’s Poetry and Culture column (see other columns here). There is an introduction to his work at peternicholson.com.au and at the NLA.

For Gerard Manley Hopkins there was Heaven-haven, when a nun takes the veil, and perhaps a poet-priest seeks refuge, but for Philip Larkin there is no heaven. There is Hull, and that is where Larkin, largely free of metropolitan London’s seductions, finds his poetry and his poetics. Old chum Kingsley, it seems, can do his living for him there. But Larkin has more than two strings to his bow too, which awkward last meetings around the death bed show only too plainly.

Now that the usual attempts at deconstruction have almost run their course, the time has come to look at the work left. Pulling people off their plinth is a lifetime task for some who never get around to understanding that some writers say more, and more memorably, than they can ever do. Also, they don’t seem to understand that writers are just like everyone else, only with the inexplicable gift, which the said writer understands least of all, knowing that the gift, bestowed by the Muse, can depart in high dudgeon without notice. Larkin knew this, and lamented the silences of his later years.

Silence does seem to wait through his poems. They bleakly open to morning light, discover the world’s apparent heartlessness, then close with a dying fall. Occasionally ‘long lion days’ blaze, but the usual note is meditative, and sometimes grubby. What mum and dad do to you has to be lived out in extenso. Diary entries are too terrible to be seen and must be shredded. Bonfires and shreddings have a noble tradition in the history of literature. What would we have done if we had Byron’s memoirs and we were Murray and the fireplace waited?

Strange harmonies of contrasts are the usual thing in art. So if Larkin proclaimed racist sentiments in letters yet spent a lifetime in awe of jazz greats, or ran an effective university library whilst thinking ‘Beyond all this, the wish to be alone’ (‘Wants’), that is the doubleness we are all prone to. For artists there always seems to be the finger pointing, whereby perfection is expected of the artist but never required by the critic. Larkin is seen as squalid, not modern, provincial, by some. For others there are no problems. He says what they feel, and says it plainly.

If Larkin doesn’t have a mind like Emily Dickinson’s—who does—or scorns the Europeans, these are not, in themselves, things that limit the reach of his poetic. Larkin’s modest Collected Poems stands in distinct contrast to silverfish-squashing tomes groaning with overwriting. Larkin is a little like Roethke in that way. Every poem is precise, musical, clear. How infuriating it is that people do not follow artists’ wishes and publish poems never meant to see the light of day. There is a great virtue in Larkin’s kind of selectivity. Capitalism seems to require overproduction of product, and many poets have been happy to oblige. But this surfeit does the poet no long-term favours and usually ensures a partial, or total, oblivion. Tennyson and Wordsworth are great poets who clearly have survived oblivion, but who now reads through all of ‘Idylls of the King’ or ‘The Prelude’?

Larkin’s version of pastoral has its rubbish and cancer, sometimes its beautiful, clear light, its faltering perception of bliss, usually in others. Doubts about the whole poetic project surface occasionally, and what poet doesn’t empathise with that? How easy jazz improvisation seems in comparison to getting poems out and about. No doubt the improvisation comes only after mastery, control. Then comes the apparently spontaneous letting go. But the poet doesn’t see that. He/she is left with the rough bagging of words to get the music through. Larkin’s music is sedate, in the minor key. Wonder amongst daffodils or joy amongst skylarks are pleasures that always seem just over the hill, or flowing round a bend in the Humber as one gets to the embankment. Street lights seem like talismans of death, obelisks marking out seconds, hours, days, years, eternity. Work is a toad crushing you.

A great poet? The comparison with Hopkins is instructive. Hopkins makes us feel the beauty of nature, he makes us confront God’s apparent absence in the dark, or “terrible”, sonnets. It is committed writing in the best sense. The language heaves into dense music, sometimes too dense, but you always feel engaged by his best poetry. Larkin is dubious about the whole life show. The world is seen from behind glass, whiskey to hand, or in empty churches, or from windswept plains, sediment, frost or fog lapping at footfall. Hopkins loves his poplar trees; his kingfishers catch fire; weeds shoot long and lovely and lush. Grief and joy bring the great moments of insight and expression, and thus the memorability.

The case of Larkin does raise a fundamental concern regarding art and its place in society. When the upward trudge of aesthetic idealism meets the downward avalanche of political and social reality, what is the aesthetic and political fallout? With Larkin it appears to be a stoic acceptance of status quo nihilism—waiting for the doctor, then oblivion. With Celan, one cannot get further than the Holocaust. For others, a crow is an image of violence, or tulips are weighted with lead. No longer are these images of natural beauty. No doubt, for those who have just seen a contemporary exhibition at Gagosian or been reading about the latest horrors in Darfur, Larkin could seem hopelessly out of touch, and self-pitying to boot. That is not a sensible way of looking at culture. Looking for political correctness in art always leads to disappointment.

Larkin seems to fill the expectations required by late-twentieth century English aesthetics, but I wonder. When younger, I thought Stravinsky the greatest composer of the century I was born into. Now it is Rachmaninov and Prokofiev who give me more pleasure. And I find them no less ‘great’. Robert Lowell seemed the representative poet of his generation when I was at university. Now some of the work reads to me like a bad lithium trip. Does this signify cultural sclerosis on my part? We can’t have a bar of Wagner’s anti-Semitism, but that still leaves the fact of Wagner’s greatness to be confronted. The achievement is so enormous. To use a somewhat dangerous and controversial term of the moment, it shows more than intelligent design. Appeals to the Zeitgeist, a somewhat unreliable indicator of artistic excellence, are last resorts for those who like to give their critiques an apparently incontrovertible seal of approval. In the interim, culture remains dynamic and reputations sink or swim depending on factors having very little to do with intrinsic value.

In Hull Larkin found his haven, the world held warily at bay. However, the world cannot be held at bay for long. The general public want their pound of flesh, and they will take it. Hopkins’ divided soul has passed through mercy, and mercilessness, to a Parnassian plateau. Larkin has entered upon his interregnum, where an uncertain reckoning now takes shape.

The following is the first part of a two-part poem, ‘Larkin Land’, written in 1993.

Larkin Letters

Perhaps this sifted life is right—
The best of him was poetry
Bearing acid vowels
In catalogued soliloquy,

Where art’s unspent revisions
Would liberate, restore;
Trapped in a bone enigma
Ideals could still creep through.

A fifty-dollar lettered life
Can’t give you all the facts.
When one has got a poem just right
Awkward prose seems second-best.

Judgment is mute
When words come from pain—
Beside fierce Glenlivet
These civilised spines

Stare past the face
Of a thousand-year spite;
Annexed by form,
Poems survive the killing night.

So, at end, the cost of verse
Is paid for with this strife;—
Though not asked for, given,
This England mirrored into life.

Written 1993

Monday Musing: Al Andalus and Hapsburg Austria

One probably apocryphal story of the Alhambra tells of how Emir Al Hamar of Gharnatah (Granada) decides to begin the undertaking. One night in the early 13th century, Al Hamar has a dream that the Muslims would be forced to leave Spain. He takes the dream to be prophetic and, more importantly, to be the will of God. But he decides that if the Muslims are to leave Spain, then they would leave a testament to their presence in Spain, to al Andalus. So Al Hamar begins the project (finished by his descendants) that would result in one of the world’s most beautiful palaces, the Alhambra. Muslim Spain was still in its Golden Age at this point, but also just two and a half centuries before the expulsion/reconquest. The peak of the Golden Age had probably passed, with its most commonly suggested moment coinciding with the life of the philosopher ibn Rushd, or Averroes (1126-1198 C.E.).

[Image: Granada, looking towards the Alhambra]

Muslim Spain plays an interesting role in different contemporary political imaginations. For Muslim reformers, it is an image of a progressive, forward looking and tolerant period in Islam, where thinkers such as ibn Rushd could assert the primacy of reason over revelation. For radical Islamists, it’s a symbol of Islam at the peak of its geopolitical power. For conservatives in the West it is a chapter in an off-again, on-again clash of civilizations. For Western progressives, it is an image of a noble, pre-modern multiculturalism tolerant of Christians and Jews. That is, for the contemporary imagination, it has become the political equivalent of a Rorschach.

[Image: Boabdil with Ferdinand and Isabella]

I see no reason why I should be different in my treatment of Al Andalus. (In all honesty, I react fairly badly, I cringe, when people speak of past cultures and civilizations as idyllic, free of conflict, and held together by honor, duty, and understanding. The only thing I’ve ever been nostalgic for is futurism.) Morgan’s post last Monday on Joseph Roth reminded me of Andalusian Spain, of all things.

The Hapsburg Empire is the other Rorschach for the imagination of political history. The Austro-Hungarian Empire carries far less baggage from its involvement with the present than Andalusia does, but it certainly suffered its fair share. The breakup of the Soviet Empire and the unleashing of “pent up” or “frustrated” national aspirations had many looking to the Hapsburgs as a model of a noble, pre-modern multiculturalism.

My projection onto these inkblots of history is something altogether different. In the changing borders and bibliographies of Andalusian and Austrian history, I see societies that reach a cultural and intellectual peak as (or is it because?) they are overcome with panic about the end of their world. A “merry” or “gay apocalypse” is how Hermann Broch, the author of the not so merry but apocalyptic Death of Virgil, described the period. This sentiment echoes not just in literature but even in a book as systematic as Karl Polanyi’s The Great Transformation. Somehow it’s clear that Karl Kraus’ Grumbler, the pessimistic commentator who watches the world go mad and then be annihilated by the cosmos as punishment for the world war in The Last Days of Mankind, was lying in wait long before the catastrophe, that is, during the Golden Age itself.

The early 13th century was hardly a trough for the Moors in Spain, just as the period before World War I was not one of cultural malaise for the Austrians, or the rest of Europe for that matter. Quite the contrary. If there is an image that these societies evoke, it is feverish activity, even if that is not the image that comes across in, say, Robert Musil’s endless description of the society, The Man Without Qualities. Broch would write himself to death in some bizarre twist on Scheherazade.

[Image: Karl Kraus, 1914]

The inscriptions on the Alhambra, such as “Wa la ghalib illa Allah” (“There is no conqueror but God”), are written in soft stone. They have to be replaced, and thereby they require the engagement of the civilization that is to succeed the Moors. Quite an act of faith. While it may be the case that some, such as Kraus (or Stefan Zweig), expected the end of all civilization, Austrian thought and writing of the era show a similar faith despite the Anschluss. Admittedly, you have to really look for it. And the era certainly did export some of the better minds of the time—including Broch, Polanyi, Karl Popper, and Friedrich von Hayek, albeit for reasons of horror that are to its shame.

It is harder to know what to make of these civilizations, for which an awareness or expectation of their end spurs many of their greatest achievements. There aren’t too many of them. They have in common the fact that they are remembered for relative tolerance, but that could just be a prerequisite for flourishing in the first place. Their appeal is, however, clear—as close to an image as a society can have of creating, thinking and engaging, even through despair, in some way to survive the apocalypse.

Happy Monday.

Random Walks: Past Perfect

I’m a huge fan of the Japanese anime series Fullmetal Alchemist, a bizarre, multi-faceted mix of screwball comedy, heartfelt pathos, and gut-wrenching tragedy — not to mention rich metaphorical textures. It’s the story of two brothers, Edward and Alphonse Elric, who lead an idyllic existence, despite their alchemist/father’s prolonged absence because of an ongoing war. Then their mother unexpectedly dies. Devastated by their loss, with no word from their father and no idea of where he might be, the two brothers take matters into their own hands. They attempt an alchemical resurrection spell — the greatest taboo in their fictional world — to raise her from the dead. They pay an enormous price for their folly: Edward loses an arm and a leg, while Alphonse loses his entire body; his soul only remains because Edward managed to attach it to a suit of armor. The story arc of the series follows the brothers as they roam the countryside, searching for a mythical Philosopher’s Stone with the power to undo the damage and restore their physical bodies.

The series touches on so many universal human themes, but for me the most poignant is the fact that the brothers’ lives are destroyed in a single shattering event over which they have no control: the death of their beloved mother. I’ve been ruminating on this notion of world-shattering of late because this month marks the 100th anniversary of the great earthquake of 1906 that essentially leveled the city of San Francisco, which had the misfortune of being located right at the quake’s epicenter. The shocks were felt from southern Oregon down to just south of Los Angeles, and as far inland as central Nevada, but most of the structural damage and the death — perhaps as many as 3000 lives lost — occurred in the Bay Area. The carefully constructed worlds of tens of thousands of people were literally shattered in just under a minute.

Like the Brothers Elric, until that fateful morning, San Francisco basked in the glow of its successful transition from tiny frontier town to a thriving, culturally diverse metropolis. The city benefited greatly from the California Gold Rush, as miners flocked there in search of (ahem) “entertainment,” and to stock up on basic supplies before returning to their prospecting. A few lucky ones struck it rich and opted to settle there permanently. The population exploded, so much so that by the 1850s, the earlier rough-and-tumble atmosphere was limited to certain lower-income areas. Elsewhere, theaters, shops and restaurants flourished, earning San Francisco the moniker, “the Paris of the West.”

In 1906, big-name stars like the actress Sarah Bernhardt and famed tenor Enrico Caruso performed regularly in the city’s theaters. A local restaurant called Coppa’s was the preferred hangout for a new breed of young Bohemians: intellectuals, artists, and writers like Frank Norris and Jack London. A recent NPR tribute to the thriving arts scene of that time revealed a fascinating historical tidbit: one of the (apparently depressed) regulars at Coppa’s had scrawled a warning on the wall: “Something terrible is going to happen.”

On April 18th, something terrible did happen: the city was rocked by violent tremors in the wee hours of the morning. Emma Burke, the wife of a prominent attorney, recalled in a memoir (part of a fascinating online collection of documents at the Virtual Museum of the City of San Francisco), “The floor moved like short choppy waves of the sea, criss-crossed by a tide as mighty as themselves. The ceiling responded to all the angles of the floor…. How a building could stand such motion and keep its frame intact is still a mystery to me.” Not all buildings remained intact; roofs caved in, and chimneys collapsed. People ran into the streets, fearing to remain in their unstable homes, and thousands camped out in Golden Gate Park. Making the best of a bad situation, some people adorned their crude tents and shelters with handmade signs: “Excelsior Hotel,” “The Ritz,” or “The Little St. Francis.” The Mechanics’ Pavilion became a makeshift hospital, with some 200 patients lying on rows of mattresses on the floor, awaiting transport to Harbor Emergency Hospital.

Despite the devastation, the city might yet have survived, structurally, were it not for the fires that broke out. In Fullmetal Alchemist, the Elric brothers make their situation worse by attempting a taboo resurrection, ignorant of the price that would be exacted. Similarly, some quake survivors attempted to start morning fires, not realizing the danger of their ruined chimneys. Worse, the quake had destroyed the water mains, making it difficult to douse the flames. The fires raged out of control for days; the firefighters had to resort to dynamiting entire blocks in advance of the flames, hoping to create a breach over which the fires couldn’t leap. It wasn’t the most effective method, and by the time the fires were quenched, most people had lost everything, and very few structures remained standing. Many accounts of those who survived speak of the flames burning so brightly that night seemed almost like day. Portrait photographer Arnold Genthe recalled in his own memoir, “All along the skyline, as far as the eye could see, clouds of smoke and flames were bursting forth.”

We owe a great historical debt to Genthe, who provided a photographic record of the events for posterity. Within a few hours, he had snagged a small 3A Kodak Special camera from a local dealer whose shop had been seriously damaged by the quake, stuffed as many rolls of film into his pockets as he could manage, and spent the entire day photographing various scenes of the disaster, blissfully unaware that the fires would soon destroy all his material possessions.

Among Genthe’s more amusing anecdotes is his recollection of bumping into Caruso — who had performed in Carmen the night before at the Mission Opera House — outside the St. Francis Hotel, one of the few structures that had not been severely damaged by the quake. The proprietors were generously handing out free coffee, bread and butter to the assembled refugees. The great tenor had been forced to abandon his luxury suite clad only in his pajamas, with a fur coat thrown over for warmth. He was smoking agitatedly and muttering to himself, “‘Ell of a place! ‘Ell of a place! I never come back here!” (Genthe wryly observes, “And he never did.”)

Caruso’s loyal valet eventually secured a horse and cart to transport his master out of the disaster area. Others soon followed suit in a mass exodus to escape the flames; thousands streamed toward the ferries waiting to take them across the bay to safety. They fled on foot, carrying whatever salvaged belongings they could manage, or transporting them on various makeshift vehicles: baby carriages, toy wagons, boxes mounted on wheels, trunks placed on roller skates. Genthe recalled seeing two men pushing a sofa on casters, their possessions piled on top of the furniture. He claimed to never forget “the rumbling noise of the trunks drawn along the sidewalks, a sound to which the detonations of the blasting furnished a fitting contrapuntal accompaniment.”

For all the tragic plot points in Fullmetal Alchemist, as much as the Elric brothers continue to suffer, there are still moments of humor, sweetness, and evidence of the elasticity of the human spirit. The residents of San Francisco were no exception. “I never saw one person crying,” Emma Burke recalled. Indeed, the disaster seemed to bring out the best in people, with rich and poor standing on line at relief stations to receive daily rations, and people sharing the few resources they had with those around them, regardless of race or class. Anyone with a car used their vehicle to transport the wounded and dead to hospitals and morgues, respectively. Emma Burke recalled one chauffeur who “ran his auto for 48 hours without rest,” and George Blumer, a local doctor, ran himself ragged for more than a week tending to the sick and wounded all over town. There was also a distinct lack of self-pity; most people seemed resigned to their plight, accepting the hand Nature had unexpectedly dealt them. Nobody ever said the world was perfect.

That’s not just an aphorism; current scientific thought bears it out. The universe isn’t perfect, although some string theorists believe in the concept of “supersymmetry”: a very brief period of time in which our cosmos was a perfectly symmetrical ten-dimensional universe, with all four fundamental forces unified at unimaginably high energies. But that universe was also highly unstable and cracked in two, sending an immense shock wave reverberating through the fabric of space-time. There may be two separate space-times: the one we know and love, with three dimensions of space and one dimension of time, and another with six dimensions, too small to be detected even with our most cutting-edge instruments. And as our four-dimensional universe expanded and cooled, the four fundamental forces split off one by one, starting with gravity. Everything we see around us today is a mere shard of that original ten-dimensional perfection. Supersymmetry is broken.

Physicists aren’t sure why it happened, but they suspect it might be due to the incredible tension and high energy required to maintain a supersymmetric state. And on a less cosmic scale, symmetry breaking appears to be a crucial component in many basic physical processes, including simple phase transitions: for instance, the critical temperature/pressure point where water turns into ice. It seems that some kind of symmetry breaking is woven into every aspect of our existence.

Paradoxically, shattered symmetries may have made our material world possible. In the earliest days of our universe, there were constant high-energy collisions between particles and antiparticles (matter and antimatter). Because they had opposite charges, they would annihilate each other and produce a burst of radiation. There should have been equal numbers of each — except there weren’t. At some point, matter gained the upper hand. All the great, beautiful, awe-inspiring structures we see in our universe today are the remnants of those early collisions — the few surviving particles of matter.

The same is true of time. Theoretically, time should flow in both directions. But on our macroscopic level, time runs in one direction: forward. Drop a glass so that it shatters on the floor, and that glass won’t magically reassemble. What’s done cannot be undone. We can’t freeze a perfect moment, but the very impermanence of that perfection is what makes it meaningful.

For all the devastation it wreaks, shattered symmetry also gives the opportunity for rebuilding. Merely a month after the San Francisco earthquake, Sarah Bernhardt performed Phèdre, free of charge, for more than 5000 survivors at the Hearst Greek Theater at the University of California, Berkeley. Other performers followed suit (except for the traumatized Caruso), and within four years, many of the theaters had been rebuilt. In 1910, opera star Luisa Tetrazzini gave a free concert downtown to celebrate the city’s revival. The disaster also laid the foundation for modern seismology, specifically the elastic rebound theory developed by H.F. Reid, a professor at Johns Hopkins University. He attributed earthquakes to the slow accumulation and sudden release of strain along fault lines; before then, scientists thought that fault lines were caused by quakes.

One of the Major Arcana cards in the traditional tarot deck is the Tower, depicting sudden, violent devastation that causes the once-impressive edifice to crumble, its symmetry utterly destroyed as it is reduced to rubble. It wouldn’t be described as an especially fortuitous card. But out of the Tower’s rubble comes an opportunity to rebuild everything from scratch, just like the violent environment of our baby universe eventually produced breathtaking celestial beauty. Change is built into the very mechanisms of the cosmos. Like the early supersymmetric universe, perfection is a static and unnatural state that cannot — and probably should not — be maintained. Observes Edward’s mentor, Roy Mustang (a.k.a. the Flame Alchemist), “There is no such thing as perfection. The world itself is imperfect. That’s what makes it so beautiful.”

Below the Fold: Collapsing General Motors and the Dying American Dream, or Washington Fizzles while Detroit Burns

The Leviathan of American capitalism is dying. And what is bad for General Motors is bad for America. But few, aside from Wall Street arbitrageurs casting lots over the firm’s remains, seem to care.

General Motors from almost every vantage point was the instrument of the post-World War II American Dream. Peter Drucker’s rigorous analysis of Alfred P. Sloan’s GM empire fueled the development of modern business management theory. Walter Reuther and the United Auto Workers played the part of the exemplary progressive union, driving General Motors into becoming the national sponsor of a business-based welfare capitalism for workers. Guaranteed annual incomes, annual productivity raises, cost of living allowances, health insurance, and corporate-guaranteed pensions, in addition to good wages, were the fruits of forty years of conflict and cooperation between union and the great Goliath. Each side, it can be said in retrospect, exceeded expectations in moving forward the frontiers of collective bargaining to include an American dream for all. Reuther even dared try to negotiate car prices to make cheap transport available to American workers. Charles Wilson, the GM head famous for the “what’s good for General Motors” phrase, took progressive business unionism to its heights, sponsoring the first cost of living wage increase clause in the belief that workers needed protection against the wage erosions of inflation.

Good wages, welfare state, and a Chevy under the carport were all made possible for millions of American workers by the unlikely alliance of General Motors and the United Auto Workers. In 1948, only half of all American households owned a car; by 1968, 80% did. Several million other American workers got roughly the same deal pioneered in Detroit.

And there were piles of profits. According to the labor historian John Barnard, Detroit automakers between 1947 and 1967 were getting a 17% annual return on their capital, twice as great as that of any other manufacturing sector. Between 1947 and 1969, automakers earned $35 billion in profits, an astonishing sum in yesterday’s dollars. From the end of the war to the end of American industry’s “golden age” in 1972, the Big Three made over 200 million cars and trucks.

And then the wheels began to come off. Oil crises, recessions, inflation, and the corporate inability to copy Japanese innovations in total quality control started the downward spiral in which General Motors, and to a lesser extent, Ford, find themselves caught up today. Toyota will surpass GM as the world’s largest car producer this year, while Toyota and Honda combined now out-produce GM in America. General Motors now loses $2300 per vehicle; Toyota makes $1500 per vehicle. Each GM vehicle carries $1500 in health care costs, $1300 more per vehicle than a US-made Toyota.

GM lost over $10 billion last year, and has offered to buy out 30,000 of its 113,000 blue-collar workers in the coming year. Thousands of white-collar workers are being severed without any generous terms attached. The firm is selling off a majority interest in its lucrative finance arm, General Motors Acceptance Corporation, as well as much of its holdings in several Japanese vehicle manufacturers.

There are two basic causes of the decline. First, GM runs less efficient production lines, taking 34 hours to make a vehicle to Toyota’s 28. Instead of closing the gap, GM is falling further behind, as Toyota is making faster efficiency gains than GM. GM operates at 85% capacity, while Toyota runs at 107% capacity. The second cause, coupled with this management failing, is that while Toyota’s US hourly wages are only 13% lower than GM’s, Toyota’s labor force is smaller, younger, and healthier. Toyota, having only begun producing vehicles in the United States in 1986, also has but a handful of retirees – 1600 to be precise. In contrast, GM has 460,000 retirees, whose needs, along with those of their families, raise the total hourly labor cost of a GM worker to $73, 52% more than an hourly worker costs Toyota in the US.

The road to car hell for GM is no doubt paved with bad decisions: buying SAAB, which continues to go its own way (down); investing in FIAT, and then having to bribe FIAT to avoid having to buy the all-but-bankrupt firm; pushing gas-guzzling SUVs right into the face of a predictable oil price rise; and missing the hybrid mini-boom. These are just the highlights. It is also hard to understand how management’s plans to shrink its American operations will enable it to raise more of the capital needed for investment and to support the pension and retiree health care costs that figure importantly in its unprofitability. One wonders whether the newly announced downsizing is the first step in a business plan that includes eventual bankruptcy, whose proceedings might offer the company the opportunity to shed retirees and their costs, and perhaps much of its employee liabilities altogether.

GM, or for that matter Ford or the UAW, cannot be held responsible for the national indifference to their fate. In 1979, the federal government bailed out Chrysler, floating bonds that allowed the firm to invest in new products, plants, and technology. No hint of a repeat thus far. Nothing more at this point than a letter from two members of Congress to Delphi, parts manufacturer, former GM subsidiary, and key contributor to the GM fiscal mess, urging the firm to engage in good faith bargaining with its unions. Another two members of Congress have filed a bill to prevent a firm like Delphi from dumping labor agreements in bankruptcy court while providing bonuses for bosses and shifting corporate money into offshore accounts.

Why no more than a muffle from Congress? Why silence from the Executive? In part, because saving General Motors and securing its workers would run against the prevailing economic orthodoxy of our time. If General Motors cannot be competitive, whisper the market-mentalists, then to the others should go the spoils of the American auto economy. If Toyota workers in Tennessee are more productive than General Motors workers in Detroit, then, according to dogma, our economy will function more efficiently with more Toyota workers and fewer General Motors workers. We, that elusive we, will be better off, market enthusiasts would tell us, however painful handling these externalities, those expensive retirees, their medical costs, and the medical costs of current workers turns out to be.

Why is bankruptcy the only tool in the kit today, and particularly a bankruptcy process increasingly adept at dispossessing workers and retirees of anything more than lower wages, benefits, and pensions? One reason among the many that is relevant here is that our ruling elite, blindly committed to a concept of free trade that is injurious to the workers in rich and poor countries alike, has kicked away the only real escape ladder for a massive economic and social problem of the sort faced by GM. Under the rules of the World Trade Organization (WTO), a bail-out of the firm would be considered an illegal subsidy violating the terms of the agreement. If the American political elite is going to fight to advance Boeing’s interests against the European Airbus by arguing that the Europeans are subsidizing Airbus, it would be indelicate, indeed embarrassing and compromising, to be subsidizing American car firms in their battles with Japanese, European, and Korean competitors at home.

Ah, and then another reason is adduced for Washington’s silence in the face of Detroit’s agony. Our elite has moved on: cars are so last century. The capacity of American firms to produce them is seen as of marginal significance in the desire to achieve economic mastery of the world. Information, banking, finance, drugs, and biotechnology are tomorrow’s American advantage, and the WTO rules were fixed so that these industries could expand relatively easily world-wide. Aside from succumbing to quadrennial blackmail by farm bill and being held hostage by agro-industrial interests, the only manufacturing industry the US elite aids is the military/defense sector, which the government now supports to the tune of half a trillion dollars a year. In addition, the US government provides the military industry with a staff of uniformed sales representatives from the Pentagon and an overseas finance bank that supports its sales. As C. Wright Mills observed fifty years ago, the unholy mixture of the military, politicians, and corporations producing the weapons of war is the basis for the modern American power elite’s political regimes.

Perhaps the Big Three should have stayed in tanks and planes after World War II. Think of the profit margins and political protection they would be enjoying now. Think how an Abrams tank production line would have clarified the elite mind on the matter of saving General Motors.

Also exposed by Washington’s silence is that the elite wants to avoid picking up the tab for the corporate welfare state that General Motors, the United Auto Workers, Ford, and Chrysler have built. General Motors has a single-payer health care system: why not simply federalize it? And those of the others? Why not assume the pension systems of the Big Three, ensuring that workers would be paid dollar for dollar what they expected, while using the government’s bonding authority to stretch out the companies’ liabilities?

Of course, our gang’s problem is how the government could help GM, the Auto Workers, Ford, and Chrysler without extending protections to the rest of us. They could be caught in a tricky game, because the equity issues could trigger an avalanche of resentment on the part of the rest of us just as easily as such help could salve the wounds of a sick corporation and its workers, past and present.

Readers, please take notice. First, this is no plea for economic nationalism. If Toyota USA faced the same problems, the same remedies would apply. It is about workers and maintaining a decent way of life. Second, this is no plea for trade protection. No barriers to trade are recommended. However, a state that ignores the nation’s economy, fails to regulate firms of whatever national origin in the interests of working people, and refuses to pick up the tab for the basic needs of its citizens contemplates both misery and revolt.

Monday, April 3, 2006

Monday Musing: The Palm Pilot and the Human Brain

Today I would like to explain something scientists know well: how computers work, and then use some of the conceptual insights from that discussion to present an interesting recent model of something relatively unknown: how human brains might work. (This model is due to Jeff Hawkins, the inventor of the Palm Pilot–a type of computer, hence the title of this essay.) This may well be rather too-ambitious a task, but oh, well, let’s see how it goes…

Part I: How Computers Work

Far too few people understand how computers operate. Many professional computer programmers, even, would be hard-pressed to explain the workings of the actual hardware, and may well never have heard of an NPN junction, while the average computer user certainly rarely bothers to wonder what goes on inside the CPU (Central Processing Unit, like Intel’s Pentium chip, for example) of her machine when she highlights a paragraph in MS Word and clicks on the “justify” button on the tool bar, and the right margin of the text is instantly and seemingly magically aligned. This lack of curiosity about an amazing technological achievement is inexplicable to me, and it is a shame because computers are extremely beautiful in their complex multi-layered structure. How is it that a bunch of electrons moving around on a tiny silicon wafer deep inside your machine manages to right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather-system, and a million other things?

What’s equally weird to me is that I haven’t ever seen a short, comprehensive, and comprehensible explanation of how computers work, so I’m going to give you one. This isn’t going to be easy for me or for you, because computers are not trivial things, but I am hoping to provide a fairly detailed description, not just a bunch of confusing analogies. In other words, this is going to take some strenuous mental effort, and I encourage you to click on the links that I will try to provide, for further details and discussion of some of the topics I bring up. (The beginning part may be tedious for some of you who already know something about computers, but please bear with me.) Last preliminary comment: I will try to make this as simple as possible, but for those of you who don’t know extremely basic things like what electrons are, I really don’t know what to tell you, except that you should. (The humanities equivalent of this scientific ignorance might be someone who doesn’t know what, say, a sonnet is.) Oh, go ahead, click on “electrons.” I’ll wait.

——————–

Computers are organized hierarchically with layers of conceptual complexity built one on top of the other. This is similar to how our brains work. What I mean is the following: suppose my wife Margit asks me to go buy some bread. It is a simple enough instruction, and she can be fairly certain that a few minutes later I will return with the bread. Here’s what happens in my brain when I hear her request: I break it down into a series of smaller steps something like

Get bread: START

  1. Get money and apartment keys.
  2. Go to supermarket.
  3. Find bread.
  4. Pay for bread.
  5. Return with bread.
  6. END.

Each of these steps is then broken down into smaller steps. For example, “Go to supermarket” may be broken down as follows:

Go to supermarket: START

  1. Exit apartment.
  2. Walk downstairs.
  3. Turn left outside the building and walk until Broadway is reached.
  4. Make right on Broadway and walk one and a half blocks to supermarket.
  5. Make right into supermarket entrance.
  6. END.

Similarly, “Exit apartment” is broken down into:

Exit apartment: START

  1. Get up off couch.
  2. Walk forward three steps.
  3. Turn right and go down hallway until the front door.
  4. If door chain is on, undo it.
  5. Undo deadbolt lock on door.
  6. Open door.
  7. Step outside.
  8. END.

Well, you get the idea. Of course, “Get up off couch” translates into things like “Bend forward” and “Push down with legs to straighten body,” etc. “Bend forward” itself translates into a whole sequence of coordinated muscular contractions. Each muscle contraction is actually a series of biochemical events that take place in the nerve and muscle fibres, and you can continue breaking each step down in this manner to the molecular or atomic level. Notice that most of the action occurs below the threshold of consciousness, with only the top couple of levels normally available to our conscious minds. Also, I have simplified the example in significant ways, most importantly by neglecting the role of memory retrieval and storage. (There are many retrievals involved here, such as remembering where the store is, where my apartment door is, and even how to walk!) Each subset of instructions in this example is what has come to be known as a subroutine. The beauty of this scheme is that once you have worked out the sequence of smaller steps needed to accomplish a repetitive task which is one level higher in the hierarchy, you can just store that sequence in memory, and you don’t ever need to work it out again. In other words, you can combine subroutines from a given layer into a subroutine which accomplishes some more general task in a higher layer. For example, one could combine the “Get bread” subroutine with the “Get newspaper” and “Get eggs” and “Get coffee” and “Drop off dry-cleaning” subroutines into a “Sunday morning chores” subroutine, which I might then do with little thought every Sunday morning.
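If you are curious what this kind of layered structure looks like in an actual programming language, here is a tiny sketch in C (every function name below is my own invention, purely for illustration): each higher-level task is nothing more than a sequence of calls to the subroutines one level beneath it.

    /* A minimal, hypothetical sketch of hierarchical subroutines in C.
       Each level simply calls the level below it. */
    #include <stdio.h>

    void exit_apartment(void)      { printf("Exiting apartment...\n"); }
    void walk_to_supermarket(void) { printf("Walking to supermarket...\n"); }

    void go_to_supermarket(void) {
        exit_apartment();          /* each step is itself a subroutine */
        walk_to_supermarket();
    }

    void get_bread(void) {
        go_to_supermarket();
        printf("Finding and paying for bread...\n");
    }

    int main(void) {
        get_bread();               /* the top-level instruction */
        return 0;
    }

Call get_bread() once, and the whole pyramid of steps beneath it executes, just as my legs and hands do their work without Margit having to specify any of it.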

This is how computers are able to do such extraordinary things. But I would like to explain some of the detail to you, and the best way to explain it is, I think, again by example. When a user highlights a paragraph of text and clicks on the justify button, here is some of what happens: a subroutine perhaps called “Justify right-hand margin” kicks in. (What that means is that control of the CPU is turned over to this subroutine.) This is what a primitive form of the subroutine might look like (in actual fact, many other things are taken into account) in what programmers call pseudo-code (an informal preliminary way of writing instructions–or laying out an algorithm–which are later carefully translated by the programmer into a higher-level computer language such as BASIC, FORTRAN, Pascal, or C):

Justify right-hand margin: START

  1. First determine the printed width of the text by subtracting the left margin position from the right.
  2. Build a line of text by getting the input (paragraph) text a word at a time. Test to see that the length of the text is less than the printed width.
  3. Output the first word with no following space.
  4. Determine the length of the remaining text, the available space, and the number of word spaces (the same as the remaining words). Divide to get the target word space. (Be sure to take into account the spaces in the string.)
  5. Output the word space, and the next word.
  6. Return to STEP 4 if there is more to print.
  7. Return to STEP 2 for the next line, until no more lines are left.
  8. END.

Of course, FORTRAN or Pascal, or C programmers don’t spend a lot of time actually writing the code (the actual “text” of higher level computer languages is called “code” and the part of programming which takes an algorithm such as the one given above and translates it into the particular syntax of a given language such as C, is called “coding”) for such things, because once they have been written by someone (anyone), they are put into libraries of subroutines and can subsequently be used by anyone needing to (in this case) justify text. Such libraries of useful subroutines are widely available to programmers.
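To give you a feel for what that coding step produces, here is one way the pseudo-code above might look once written out in C. This is my own simplified sketch, not production text-justification code: it assumes a fixed-width font, at least two words, and a line that already fits within the target width.

    /* A simplified, hypothetical C rendering of the pseudo-code above:
       justify one line of words to a fixed width by distributing the
       leftover spaces among the gaps between the words. */
    #include <stdio.h>
    #include <string.h>

    void justify_line(const char *words[], int nwords, int width) {
        int chars = 0;
        for (int i = 0; i < nwords; i++)
            chars += (int)strlen(words[i]);

        int gaps = nwords - 1;                 /* assumes nwords >= 2 */
        int spaces = width - chars;            /* total spaces to distribute */

        printf("%s", words[0]);                /* first word, no leading space */
        for (int i = 1; i <= gaps; i++) {
            int pad = spaces / gaps;           /* base share for this gap */
            if (i <= spaces % gaps) pad++;     /* spread the remainder evenly */
            for (int j = 0; j < pad; j++) putchar(' ');
            printf("%s", words[i]);
        }
        putchar('\n');
    }

    int main(void) {
        const char *line[] = { "The", "quick", "brown", "fox" };
        justify_line(line, 4, 24);             /* pads the line to 24 columns */
        return 0;
    }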

Suppose this subroutine above were written in C. Now what happens to the C code? Who reads that? Well, the way it works is this: there is a program called a compiler, which takes the C code and each of its instructions, and breaks them down into a simpler language called assembly language. Assembly language is a limited set of instructions which can be understood by the hardware (the CPU) itself. It consists of instructions like ADD (a, b, c) which, on a given CPU might mean, “add the content of the memory location a to the content of memory location b and store the result in memory location c”. Different CPUs have different instruction sets (and therefore different assembly languages) but the same higher level language can be used on all of them. This is because a compiler for that type of CPU will translate the higher level language into the appropriate assembly language for that CPU. In this way, a program I have written in C to justify text can easily be ported over to a different computer (from a PC to a Mac, say) in the higher level language, without having to worry about how the lower levels accomplish their task. Are you with me? Reread this paragraph if you need to.
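To make this concrete, here is a tiny worked example. The assembly in the comment is invented for an imagined CPU, in the ADD (a, b, c) style just described; a real compiler for a real CPU would emit different but analogous instructions.

    /* What a compiler might do with one line of C. */
    int main(void) {
        int a = 2, b = 3, c;
        c = a + b;
        /* For some hypothetical CPU, a compiler might translate
           the statement above into assembly like this:

             LOAD  R1, a       ; copy memory location a into register R1
             LOAD  R2, b       ; copy memory location b into register R2
             ADD   R1, R2, R3  ; add the two registers, result into R3
             STORE R3, c       ; write the result back to memory location c

           A different CPU would need different instructions, but the
           C line itself would stay exactly the same. */
        return c == 5 ? 0 : 1;  /* exit code 0 means the sum was correct */
    }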

Actually, assembly language is itself translated by a program called an assembler into what is called machine language, which is simply a series of zeroes and ones that the hardware can “understand” and operate on. Now we get to the hardware itself. How does the hardware actually perform the instructions given to it? This time, let us start at the bottom of the hierarchy, the silicon itself, and build up from there. Stay with me now!

——————–

P-N Junctions

If certain types of impurities (boron, aluminum or gallium, for example) are added (called doping) to the semiconductor material, in this case silicon, it turns into what is known as P-type silicon. This type of silicon has deficiencies of valence electrons called “holes.” Another type of silicon which can be produced by doping it with different impurities (antimony, arsenic or phosphorous, for example) is known as N-type silicon, and this has an excess of free electrons, greatly increasing the intrinsic conductivity of the semiconductor material. The interesting thing about this is that one can place these two materials in contact with one another, and this “junction” behaves differently than either of the two types of silicon by itself. A P-N junction allows an electric current to flow in one direction, but not in the other. This device is known as a diode.

Transistors

A transistor is a device with three terminals (a triode–like the glass vacuum tubes of old): the base, the collector, and the emitter. In this device, the current flowing at the collector is controlled by the current between the base and the emitter. Transistors can be used as amplifiers, but more importantly in the case of computers, as switches. If you take two P-N junctions and combine them, creating a kind of sandwich, you get either an NPN junction or a PNP junction. These both then function as types of transistors, specifically bipolar junction transistors (BJT). In the case of the NPN junction type transistor, the N on one side acts as the emitter, the P is the base, and the other N is the collector. Refer to the diagram above and click here for more info about how exactly these work in terms of the underlying electronics.

Digital Logic Gates

Once we have transistors, we can do something very neat with them: we can combine them into what are known as logic gates. These are best explained by example. Imagine a device with two inputs and one output which behaves in the following way: if a voltage is applied to both inputs, the voltage is also present at the output, otherwise, the output remains at zero. (So if neither or only one of the inputs has a voltage present, the output is zero.) This is known as an AND gate, because its output is positive if and only if the first input AND the second input are “on.” (This “on” state of high voltage usually is used to represent the number 1, while no voltage, or low voltage is used to represent the number 0.) Similarly, the output of an OR gate is “1” if either the first input OR the second input OR both of the inputs are “1”. (Still with me? Good. All kinds of exciting stuff is coming up.) A NOT gate simply reverses a 1 to a 0 and vice versa. There are other logic gates, but we won’t bother with them because they can all be simulated by combinations of something called a NAND gate. This is just an AND gate followed by a NOT gate. In other words its output is 0 only if both inputs are 1, otherwise its output is always 1. (See the “truth table” at the right. A and B are the inputs and X is the output.)

The really cool thing here is that one can combine two of the transistors discussed in the previous section to form a NAND gate. (See the diagram at right to see how they are connected together.) And as I mentioned before, NAND gates can then be connected together in ways that can simulate any kind of logic gate.

Not only that, there are ways of connecting NAND gates together to implement any binary digital function for which we can supply a truth table, with as many inputs and outputs as needed. This is known as digital logic and we can use it to, for example, add two binary numbers, each consisting of some fixed number of zeroes and ones. As I am sure you know, we can represent numbers in any base (including our usual decimal numbers) as binary numbers, so this is a very powerful way of manipulating numbers. In fact we can do many amazing things with these types of gates, including evaluating any statements of propositional logic. This is really the conceptual heart of computing.
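You can convince yourself of this claim with a few lines of C. The sketch below (my own illustration, with ordinary integers standing in for high and low voltages) builds NOT, AND, OR, and XOR entirely out of a single NAND function, then combines them into a one-bit “half adder”, the first small step toward the binary addition just mentioned. It is a software imitation of the hardware, of course, not the hardware itself.

    /* Logic gates simulated in C: 1 = high voltage, 0 = low voltage.
       Every gate below is built out of NAND alone. */
    #include <stdio.h>

    int nand_gate(int a, int b) { return !(a && b); }

    int not_gate(int a)         { return nand_gate(a, a); }
    int and_gate(int a, int b)  { return not_gate(nand_gate(a, b)); }
    int or_gate(int a, int b)   { return nand_gate(not_gate(a), not_gate(b)); }
    int xor_gate(int a, int b)  { return and_gate(or_gate(a, b), nand_gate(a, b)); }

    int main(void) {
        /* A one-bit "half adder": the sum bit is XOR of the inputs,
           the carry bit is AND of the inputs. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d + %d  ->  carry %d, sum %d\n",
                       a, b, and_gate(a, b), xor_gate(a, b));
        return 0;
    }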

By the way, the standard digital logic symbol of a NAND gate is shown here on the right. (The two inputs are on the left, the output is on the right.)

Now, you should have at least a rough idea of how we can use bits of silicon to do things like add and subtract binary numbers by using voltages to represent zeroes and ones. But what do we do with the result? In other words, where do we store things? This brings us to the other major component of computing: memory.

Flip-Flops

A flip-flop is a device which can be used to store one bit (a zero or a one) of information. Can you guess how flip-flops are made? Yep, you got it: following our procedure here of building things from stuff we discussed in the previous section, of course, this time we combine NAND gates in ingenious ways to construct them.

There are various types of flip-flops. A flip-flop usually has one or two inputs, an output, and an input from a clock signal (this is why computers must have clocks–and it is the speed of these clocks which is measured when you are told that your laptop runs at, say, 1.9 GigaHertz, which means in this case that the clock signal flips and flops between 0 and 1, back and forth, 1.9 billion times per second). I will here describe a simple type of flip-flop called an SR (or Set/Reset) flip-flop. This is how wikipedia describes it:

The “set/reset” flip-flop sets (i.e., changes its output to logic 1, or retains it if it’s already 1) if both the S (“set”) input is 1 and the R (“reset”) input is 0 when the clock is strobed. The flip-flop resets (i.e., changes its output to logic 0, or retains it if it’s already 0) if both the R (“reset”) input is 1 and the S (“set”) input is 0 when the clock is strobed. If both S and R are 0 when the clock is strobed, the output does not change. If, however, both S and R are 1 when the clock is strobed, no particular behavior is guaranteed. This is often written in the form of a truth table. [See the table at right.]

So, for example, if I want to store a 1 as the output of the flip-flop, I would put 1 on the S input and 0 on the R input. When the clock strobes (flips up to 1) the flip-flop will set to the 1 state as the output. I know this sounds confusing, but just reread it until you are convinced it works. Similarly, I can reset it to the zero state by putting 1 on the R input and 0 on the S input. So how are these things constructed out of NAND gates?

I hereby present to you, the SR flip-flop in all its immense digital logic beauty:

[Diagram: an SR flip-flop built from NAND gates]

If you bought a computer recently it may well have come with one billion bytes of internal RAM memory. It takes eight flip-flops to hold one byte of information, and as you can see, it takes eight NAND gates to make this one basic flip-flop which will hold one bit in memory. That’s 128 transistors for a byte of data! Now you know why the original ENIAC computer, which functioned using vacuum tubes instead of transistors, filled a large hall. These days, we can put billions of transistors on a single small silicon chip. (There are ways to make this more efficient, my example is only for rough illustrative purposes.)

There are other flip-flops (such as the JK flip-flop) which eliminate the uncertain state of the SR flip-flop when both inputs are 1. There are other improvements and efficiencies which I won’t get into here. (That would be like getting into the choice of materials for head-gaskets while trying to explain how an internal combustion engine works.)
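If you would like to watch the set/reset behavior without wiring up any NAND gates, here is a small behavioral sketch in C. It models the truth table quoted above rather than the cross-coupled gate wiring, and its handling of the “both inputs 1” case is my own arbitrary choice, since the hardware guarantees nothing there.

    /* A behavioral C model of the SR flip-flop: one stored bit,
       updated only when the clock "strobes". */
    #include <stdio.h>

    static int state = 0;                 /* the single stored bit */

    void strobe(int s, int r) {           /* called once per clock tick */
        if (s == 1 && r == 0) state = 1;  /* set */
        if (r == 1 && s == 0) state = 0;  /* reset */
        /* s == 0 and r == 0: hold the old value.
           s == 1 and r == 1: not guaranteed in hardware; this toy
           model simply leaves the state unchanged. */
    }

    int main(void) {
        strobe(1, 0); printf("after set:   %d\n", state);  /* prints 1 */
        strobe(0, 0); printf("after hold:  %d\n", state);  /* prints 1 */
        strobe(0, 1); printf("after reset: %d\n", state);  /* prints 0 */
        return 0;
    }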

——————–

So, starting with just simple bits of silicon, we have seen how we build layer upon conceptual layer (there are armies of engineers and scientists who specialize in each) until we have a processor which can perform arithmetic and logical functions, as well as a memory which can hold the results (or anything else). This is pretty much it! These are the elements which are used to design a machine language for a particular CPU (like the Pentium 4, say). And I have already described the software layers which sit on top of the hardware. I am sure it is obvious that there is much more (libraries-full) to every part of this (for example, I have said nothing about what an operating system does as part of the software layers), but broadly conceptually speaking, this is about it. If you have followed what I have laid out, you now know how electrons zipping about on little pieces of silicon right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather-system, and the million other things computers can do.

I am running out of time and space in this column, so I will continue part II next time on April 17th. Look out for it then, and have a great week!

NOTE: Part II is here. My other Monday Musing columns can be found here.

Rx: Thalidomide and Cancer

Rock Brynner, 54, historian, writer, former road manager for The Band and for Bob Dylan, and son of the late actor Yul Brynner, knows both sides of the story of the drug thalidomide. In 1998, after suffering for five years from a rare immune disorder, pyoderma gangrenosum, Rock Brynner took thalidomide and went into remission. With Dr. Trent Stephens, he wrote “Dark Remedy,” a history of thalidomide. “I didn’t write the book because I had taken thalidomide,” Mr. Brynner said. He looks and sounds very much like his famous father. “I did it as a historian because this was a story that needed telling.”

The story of thalidomide is not only worth telling, it has gotten substantially more exciting even since 2001, when the interview with Rock Brynner (RB) was reported in the New York Times by Claudia Dreifus (CD). The unique anti-inflammatory properties of thalidomide have been harnessed for treating diseases as varied as multiple sclerosis, arthritis, leprosy, a variety of cancers, AIDS, and many other chronic and debilitating illnesses such as the pyoderma gangrenosum that Mr. Brynner suffers from. The story began in the 1950s. Since pathogens were considered to be the underlying cause of most human diseases, scientists were racing to find new antibiotics. At this time an ex-Nazi officer, Heinrich Mückter, became head of the research program for the company Chemie Grünenthal and, working with Wilhelm Kunz, obtained what looked like a promising compound. Unfortunately, this new drug, which they named thalidomide, had no effect as an antibiotic, anti-histamine or anti-tumor agent in rats and mice; in fact, they could not find a large enough dose to kill the animals. A logical conclusion would have been that the drug had no effect. The investigators, however, concluded that the drug had no side effects. The question, of course, was what the drug could be used for. Because its structure resembled that of the barbiturates, thalidomide was tried as a sleeping pill and was indeed found to be effective, eventually being sold in 46 countries as the safest of sedatives.

Reassured by its safety record in animals, and because of its anti-emetic effects, pregnant women began to take thalidomide freely as a cure for morning sickness. This is when its catastrophic effects began to surface, as infants were born with flipper-like limbs: hands and feet attached directly to the body without arms or legs. These children later became known as thalidomiders.

RB: As a historian, I look at thalidomide in its context. The 1950’s were a time of unquestioning infatuation with science. Science and technology had defeated the fascist threat. In the cold war, science was seen as protecting our lives. The thalidomide scandal exposed us for the first time to the idea that powerful medicines can destroy lives and deform babies. Before that, medical folklore held that nothing injurious could cross the placenta.

As many as 20% of adults taking thalidomide began to experience tingling and burning in fingers and toes, with signs of nerve damage. Tragically, 40,000 individuals suffered from peripheral neuritis, and 12,000 infants were deformed by thalidomide, 5,000 surviving past childhood, before the drug was finally withdrawn. It was later shown that the reason animal studies did not manifest any side effects was that thalidomide is not absorbed in rats and mice. Thanks to the heroic stance taken by Dr. Frances Kelsey at the FDA, thalidomide was never approved for use in the US, a stance for which she won the President’s Award for Distinguished Federal Civilian Service in 1962.

CD: Why did you take thalidomide?

RB: I was fighting for my life, as almost everyone who comes to thalidomide is. Everything else paled beside that. In the film version of Dostoyevsky’s “Brothers Karamazov,” Dmitri Karamazov wakes a pawnbroker, who says to him, “It’s late.” To which Dmitri answers, “For one who comes to a pawnbroker, it is always late.” Well, I was at the pawnbroker’s, and it was late. For five years, I had battled a mysterious, rare disease, pyoderma gangrenosum, where huge wounds on my legs kept growing larger and wouldn’t heal. I had taken, at different times, cortisone, methotrexate, cyclosporine; none worked for long. My immune system was tearing up my skin anywhere I had a wound. Thinking practically, I was planning to end my life because, if we couldn’t stop this, all my skin would be eaten away. Then my dermatologist mentioned anecdotal reports from Europe that thalidomide had been effective with pyoderma. I went to the medical library and read all I could. The rationale made sense: I had this autoimmune condition, in which one immune element, T.N.F.-alpha, was running amok in me for reasons unknown. Thalidomide represses that T.N.F.-alpha response. Fortunately, thalidomide did work for me.

In 1964, Dr. Jacob Sheskin, a Lithuanian Jew, was working with lepers in Jerusalem when he saw an extremely debilitated patient suffering from the erythema nodosum leprosum (ENL) type of leprosy, who had been unable to sleep due to severe pain. Dr. Sheskin found an old bottle of thalidomide in his medicine cabinet and gave two tablets to the patient, who then slept better than he had in months. After another two tablets the following night, his lesions began to heal, and it is to Dr. Sheskin’s credit that he made the association between the patient’s dramatic improvement and thalidomide. He had to contact Mückter to obtain thalidomide for a larger study. Eventually, the World Health Organization (WHO) confirmed a total remission of the disease in 99% of the thousands of lepers treated in 52 countries. This is how and why, despite the sickening medical catastrophe associated with thalidomide, this drug never disappeared completely and came to be approved for the treatment of ENL in the USA by the FDA in 1998.

CD: A personal question. You are the son of Yul Brynner. As I was reading your book, I wondered if it was difficult to form an identity that was clearly your own.

RB: Well, I’ve had a separate identity for some time now. At one time or another I’ve written and starred in a one-man show on Broadway, earned an M.A. in philosophy and a Ph.D. in history, was bodyguard to Muhammad Ali, road manager for The Band and Bob Dylan, and computer programmer for Bank of America. I’ve also written six books. My latest, about the subjective experience of time, is going out to a handful of publishers next month. These interests were all driven by my voracious curiosity more than a search for identity. Yes, it’s difficult for the children of iconic figures to establish independent identities. But with all the suffering in this world, I wouldn’t shed too many tears for those who had privileged youths. I had wonderful parents, especially through childhood. Later on, they both went a little crazy at times.

Thalidomide has now been tried in more than 130 human diseases, and at least 30 different mechanisms of action have been ascribed to the drug. Yet the precise manner in which it exerts its anti-neoplastic effect remains unknown. In 1991, Dr. Gilla Kaplan of Rockefeller University in New York showed that TNF levels were very high in the blood and lesions of leprosy patients and that thalidomide reduced these levels by as much as 70%. In addition, Dr. Judah Folkman at Harvard Medical School showed that thalidomide can arrest the formation of new blood vessels by shutting off some necessary growth factors. The teratogenic effects on the fetus, which can occur following the ingestion of a single tablet of thalidomide at the wrong time (a 7-10 day window during the first trimester of pregnancy), turn out to stem from this same ability to stop the formation of new blood vessels, or neo-angiogenesis. Finally, the drug also has a variety of effects on the immune system.

I have written previously about how cancer cells alter their microenvironment in such a way that it supports their growth at the expense of that of their normal counterparts. Such alterations may involve angiogenesis, production of TNF, and abnormalities of immune regulatory cells, some of which are also the source of TNF. Thalidomide is a drug that is capable of affecting all three of these abnormalities in the malignant microenvironment, as well as having an effect on the cancer cells directly. True to form, however, the introduction of thalidomide into cancer therapy did not happen as a result of logical planning, but rather dramatically, as the result of one woman’s persistence. The wife of a 35-year-old patient suffering from multiple myeloma, a hematologic malignancy with evidence of increased blood vessels in the bone marrow, was frantically searching for ways to save her husband. During her research, she came across Dr. Folkman, who advised her to try thalidomide. She convinced her husband’s oncologists in Little Rock to do so. Although the patient himself did not benefit from the drug due to the advanced stage of his disease, several other patients treated subsequently did.

At the same time, our group had been investigating another hematologic malignancy, the pre-leukemic disorders called myelodysplastic syndromes (MDS). We had demonstrated that the primary pathology underlying the low blood counts in this disease is an excessive death of bone marrow cells caused by high TNF levels. In addition, there is also evidence of marrow neo-angiogenesis in MDS. We hypothesized that thalidomide could be a useful agent in this disease, and in 1998, treated 83 MDS patients, and showed that a subset responded, the majority of responders going from being heavily transfusion dependent to being transfusion independent.

CD: Do you think there will ever be a time when thalidomide stops being such a charged word?

RB: No. Because of its threat, everyone is working hard to keep the threat of thalidomide well known, especially Randy Warren, a Canadian thalidomide victim. He was the one who insisted that a picture of a deformed baby be on every package, that patients be obliged to watch a tape of a victim speaking and that the name never be changed or disguised with a euphemism. First and foremost, thalidomide deforms babies. Second, remarkably, it can save lives and diminish suffering. But everyone is working to eliminate thalidomide. As long as it exists, there’s a threat.

Thankfully, a safer substitute has now been developed. This drug, called Revlimid, which is less toxic and more potent than the parent drug thalidomide, is proving to be highly beneficial to patients with MDS and multiple myeloma. Most importantly, there appear to be no untoward effects on the growing embryo. In a surprising twist, MDS patients who have a specific abnormality affecting chromosome 5 appear to be especially responsive to Revlimid, and the drug has recently received FDA approval for use in this type of MDS. Maybe thalidomide can finally be retired forever. As Randy Warren said, “When that day comes, all those involved in the suffering can gather together for thalidomide’s funeral.”

Recommended reading:

  • Stephens T, Brynner R. Dark Remedy: The Impact of Thalidomide and Its Revival as a Vital Medicine. Perseus Publishing, Cambridge, MA, 2001.
  • Raza A et al. Thalidomide produces transfusion independence in long-standing refractory anemias of patients with myelodysplastic syndromes. Blood 98(4):958-965, 2001.
  • List AF et al. Hematologic and cytogenetic (CTG) response to lenalidomide (CC-5013) in patients with transfusion-dependent (TD) myelodysplastic syndrome (MDS) and chromosome 5q31.1 deletion: results of the multicenter MDS-003 study. ASCO, May 7, 2005.
  • Raza A et al. Lenalidomide (CC-5013; Revlimid™)-induced red blood cell (RBC) transfusion-independence (TI) responses in low-/int-1-risk patients with myelodysplastic syndromes (MDS): results of the multicenter MDS-002 study. 8th International Symposium on Myelodysplastic Syndromes, May 12-15, 2005, Nagasaki, Japan.

All of my Rx columns can be seen here.

monday musing: the radetzky march

It’s been noticed by more than one person that Walter Benjamin had a melancholy streak. But Benjamin’s melancholy has often been misunderstood as a form of nostalgia, a lament for things lost to the relentless march of history and time. It’s true, of course, that some melancholics are nostalgic. Nothing prevents the two moods from going together. But Walter Benjamin’s melancholy wasn’t that kind at all. He happened to think, surprisingly enough, that melancholy is at the service of truth.

That’s quite a claim. It sounds both grand and unapproachable. For Benjamin, though, it was almost a matter-of-fact proposition; it was so intuitive to him, it came as second nature. Benjamin thought that melancholy is at the service of truth because he thought that things, especially complicated things like periods of history and social arrangements, are hard to understand until they’ve already started to fall apart. The shorthand formula might be: truth in ruins. The type of person who sifts through ruins is the melancholic by definition. Such a person is interested in the way that meaning is revealed in decay. In a way, the Benjaminian melancholic is darker even than the nostalgist because the nostalgist wants to bring something back, whereas the melancholic is best served by the ongoing, pitiless work of death.

Benjamin was always fond of Dürer’s engraving, Melencolia. In Melencolia, a figure sits amidst discarded and unused tools and objects of daily life. It appears that the world in which those tools made sense, the world in which they had a purpose, use, or meaning, has somehow faded away. The objects lie there without a context, and the melancholic figure who gazes at them views them with an air of contemplation. The collapse of the world has become an opportunity for reflection. Truth in ruins.

It’s impossible not to think that Walter Benjamin was so fascinated by melancholy, ruins, and truth because he, himself, had come of age in a period where a world was passing away. For Central Europeans (and to a less extreme extent, the West in general), the end of the 19th century and the beginning of the 20th was the collapse of an entire world. In one of the more moving and epic sentences ever written about that collapse as it culminated in the Great War, Benjamin once penned the following: “A generation that had gone to school in horse-drawn streetcars now stood in the open air, amid a landscape in which nothing was the same except the clouds and, at its center, in a force field of destructive torrents and explosions, the tiny, fragile, human body.”

***

All of this is by way of a preface to the fact that I just finished reading Joseph Roth’s amazingly brilliant, beautiful, sad novel, The Radetzky March. The Radetzky March follows the fortunes of three generations of the Trotta family as the 19th century winds down into the seemingly inevitable, though nevertheless shocking, assassination at Sarajevo. As the critic James Wood notes in his typically powerful essay “Joseph Roth’s Empire of Signs,”

In at least half of Roth’s thirteen novels comes the inevitable, saber-like sentence, or a version of it, cutting the narrative in two: ‘One Sunday, a hot summer’s day, the Crown Prince was shot in Sarajevo’.

Roth is always writing through that moment, through the shot that cracked out on that hot summer’s day. As Wood points out, Roth became the self-appointed elegist of the empire that had come to an end at Sarajevo. The men of the Trotta family are bound to that empire in a descending line of meaninglessness and helplessness that itself tracks the dissolution and collapse of the world within which they lived. The very song, “The Radetzky March”, becomes a mournful ruin in sound. At the beginning of the novel it can still stand as a symbol for the ordering of life that holds the world of the Austro-Hungarian Empire together. By the end of the novel, it is a relic from a bygone age, and is the last thing that the youngest member of the Trotta family hears as he is gunned down ignominiously in the brutal and senseless fighting that opens the First World War.

What a profoundly and beautifully melancholic work. All the more so because it is melancholy in the service of Benjamin’s truth and not in the service of nostalgia. The Radetzky March is about how meaning operates, about how human beings come to see themselves as functioning within a world that coheres precisely as a world. In the end, Roth is essentially indifferent as to whether that world was a good or a bad one. Like all worlds, it can only cohere for so long. Instead, he focuses on laying bare its nature and its functioning in the moments where it began to break apart. Here, he is like the melancholic figure in Dürer’s engraving. The ‘tools’ of the Austro-Hungarian Empire lay around him as they’ve been discarded by history while Roth sifts through the ruins, contemplating what they were and how they worked.

That is something that Wood gets a little bit wrong, I think, in his otherwise brilliant essay. Wood takes the elegiac moments in Roth’s writing, which invariably come from the mouths of those serving the Empire, as words of longing and approval that are endorsed by Roth. But there is something more subtle and complicated going on. Wood touches on it briefly in his comment about Andreas, an organ grinder in Roth’s Rebellion. Wood writes, “It is the empire that gives him authority to exist, that tells him what to do and promises to look after him. In Roth’s novels, marching orders are more than merely figurative. They are everything.”

To put it in Kantian terms for a moment, that is exactly what Roth is doing, showing the Empire as ‘the everything’, the transcendental horizon within which human beings understand themselves and their relations to everyone else. Like Walter Benjamin, Roth has adopted this broad transcendental framework while jettisoning the strict a priori method that made Kant’s transcendental method a-historical and purportedly universal. Roth has come to see, indeed witnessed with his own eyes, that transcendental horizons of meaning are themselves historical; they fade away, they fall into ruins, and are reconstituted as something new.

It’s kind of interesting in this quasi-Kantian vein to reflect that Roth takes a marching song as his symbol for the coherence of the transcendental horizon of meaning. Kant himself started with space and time, noting rather reasonably that without space and time, you are without a framework for apprehending anything at all. Things have to be ‘in’ something, transcendentally speaking, and space and time are the broadest, most abstract categories of ‘inness’ that one is likely to find. Since Kant was after the broadest and most universally applicable set of rules that govern knowledge of the external world, it seemed a lovely place to start.

But Kant wasn’t much of a melancholic. The Sage of Königsberg thought that he could provide his set of categories for the understanding and that would be that. The content gets filled in later. History is always a posteriori, a matter of particulars. Roth’s transcendental ground, by contrast, is shot through with content and history. It’s a march, a specific song from a specific time and place. But it is no less transcendental for being so. For what is a march but a means for ordering space and time? The Radetzky March is thus more than a symbol for the ordering of the Austro-Hungarian world: it is part and parcel of that very ordering. It’s a transcendental object made palpable and tangible. And it’s one that gives up its transcendental secrets precisely as it fades into ruin. As Benjamin once wrote, “In the ruins of great buildings the idea of the plan speaks more impressively than in lesser buildings, however well preserved they are.” That’s the method of the melancholic, the historical transcendentalist. It’s fitting that it was put into practice at its highest level by a novelist chronicling the end of his world.

Monday, March 27, 2006

Temporary Columns: Islam, the West and Central America

I recently attended a conference on Central American peace processes in Toledo, organised and sponsored by the Project on Justice in Times of Transition and hosted in Spain by the Centro International Toledo para La Paz. The conference brought together many of the key participants in the peace processes in Central America from the mid-80s to the early 90s. They included ex-Presidents Vinicio Cerezo of Guatemala and Jose-Maria Figueres of Costa Rica; Joaquim Villalobos, former military commander of the guerrilla Frente Marti para Liberacion Nacional; General Joaquin Cuadra, former head of the Sandinista Army; General Julio Balconi, former head of the Guatemalan Army; and Sir Marrack Goulding, who was Under-Secretary General for Political Affairs at the United Nations at the time, among others. The conference was both a retrospective exploration of the Central American peace processes and an effort to glean lessons for efforts at making peace in other places in the world.

The Central American peace processes of this period had a significant impact on how we conduct peace processes in the world today. Many developments that have become commonplace in peace processes around the world were refined, if not first tried, in Central America. They range from the widespread involvement of the United Nations on a regional basis, and the development of human rights monitoring, to the setting up of Truth and Reconciliation Commissions and programs for the Disarmament, Demobilisation and Reintegration of former combatants. However, I want to emphasise another element of the Central American peace processes of this period – their contribution to attenuating, if not ending, the Cold War.

While the Central American region shaped the context in which each country – whether Nicaragua, El Salvador, or Guatemala – tackled its civil conflict, the Cold War shaped the context in which the region dealt with its problems. But Central American leaders searching for peace were not daunted by the global divide we then called the Cold War. They did not feel they had to wait for the Cold War to end to resolve the conflicts in their region, either individually or regionally. The process began under the leadership of President Oscar Arias of Costa Rica, who first convinced other leaders in the region that they all needed stability to progress economically, and then convinced the United States to give diplomacy and negotiation a chance. As Arias put it in an interview with El Pais (16 March 2006): “20 years ago Central Americans were killing each other. The superpowers provided the arms and we provided the dead. …After the defeat in Vietnam the US needed to win a war. They wanted to get rid of the Sandinistas from power in Nicaragua by military force. I told the US that is not the solution to differences, rather what is necessary is diplomacy and negotiation.”

So Central Americans managed to make peace in their own countries, if not contribute to the end of the Cold War, by demonstrating how particular conflicts, seen as sites of political and ideological contestation on a global scale, could be recast as conflicts with their own dynamics that required their own particular sets of solutions.

Today we are being asked to choose sides in yet another great global divide – between the West and Islam. We are also told that Iraq, Israel-Palestine, Afghanistan, Syria, Lebanon, Indonesia, Egypt, Saudi Arabia, and even Europe, among many other places, are sites of great contestation between these two value systems. One approach is to view these as indeed sites of great contestation between Islam and the West, pick the side you are on, and proceed to fight it out with the other side in each particular place. Central America suggests a different approach. You do not have to deny the presence of such a “global divide” to tackle each problem separately. And tackling each problem separately may help resolve the global divide.

So Iraq then becomes less a place where the best of the West is contesting the worst of Islamic radicalism than a country undergoing a triple transition – from Saddam Hussein’s Baath party dictatorship to multiparty democracy, from a Sunni-dominated state to a multiethnic one, and from US occupation to self-government. And addressing each of these transitions has less to do with where we stand on the Islam-West divide than with what techniques we can use to address them and what lessons we have learned from other places that can help us do so.

Similarly, the Israeli-Palestinian problem becomes the challenge of ending the occupation of a people and installing a functioning democracy through which they can govern themselves, while developing a viable economy that will sustain their lives. It is not a place where an outpost of the West is facing Islamic hostility. Saudi Arabia can be viewed as the challenge of transitioning from a theocratic kingdom to a more plural state. And Afghanistan becomes the challenge of restoring basic institutions that can function in a country that has been ravaged by war and flattened by bombs for more than 25 years. Syria and Egypt are by contrast straightforward: they require a process for electing a representative government. The issue of Islam in Europe becomes how to include marginalized immigrant communities – people who first came as guest workers but now feel that they are neither guests nor workers – into the socio-economic and political mainstream of a number of countries.

All of these challenges are familiar to us, not because we have always been successful in addressing them, but because we have dealt with them before in other parts of the globe. By dealing with the parts (democratic transition, immigration, pluralism, building institutions) of the divide between Islam and the West, we need not deny that there may be a whole to it as well. We need only deny that the whole is clearly greater than the sum of the parts. So we do not always need to address the whole in order to tackle each part. This is one important lesson we can learn from the Central American leaders of the 80s, whether government or rebel, who took on another global divide, part by part.

Oscar Arias, Nobel Peace Prize winner, has just been re-elected President of Costa Rica after 20 years. He is promising a “Costa Rican Consensus” that will contribute to steps to end poverty and lead to military disarmament worldwide. Given his contribution to peace in Central America and to the end of the Cold War, I would like to add one more thing to his agenda – bringing an end to the new global divide between Islam and the West, part by part.

Talking Pints: Iraq and the Law of (Misleading) Averages

An oft-heard remark about Iraq today (at least where I hang out) is something along the lines of “Well, it may be bad over there, but at least they (the Iraqi people) are better off than they were under Saddam.” Such a response strikes me as simultaneously reasonable (it may be true) and false, insofar as it may be little more than the ‘last line of defense’ justification of many folks for what is increasingly seen as a losing proposition. Bush’s recent declaration that finishing the war will effectively be ‘someone else’s problem’ seems only to strengthen the latter interpretation. But let’s take the claim of “at least they are better off than they were under Saddam” seriously for a moment. For if it is true, then one might hope that the future is not so bleak after all.

There seem to be (at least) two issues tied up in the statement that “they are better off than they were under Saddam.” First, that the ‘quality of life’ of the Iraqi people is, on average, better, with standard indicators such as per capita GDP, and the number of people receiving basic services such as electricity, moving in the right direction since the invasion. Second, that regardless of the quality of life, one’s actual life is better preserved today, despite the violence that seems ever present, than under the old regime. While both claims seem appealing, the problem with each, I suggest, is that they tend to rest, at least implicitly, on calculations of ‘averages’. Unfortunately, focusing on such indicators and sampling for averages to make meaningful comparisons may hide more than it illuminates.

Take the first claim, that quality of life indicators are (on average) moving in the right direction. If one examines the available statistics, then the picture presented seems to back up this claim. Regarding GDP growth, the US Department of State notes that a year before the invasion “Iraq’s per person income had dropped from $3,836 in 1980 (higher than Spain at the time) to $715 [in 2002] (lower than Angola),” which is pretty poor by any standard. In contrast, in 2005 the State Department reports that “Iraq’s GDP is projected at $29.3 billion…up from $18.4 billion in 2002.” Moreover, “the IMF projects Iraq’s economy to grow by 10.4% in 2006.” Regarding electrical power as another indicator of progress, the same Department of State report notes that “more than 2,000 megawatts (MW) of generation capacity have been added or rehabilitated. One hundred fifty planned and ongoing projects worth $800 million will add more than 600 MW of additional generation capacity and improve the distribution of power to more than 2.1 million people.”

These are indeed successes, but do they mean that Iraqis are better off, on average, since the invasion? If it is at least plausible that the sanctions placed on Iraq by the UN from 1991-1999 lowered GDP by as much as 75 percent, or some equally large amount, then the recovery to the current level of per capita GDP of $3,400 seems somewhat less impressive. Moreover, confusion abounds as to what the real figures for Iraqi GDP actually are. For example, the CIA estimates the per capita GDP of Iraq in 2001 at $2,500 and in 2003 at $1,600, which makes the 2002 figure of $715 used by the State Department seem rather deflated. Regardless, similar to the ‘miracle of Reaganomics’, if you throw yourself out of a building and break both your legs (in 1980-82), the ability to crawl away on your elbows (in 1984) could be considered a success -– on average.
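For what it’s worth, the base-effect arithmetic can be made concrete. A rough sketch, using only the State Department figures quoted above and the IMF growth projection cited earlier:

```python
import math

pre_sanctions = 3836   # per capita GDP, 1980 (State Dept. figure)
post_sanctions = 715   # per capita GDP, 2002 (State Dept. figure)

fall = 1 - post_sanctions / pre_sanctions
print(f"fall: {fall:.0%}")               # -> fall: 81%

rebound = pre_sanctions / post_sanctions - 1
print(f"rebound needed: {rebound:.0%}")  # -> rebound needed: 436%

# Even at the IMF's projected 10.4% annual growth, recovery takes:
years = math.log(pre_sanctions / post_sanctions) / math.log(1.104)
print(f"{years:.0f} years just to get back to 1980")  # -> about 17
```

A fall and a recovery of the same percentage size are not symmetrical, which is why growth from a collapsed base can look spectacular while leaving people far worse off than before.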

Regarding electricity supply, the recent growth in Iraqi generation capacity has to be seen not just against recurrent insurgent attacks, but against the decrepit state of Iraqi infrastructure at the time of the invasion (due in part to sanctions), and the orgy of looting that eviscerated what was left of that infrastructure in the first six weeks after the invasion. Seen in this light, what restored power there is may be far less than is needed, with some estimates arguing for 6 gigawatts of new capacity to meet current demand, in the context of a projected current shortfall of 1.1 gigawatts despite the new capacity noted above. Electricity supply, then, may be higher today than it has been since the mid-1990s – on average – but that’s not saying much.

Third, what is making Iraq better off is not its oil revenues, which are up but wholly insufficient to rebuild, or the export of dates (apparently the export success of the past few years), but massive foreign (US) aid. Given the falling popularity of the war in the US, and long-run costs of the war projected to be as high as two trillion dollars, it is unlikely that this ‘development by aid’ can be supported over the long term. Given all this, even if the optimistic statistics and projections are correct, which they are unlikely to be (given the bogusness of most forecasting), it is hard to make the case unambiguously that Iraqis are materially better off, on average, than they were under Saddam. Specifically, since the current recovery is contingent upon unlimited foreign largess that can disappear rather quickly, thus skewing the average quite drastically, it is not clear that the average a year or two from now will be anything like the average today.

What then of the other claim, that physical security is better now than under Saddam? Here the picture is equally complex. If we are to compare the threat to individual life today to that under the old regime, we must remember when Saddam et al. committed the majority of their murders, rather than averaging over the life of the regime. To average over the whole regime would be like averaging deaths in the Soviet Union over 70 years, and thus blaming Stalin and Gorbachev equally. The problem is that ‘average’ deaths ‘now’ versus ‘then’ are a problematic indicator for comparison (even if, unlike in the USSR, Saddam was in power for the whole period).

Consider that the major ‘killing periods’ of Saddam’s regime were the ‘Anfal’ campaigns against the Kurds in the late 1980s, where it is estimated 180,000 were killed, and the 1991 revolt, where some 60,000 were killed. Add in the 500,000 Iraqis slaughtered during the Iran-Iraq war, plus the estimated 50,000 or so people murdered at other points, and you end up with about 790,000 people killed. [Photo shows Kurdish victims of the poison gas attack at Halabja.]

Compare this to the numbers given in The Lancet study of 2004 (or the UN study of 2005), where it was estimated that 100,000 had died as a result of the invasion (see Slate), or the (more documented and less estimated) count by Iraq Body Count of between 33,000 and 38,000 deaths since the invasion, and it seems quite simple to conclude that one’s safety today is greater than it was under the old regime. Indeed, one estimate places the old regime death rate at “between 70 and 125 civilian deaths per day for every one of Saddam’s 8,000-odd days in power.” (In contrast, see the Bode Miller problem I discussed last month.) As such, things may, on average, be better today than they were in the 1980s or 1990s, but we should not expect anyone unfortunate enough to be living in Iraq today to calculate their position relative to the average and be thankful for it.
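To make the ‘lumpy versus constant’ point concrete, here is a toy sketch, with numbers only loosely keyed to the totals above: two utterly different histories of violence can share exactly the same daily average.

```python
DAYS = 8_000  # "Saddam's 8,000-odd days in power"

# Lumpy: long stretches of calm punctuated by short, massive spasms.
lumpy = [0] * (DAYS - 10) + [79_000] * 10

# Steady: the same total death toll spread evenly over every day.
steady = [790_000 / DAYS] * DAYS

print(sum(lumpy) / DAYS)    # -> 98.75 deaths per day
print(sum(steady) / DAYS)   # -> 98.75 deaths per day: identical averages
```

The average cannot tell the two apart, which is precisely the problem: what it is like to live through each of them differs radically.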

The data on violence in Iraq, especially regarding political murders, are notably lumpy. Rather than there being an average number of murders per day by Saddam that people could expect, there were brief periods of intense violence punctuating long periods of relative inactivity. In contrast, what we seem to have in Iraq since the occupation is a constant and increasing level of violence, even if the average rate is lower. Which situation, then, is harder to deal with? Reflecting on my wife’s family’s experience made me think about this question.

My wife was born and raised in East Germany, Stasi and all. Indeed, her uncle spent a few years in prison for going against the regime. However, the rest of her family did not. The reason was simple. Dictatorship or not, the East German regime had certain rules and norms that were obvious to all its citizens. If you obeyed these, you had a Stasi file like everyone else, but probabilistically, if you did not break the rules or violate the norms, you were left alone. There was little about the lived environment that was radically uncertain. Saddam’s regime may have been far more vicious than the former East German communists, but given the lumpiness of the data on who was killed and when, it may have been a reasonable assumption that if you were neither in the army in the 1980s, nor a Kurd or a Southerner in revolt in the early 1990s, you had a reasonable chance of being left alone.

But what about the situation today? In a previous post I argued that social scientists’ predictions are under-determined by the facts and over-determined by our theories (old post). Something similar may affect normal people as well as social scientists. The world is an immensely complex place and we tend to assume it to be a far more stable place than it is. We do so because the stability that we take for granted is itself a social product, the result of intersubjective norms, institutions, rules etc. that we reproduce in our daily routines.

Iraq today may be far less bloody and far more wealthy – on average – than it was under Saddam, but it is also far more random. Given such a constant (as opposed to lumpy) level of random violence, old certainties no longer apply, old institutions no longer operate, and old norms are routinely violated. People (Iraqi or American) do not deal well with such environments and try to reduce this uncertainty by acting to protect themselves against it. In doing so they promulgate new norms (usually based on old scripts), such as re-imagining group membership along, for example, ethnic or religious lines, as seems to be happening in Iraq. Doing so may of course increase other agents’ uncertainty and thus ratchet up the violence, since for every new ‘in-group’ there has to be a corresponding ‘out-group’, but it is a coping mechanism nonetheless.

Overall then, conditions in Iraq may be ostensibly better today than they were in the past, on average, but they may feel worse, and that’s what counts. Even though the body count is lower, even though there is more electricity, and even if there is more wealth in the country, such factors, and focusing on such factors, may be less important to understanding where Iraq is heading than we think. The Iraqi people “may be better off now than they were under Saddam”, but if it doesn’t feel any better to the people on the ground, we should not expect fewer bodies and more wealth –- on average –- to really make a difference.

Sojourns: True Crime

An eighteen-year-old girl drinks heavily at a bar. She leaves with three boys about her age. No one ever sees her again. Her body is never found. Such is the ordinary stuff of crime across the world: a victim and her suspects caught in a prosaic mixture of sex and violence.

Add to that a few elements I’ve left out of my description, however, and we have the stuff of media sensation and obsessive interest: A blond American girl goes to Aruba to celebrate her high school graduation. The night before she is to fly back, she drinks heavily at a local bar. She gets in a car with three locals. She is not at the airport the next morning. Her body is never found.

The facts of the crime remain the same. The temper of the response alters dramatically.

Almost a year later, Natalee Holloway still commands our attention. Small developments in the case are breaking news. The characters are all well known: the grieving and irate mother; the coddled major suspect; the various local authorities. Several have given long interviews on national television; all have lawyers, perhaps one or two have secured agents. As with the runaway bride and Hurricane Katrina, the story itself has become a story, an occasion for the media to examine the way in which it packages and serves up the news. Why do we care about one girl’s disappearance when so much of graver consequence happens all the time? Why Natalee Holloway? 

One answer to this question has been the much-discussed “missing white girl syndrome.” A blond and attractive teenager disappears and all sorts of conscious and unconscious associations are made. Natalee Holloway swiftly turns from a particular individual, with thoughts and desires and experiences of her own, to an iconic vision of American girlhood: blond, young, pretty, and almost certainly dead. Like many things, our icons are easier to see in their twilight. Natalee is somehow blonder in repose. And so the story isn’t really about one person’s disappearance. It is about everything that is conventionally American thrown into horrible distress, apple pie tossed to the wolves.

Lurking below the interest in iconic American girlhood is something darker and less easy to talk about, at least on prime time cable. Natalee may or may not have been raped. She may or may not have had consensual sex with one, two, or three boys. One of them licked Jello shots off her stomach earlier in the evening. This much is known. She left a tourist bar named “Carlos ‘n Charlie’s” at around 1:30 am on May 30th, 2005. Her last recorded act was to get into a car with the three suspects. After that, we are left to our bleakest imaginations. In other words, the Natalee story lingers in part because of its strong undercurrent of sex and mayhem.

Natalee’s blondness and our penchant for erotic mayhem are not so separate. They are two sides of the media frenzy that has become the Natalee Holloway story. We turn girls into icons and then like to think of them in the most degraded of circumstances. Even a casual observer of trends in recent pornography knows this all too well. Prurience and voyeurism are intrinsic to this case and central to its apparently unending allure. Our white girl has not simply gone missing. She is now at the dimmer reaches of what we can speak about and what we can imagine. The combination is toxic and intoxicating. 

To these associations, I would add one more element that is essential to the Natalee phenomenon. The crime remains without a body, some of the most basic facts available only through conjecture and inference. In this way it is both a perfect and a flawed crime story. The same public that watches Greta Van Susteren incessantly dissect the case on On the Record tunes in regularly to CSI, where virtuoso experts discover incriminating evidence on or about the corpses of victims. But Natalee’s body is still out of reach of criminology and forensic science. Nothing is resolved or certain. Natalee did or did not have sex, was or was not raped, died by accident or met foul play. According to the latest version of events, she may have expired from an overdose of alcohol and drugs. Without a body, there is no way to know for sure.

Natalee still holds her secrets. Irresolution and uncertainty allow for the infinite variety of crime-narratives to play themselves out—among talking heads, in our imaginations. Yet irresolution and uncertainty also frustrate an audience that expects closure. We have grown used to bodies that talk to the police and doctors and scientists. The Natalee Holloway story places her body at the center of events—she was or was not inebriated, did or did not have sex, met or did not meet with violence—yet renders it disturbingly mute.

We may never hear Natalee speak. What we know is this. An eighteen-year-old girl drank heavily at a bar. She left with three boys about her age. No one saw her again. Her body has yet to be found.

Ocracoke Post: Vollmann Dreams of Joseph?

Scott Esposito, author of the excellent literary web log Conversational Reading, has spread the word that the next volume of William T. Vollmann’s Seven Dreams series of novels might take on the subject of Chief Joseph:

Vollmann fans will be giddy to hear…that he’s shortly to begin work on the next dream in the Seven Dreams series. He said it will center around the life of Chief Joseph and that he’ll be playing with the chronology, perhaps telling the story backwards. He remarked that this may mean that the story will have a happy ending, something Vollmann stories typically don’t have.

Giddy, indeed. I hope nobody will object to a few notes giving some historical background, which I happen to be interested in at the moment because of a projected essay on a parallel subject I have been developing with a friend. To be clear: I know nothing about the upcoming novel whatsoever, other than that, if the report is accurate, I look forward to reading it. Since the larger meta-narrative of the Seven Dreams series involves the history of the clashes between Native Americans and their white colonizers since the settlement of the New World, it does seem logical that Joseph could become a central figure. His tragic heroism in attempting to save his Nez Perce people from ethnic cleansing in the 1870s is a story American schoolchildren may remember. Evicted from their homeland in the Wallowa valley of what is now Oregon, they attempted to flee to Canada to avoid being forced on to a reservation. Pursued by a much larger force of U.S. Army regulars under the command of the one-armed general Oliver O. Howard, Joseph managed to elude capture for around 1,000 miles through extremely shrewd tactics and maneuvers.

The definitive history of the subject is The Nez Perce Indians and the Opening of the Northwest, written by Alvin M. Josephy, Jr., back in 1965 (Mariner Books reissued the complete and unabridged book in 1997 as a paperback). One of the more remarkable episodes in Josephy’s book involves a photograph taken by William H. Jackson before the 1877 war of a “half-blood with blue eyes and light hair,” who the Nez Perce claimed was the son of William Clark (of Lewis & Clark, the idea being that Clark fathered a son on his travels through the area). Later, when Joseph and the other “non-treaty” remnants who had refused the destruction of their homeland were finally captured in Montana, some forty miles from the Canadian border, by troops under Nelson Miles, they found an old man who was probably the same light-haired person in Jackson’s picture. The story of the photograph, which now resides in my hometown at the Iconographic Collection of the Wisconsin Historical Society, neatly encapsulates the drift down into the abyss of unnecessary and largely unprovoked violence that took place when white settlers replaced friendlier explorers in the Nez Perce homeland. The great tragedy of the Nez Perce was that they, among all the tribes of the West, were the most consistently friendly and accommodating allies of the whites.

Another remarkable dimension of the story is the role of the villain of the piece, General Howard, the man tasked with hunting Joseph down. (Because the Nez Perce had women and children with them, Howard today would properly be called a war criminal.) Howard might prove to be an ideal vehicle for Vollmann’s continual exploration of the bad conscience of white mythology. An abolitionist Civil War general who had atrocious luck in battle – losing his arm in the accidental battle of Fair Oaks, routed by Jackson’s surprise attack at Chancellorsville, and given the worst troops in the worst field position on the first day of Gettysburg – Howard was reliable enough to rise to become one of Sherman’s key subordinates during the March to the Sea. He was one of the few Northern military men to write about the suffering inflicted on the civilians of Atlanta, particularly the women (this in an article in Battles and Leaders of the Civil War). After the war, he helped found Howard University for African-Americans before being posted to the West. Howard, in fact, seemed to have a paradoxical streak in his character: he tried at first to negotiate for the Nez Perce to stay in their homeland, yet had nothing but contempt for what he saw as the satanic dimensions of Native American religion. What is so terrible about him is that he had every appearance of being an upright man, even a sympathetic man in some ways during the war.

In his memoir Nez Perce Joseph, Howard tried to justify his actions in a way that followed the commonly-held and relentless logic of dispossession:

There are few Indians in America superior to the Nez Perces. Among them the contrast between heathen and Christian teaching is most marked. Even a little unselfish work, both by Catholic and Protestant teachers, has produced wonderful fruit, illustrated by those who remained on the reservation during the war, and kept the peace; while the unhappy effects of superstition and ignorance appear among the renegades and “non-treaties.” The results to these have been murder, loss of country, and almost extermination. (Brig. Gen. O. O. Howard, “Preface,” Nez Perce Joseph.)

The connection between this fascinating (and awfully frank) statement and the general drift of how Native Americans were loved to death by the Catholic missionaries in Vollmann’s novel Fathers and Crows (the Second of the Seven Dreams) should be pretty clear. How Vollmann handles the story will doubtless be unexpected, unpredictable, and brilliant, as usual. If I had to hazard a single speculative remark (never wise, so advance apologies), I would guess that the story won’t be one of Howard deliberately finding ways to fail to capture Joseph. It would diminish Joseph’s military accomplishments to put that idea forward, for one thing. In fact, Howard did fail – mainly because, with heavy equipment and logistical problems, he couldn’t really keep up in the terrain – and in the end Sherman dispatched Miles’ troops to catch Joseph before he slipped across the border into Canada. Joseph hoped, possibly mistakenly, that Canada would have offered him and his people asylum. After being captured, Joseph made the speech for which he is known to history: “I will fight no more forever…”

The generally-rentable and pretty solid PBS series The West (Episode Six), directed by Stephen Ives and produced by Ken Burns, and written by the perennial Burns collaborator and scholar Geoffrey C. Ward, contains a lot of interesting documentary material on the story.

Selected Minor Works: Kosovo Pole Revisited

Justin E. H. Smith

[For an extensive archive of Justin E. H. Smith’s writing, visit www.jehsmith.com]

In recent years, one of the sights that never fails to drive home to me the fact that I am back in Eastern Europe is that of hordes of travellers rushing to the grand machines in airport departure areas that, for a price, will wrap one’s luggage in multiple layers of clear, environmentally unfriendly plastic.  This is meant to serve as protection, though it must be hell to remove. 

With this image still vivid from a recent voyage, I was amused to read of Milosevic’s posthumous return to Belgrade that “[t]he coffin, wrapped in clear plastic and packing tape, was removed from the jet after the rest of the passengers’ baggage on a small yellow vehicle with a conveyor belt” (New York Times, “Milosevic’s Body Returned to Homeland for Burial,” March 15, 2006).  Finding this gem just before the funeral, I thought to myself: Replace the staid black suit and tie with a shiny track outfit for the ceremonial display, and pipe in some noxious turbofolk to pump up, with the help of a cheap techno beat, the narcissism of minor differences, and there will be no doubt but that in death the ex-Yugoslav dictator has been honored, if not with a state funeral, at least with all the decorations of the post-communist culture of tacky thuggery that Milosevic and his family so shiningly embody.

In 1998, I asked Warren Zimmerman, the recently discharged U.S. ambassador to Yugoslavia, whether the seemingly endless series of violent episodes involving Serbia and its neighbors could be attributed to “deep-seated, historical enmities.”  He rightly said no, and that indeed much of the Clinton administration’s fence-sitting was regrettably motivated by just such an idea.  Slobodan Milosevic often invoked the battle of Kosovo Pole against the Turks in 1389 to justify ongoing slaughter.  Clinton, in turn, emboldened by Robert D. Kaplan’s influential 1993 book, Balkan Ghosts, was happy to invoke similarly distant and semi-mythical events to justify the U.S. position that there’s no point in trying to stop those bloodthirsty Yugoslavs from having it out.

In the late 1990s, I got it into my head to go to Belgrade to interview Milosevic.  It never happened, and this past month I have definitively put my hope of following through to rest.  Back then, I was listening in preparation to instructional cassettes of what used to be called “Serbo-Croatian.”  They highlighted the names of foods, and for some reason laid particular emphasis on the fruits.  I learned for example that in Serbia a mango is called a “mango.”  Great.

I quickly realized that this would not help me to formulate probing questions about who stood to benefit from the privatization of previously state-controlled industries, about the chain of command between Belgrade and Bosnian Serb commandos, etc.  I redoubled my efforts and began to sit in on intensive language courses at Columbia.  In the end, the Yugoslav embassy in D.C. held onto my passport far too long.  By the time I got it back, having in the end been declined a visa, I was fairly proficient in Serbo-Croatian – I could now buy a mango while haltingly discussing geopolitics – and the NATO bombing campaign had, at long last, begun.

This campaign divided those of us who hate war, but who also hate the suffering wrought by nasty, opportunistic men propelled into power, whose “sovereignty” is then for some reason thought worthy of respect.  To the present day the NATO campaign in Yugoslavia seems to occupy a position halfway between the case of Rwanda, where staying out was a clear abdication of the international responsibility to protect the helpless, and that of Iraq, where humanitarian intervention between a tyrant and his subjects was neither a significant part of the justification for invasion nor, evidently, among the concerns of the invasion’s planners.

The Serbian media have for the most part been at least as reserved in their expression of affection for the deceased former leader as has the New York Times.  Vreme, Serbia’s own journal of record, assesses Milosevic’s reign as one of incalculable tragedy.  Curiously, it seems that Milosevic has received a warmer send-off from the Russian establishment press, but even there his legacy is presented in that dialogical form that often passes for objectivity: “Some say he was the butcher of the Balkans, but some say he was a Serbian national hero.”  We may speculate that this “balance” has something to do with Putin’s increasingly tight control of the media, and his concern for his own legacy as an increasingly iron-fisted ruler.  Russia has given amnesty to Milosevic’s wife and their cretinous son Marko, the one-time patron of Belgrade’s Madona discotheque, whose principal concern in life seems to be collecting sports cars and firearms, and who once announced to Yugoslavia’s Vatican ambassador that he would like to have plastic surgery on his ears, since, as he explained, “I can’t drive an expensive car, dress well, and be floppy-eared like cattle at the same time” (for a hilarious transcript of bugged conversations among the Milosevic clan, see: http://harpers.org/AllInTheFamily.html).

Those who believe that Milosevic could do no wrong appear to include young Marko, wife Mira, a few scattered seniors in Serbia and Russia whose pensions have been cut off, and Ramsey Clark.  All considered, the average age is quite high.  Notwithstanding the depiction widely circulated in the Russian press, of the former ruler as St. Slobodan in the style of an Orthodox icon, and notwithstanding the 50,000 nostalgic gawkers who turned out for the public funeral, it is not likely that the affectionate memory of him will survive for more than the few years most of his supporters have left.

Reading the placards held up by the elderly demonstrators outside the US embassy in Moscow a few weeks ago, one detected an odd persecution complex, as though Western nations had arbitrarily picked out the South and Eastern Slavic peoples for harassment.  This complex is particularly sharp among some Serbs, who sincerely believe that they are the last line of defense for Christian Europe against the invading Muslim hordes.  As I seem to recall one Serbian warlord saying in the mid-1990s, if it weren’t for the vigilant work of death squads like his, camels would be drinking from the banks of the Seine in no time.

The problem of course is that the Ottoman Empire no longer exists, and in any case the Kosovo Albanians and the Bosnian Muslims are not foreign invaders.  They are, to use the old, optimistic and all-inclusive language preferred by Marshal Tito, indigenous Yugoslavs, and from the point of view of, say, a Norwegian, they are at least as European as Arkan the warrior and Ceca his turbofolk-singing muse.  Though there is an enduring “Muslim question” in Europe, the landscape has changed somewhat since the original battle of Kosovo Pole, and Milosevic was indulging in nothing but an anachronistic medieval fantasy in making Yugoslav Muslims out as Turkish infidels.

But are the complaints of anti-Serbian bias justified?  To be sure, there is a prevailing sense in the Western media that Serbians are to be collectively punished for the crimes of the warlords and thugs Milosevic oversaw.  Thus in a blurb on the New York Times homepage we read that “The ex-Yugoslav leader’s supporters planned a Belgrade funeral that raised fears of Serbs using the ceremony to try to regain power.”  Serbians regaining  power in Serbia?  The very gall.  In the full article, “Serbs” is lengthened to “nationalist Serbs,” but the slip is telling.  Serbia continues to be vilified as a whole, and probably will be until more serious atonement is made by the Serbian political establishment, and until the deniers of the ethnic-cleansing campaigns are pushed even further to the fringe, where they may congregate harmlessly and irrelevantly, like the friends of David Irving.  It is a good thing that Milosevic was not honored with a state funeral, and if he and his family had been refused the right to return to Serbia now, the ceremonies would likely have only taken place in Russia and stoked the rancid rhetoric there about some pan-Slavic mystical  “brotherhood” which nonetheless excludes the Croats and Slovenes since they abandoned Orthodoxy, or the Cyrillic alphabet, or something.

The irony is that the appeals to ancient blood ties that provide nationalist movements with their fuel are but the flip side of the Clinton-style invocation of intractable ancient blood feuds, deployed to rationalize staying the isolationist course. Among national groups, there simply are no natural enemies or natural friends. Serbs and Kosovo Albanians are not like cats and dogs. The myth that they are, or that they became so in some transformative event on a 14th-century battlefield and are forever condemned to live out the fates that were there secured, has tremendous propaganda value in rallying the troops for current purposes, and this is something that Milosevic well understood.

And this brings us to Iraq, where, in the transition from “terrorist insurrection” to “civil war,” the Americans are increasingly feeling not besieged but excluded from the action. Whatever the arguments for withdrawal, and there are many excellent ones, let us not lapse into the Orientalist and vaguely racist fantasy that, whereas we in the enlightened world work out our differences through rational communication, in those parts there’s nothing to be done but to let the Shiites and Sunnis fight it out amongst themselves. Such reasoning always mistakes the local and short-term for the eternal and fixed. It’s not in their blood. It’s in their predicament.

The Ulcer Giver: Helicobacter pylori

By Dr. Shiban Ganju

Shiban is the chairman of a biotechnology company in India and a practicing gastroenterologist in the USA. He travels between these two spaces frequently but lives in them simultaneously. He has been a passionate theater worker, reluctant army officer, ambitious entrepreneur, successful CEO, and an active NGO volunteer. Still, he does not know what he wants to be when he grows up; but he wants his epitaph to be “He tried.”

A diminutive microbe, Helicobacter pylori (HP) emerged from obscurity over twenty years ago and squirmed its way into fame and stardom! Since its stomach-damaging felony was discovered, it has been accused of causing injury to other precious organs like the heart and colon. The scientist sleuths are collecting evidence to indict it; the verdict is not yet in, but it is likely that HP will be found guilty on some counts and exonerated of others.

This minuscule (3 micrometers long), corkscrew-like microbe eluded scientists with diversionary tactics worthy of a hardened felon. HP laid a false trail of hyperacidity as the cause of ulcer disease, and scientists spent decades unraveling the mystery of acid production.

The dogma in ulcer disease stated: stress increases hydrochloric acid production, which in turn erodes the duodenal or gastric (stomach) lining, causing an ulcer crater.

Investigators found excessive acid and pepsin production in the stomachs of patients with ulcer disease. Other associated culprits — cigarettes, and anti-inflammatory drugs like aspirin and ibuprofen — shared the blame.

The natural consequence was a multimillion-dollar business of acid-neutralizing and acid-suppressing drugs. Shelf loads of antacids and histamine-2 receptor blockers like cimetidine (Tagamet) became the standard therapy. Later, proton pump inhibitors like omeprazole (Prilosec) and its variants entered the fray to abolish gastric acid.

When medical therapy failed, surgeons wielded their knives, especially for those patients with complications of bleeding, obstructed stomach outlets, and indolent ulcers. Surgery involved cutting away part of the ulcerated stomach or duodenum and reconnecting the stomach to the jejunum. The prominent surgeon Billroth attained immortality by giving his name to one such procedure, only to announce a newer, improved version later, which he named Billroth II.

Other surgeons introduced cutting of the vagus nerve to abolish the stimulus for acid production. But this led to decreased motility of the stomach and stagnation of food, so still other surgeons offered a remedy by enlarging the gastric outlet into the duodenum (pyloroplasty).

So the dogma went on. Books, papers, and seminars were devoted to discussing the virtues of one procedure and the vices of another. Newer acid suppressants proliferated, and a few generations of gastric surgeons thrived. Meanwhile, some patients improved while others suffered more.

The beginning of the end of this mindset came with the discovery in 1983, by Barry Marshall and Robin Warren of Perth, Australia, that the cause of gastritis and duodenal ulcer is this corkscrew-shaped bacterium, Helicobacter pylori (initially named Campylobacter pyloridis). Though the bacterium had been spotted in the stomach lining by investigators as far back as 1875, it was Marshall and Warren who cultured it and found it in over 90 percent of duodenal ulcers. Marshall further nailed the etiology by satisfying Koch’s postulates. Koch, a renowned scientist, had suggested earlier that in order to validate an infectious etiology of a disease, the following criteria had to be met:

  1. The organism is always associated with disease.
  2. The organism will cause disease in a healthy subject.
  3. Eradication of the organism will cure the disease.
  4. Re-challenge with the organism will cause the disease again.

Barry Marshall swallowed a Petri dish culture of H. pylori and suffered severe gastritis; he recovered when the bacteria were eradicated, and he did not re-challenge.

He thus satisfied three of the four postulates. After the initial skepticism that befits any challenge to dogma, workers from all over the world replicated these findings. Suddenly ulcers of the stomach and duodenum were being cured by simple antibiotic therapy for two weeks. Drs. Marshall and Warren won the Nobel Prize in 2005.

HP turns out to be more interesting than a mere ulcer-causing nuisance. It has four to six flagella at one end, with which it penetrates the mucous layer and approaches the gastric wall. The bacterium produces many enzymes, including urease, which breaks down urea into ammonia and bicarbonate; these neutralize the surrounding acid, creating a cocoon of neutral pH around the bacterium. With glue-like surface adhesins, HP clings to the gastric cells. Its secreted enzymes provoke the gastric G cells and D cells, which enhance hydrochloric acid and pepsin production. An inflammatory response ensues, and the lining succumbs to the onslaught of abrasive acid and inflammation. The surface breaks down and forms an ulcer. (Remember how research had shown increased acid production in ulcer patients: the cause was the bug, not stress!)
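For readers who like the chemistry spelled out, the urease trick can be sketched roughly as follows (a simplification of the net reaction, not a full account of the enzyme’s intermediate steps). Urease hydrolyzes urea, and the liberated ammonia then mops up gastric protons:

CO(NH2)2 + H2O → 2 NH3 + CO2
NH3 + H+ → NH4+

The carbon dioxide, hydrated to carbonic acid and bicarbonate, adds a buffer on top; the net effect is the locally neutral pH that lets the bug survive where almost nothing else can.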

Investigators have shown that HP is present six times more often in stomachs with gastric cancer and mucosa-associated lymphoid tissue (MALT) lymphomas than in normal stomachs. Eradication of the infection with antibiotics clears the lymphoid tumors. (Here is a stunning example of antibiotics curing cancer!)

HP lives preferentially in the lower part of the stomach and passes out in the feces. Person-to-person transmission, therefore, is presumed to be fecal-oral. Over 50 percent of adults in the developed world carry this bug; the prevalence is higher in the developing countries, and it increases with age.

The microbe is transmitted within the family and travels with the family; this attribute has been used to study recent migrations of human populations. The following example illustrates the point: the Ladakh region occupies the northern tip of India, bordering Tibet on the east and Kashmir on the west. The population of this region descends from Tibetan and Indo-Iranian stock. While genetically the two populations do not differ, the genomics of the H. pylori in their stomachs betray their migrations from their respective ancestral lands of Tibet and northwest India.

HP has reminded us yet again: 1. Microbes rule. 2. “Scientific” dogma can stupefy the mind. 3. Dogma may even harm the very patients who are supposed to benefit from such knowledge.

What is the future of this bacterium? All bad things must come to an end! A mathematical model from Stanford suggests that H. pylori will be extinct in one hundred years, at least in the USA. Its fifteen minutes of fame will be over.
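To get a feel for how such a projection works (these are my toy numbers, not the Stanford model’s), suppose prevalence declines by a fixed fraction each decade as better sanitation and incidental antibiotic use interrupt transmission:

p(t) = p0 × r^(t/10)

With, say, p0 = 50 percent of adults infected and r = 0.75 (a quarter of the remaining infection lost per decade), a century gives p(100) = 0.5 × 0.75^10, or about 3 percent: a bug well on its way to oblivion, if not yet technically extinct.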

Helicobacter pylori, the diminutive flagellate, dispeller of dogma, generator of insight into cancer, and tracer of human dislocations, gives me “ulcers” when, as a physician, I encounter patients with surgically mutilated stomachs from a bygone era. I shudder to think that the current “state of the art” in medical practice will be found similarly inadequate in the future.

I pray we do no harm in the meantime.

Monday, March 20, 2006

Lunar Refractions: A Wife is Better than a Dog Anyhow

Yes, you’ve read correctly. This won’t be appearing on the front page of the Times, or even amid the increasingly unfortunate and obviously marketing-driven Newsweek covers, though it would likely turn more heads than that recent headline about sex and the single baby-boomer. You’d probably only expect to see it in the “Shouts and Murmurs” column of the New Yorker, where you might safely dismiss it as mere jest. Then again, I’m sure many of my dear readers have had similar, or indeed contrary, thoughts of their own. Yet this reflection was noted by one of the world’s most esteemed scientists, back in July of 1838. While I don’t think that Charles Darwin intended this statement as an evolutionary judgment, it is certainly the point that most stuck with me after looking at a rich collection of his musings.

Upon visiting the American Museum of Natural History’s current exhibition on Darwin last week, I found one piece—nay, hypothesis—by far the most interesting. The show is filled with skeletons; pinned-down, and long-dead, beetles; some unenviable live specimens of species he worked with, displayed at deathlike rest in glass menageries; the requisite, and dare I say relatively passive, interactive computer screen displays; resplendent orchids; manuscripts; and facsimiles of his doodled diagrams. I came across his idea that a wife is “better than a dog anyhow” while reading through his methodical listing of pros and cons regarding the esteemed institution of marriage. This curious sentiment was set quite literally between the lines, with a caret indicating he’d added it afterwards between two other items. The list, neatly folded down the middle and not-so-neatly scrawled in pencil on paper, read as follows, with the heading centered on the page, “Marry” on the left, and “Not Marry” on the right:

This is the Question

Marry

Children (if it Please God)
Constant companion (and friend in old age) who will feel interested in one
Object to be beloved and played with. Better than a dog anyhow
Home, & someone to take care of house
Charms of music and female chit-chat
These things good for one’s health—but terrible loss of time
My God, it is intolerable to think of spending one’s whole life, like a neuter bee, working, working, and nothing after all—No, no, won’t do
Imagine living all one’s day solitary in smoky dirty London House
Only picture to yourself a nice soft wife on a sofa with good fire and books and music perhaps
Compare this vision with the dingy reality of Great Marlboro Street, London

Not Marry

Freedom to go where one liked
Choice of Society and little of it
Conversation of clever men at clubs
Not forced to visit relatives and bend in every trifle
Expense and anxiety of children
Perhaps quarrelling
Loss of Time
Cannot read in the evenings
Fatness and idleness
Anxiety and responsibility
Less money for books etc.
If many children forced to gain one’s bread (But then it is very bad for one’s health to work too much)
Perhaps my wife won’t like London; then the sentence is banishment and degradation into indolent, idle fool

Marry, Marry, Marry Q.E.D.

Darwin was twenty-nine when he wrote this, and had been living, presumably in grand bachelor style, in London for almost two years. His five-year voyage on the HMS Beagle was done, and both his age and his status brought marriage to mind. Clearly he was torn by the same problem many of my friends (though I must say only the females actually talk about it) are now facing—namely, settle down with one partner and start a family, or pursue a career without such compromise. Of course others are facing the dilemma of perhaps passing up those pros in favor of the cons, after a few (or not so few) years of putting up with such “terrible loss of time.” I’ll not focus on salient, perhaps salacious, details like the fact that Darwin married his first cousin (what would reproductive rules governing gene diversification have to say about that?), and will instead discuss the reverberations his list has in our current society.

Darwin was set with a “generous living allowance” and a flourishing scientific career when he married Emma Wedgwood after a three-month engagement. She was more religiously devout than he, not having put her faith to the rigorous tests inspired by scientific skepticism that he had. Many differences separated the two, yet those were overcome by the presence of their children and that now most rare of traits, utter devotion and commitment. Clearly he got over most of the cons listed above soon after tying the knot with her. What most interests me, though, is the sentiment that inspired this list and some of Darwin’s letters, and how I see it recurring among my friends and acquaintances 168 years later.

In a letter to his fiancée written during their engagement in 1839, Darwin explicitly states his hopes and expectations: “I think you will humanize me, & soon teach me there is greater happiness than building theories, & accumulating facts in silence & solitude.” A dear friend of mine, who is an accomplished writer and journalist, has finally decided, after a marriage and two children, followed by an affair or two or three, that, were he to have a choice, working with facts in silence and solitude would rank higher than any sort of companionship. All of his experiences with women may at one point have humanized him, but they have either canceled each other out or simply proven a bit too much for someone who just wants a “choice of Society and little of it.”

Perhaps this character is similar in nature to the sort that would prompt another prominent journalist to publish a book entitled Are Men Necessary? While I’ve not yet gotten round to reading Maureen Dowd’s latest book, the many reviews and arguments against or in favor of men’s necessity or superfluity have been impossible to miss. A forty-six-year-old friend of mine has chosen to raise her now six-year-old daughter on her own. After becoming pregnant in the course of a brief affair, she decided that both she and her daughter could get along just fine without a man. I will be curious to see how this develops, especially when the girl hits her teens. Thus far I’ve noted some very interesting forces at work. While I was taking her on a walk to give my friend a little rest, she turned to me as we came to the local playground, just before letting go of my hand and running up to the swing set, and asked, “Alta, why don’t you have a little girl?” While offering up my rather vacuous reasons, it occurred to me that, in her eyes, it’s normal that every woman would have a little girl, and therefore strange that I wouldn’t. Just as she has a doll, and her mother has her, I should have a little girl. Her father is present, lives in a neighboring town, and sees her several times a week, but he’s by no means a key figure in her life. This is just one of several emerging models of family, models visible all over the animal world but seen as new, and by many as a threat, in contemporary human society.

Darwin shared a lot of his work with his wife; his father had advised him not to recount his religious doubts, noting that some women “suffered miserably” at the idea that their husbands weren’t destined for heaven after death. Though it doesn’t directly relate to the situation between Darwin and his wife, the increasingly “religious” politics of faith, devotion, commitment, and exclusion of unions that aren’t strictly male-female—and hence focused on the propagation of the species (though proponents of such politics seem to forget that this will occur with or without such lofty pretence, especially if abortion is no longer an option)—has become a major issue in the past few years. I don’t really feel like writing about all that, as it makes me rather ill. The idea that one must choose between companionship and career, and the view that the two are mutually exclusive, or at least call for serious compromise, is evident in Darwin’s list and in discussions I overhear on a daily basis; in his case, though, it proved insignificant in the end.

The generation of women who began their careers in the sixties and seventies, and whose stay-at-home mothers almost universally spoke of career only when speaking of their husbands’ work, forged new titles for themselves. It was common to hear one woman say of another that she was in college just to get her so-called MRS degree—something that did, and for many people still does, carry more weight than an MFA, MBA, MD, or PhD. That generation quickly came to learn that the academic and professional titles previously inaccessible to them would prove both more difficult and more worthwhile in the long run. The generation of women beginning their careers now, while it might have an inkling of what was and what is to come, cannot relate to this at all, at least not yet.

Partnership of whatever sort seems to bring balance, desired battle, and a reason for being to people who might otherwise be without. The idea of a “better half,” however, has always perturbed me. Perhaps this is only because of its judgmental nature. I recently read an article in which the author related a dialogue, and one of the voices was recorded as her “better half,” which I misinterpreted as the better part of her character. Only when I remembered the definition of “better half” as “spouse” did the article begin making sense (in a non-schizophrenic way). My grandmother would never have had such a misunderstanding.

While I think each item on Darwin’s scientifically rigorous list deserves greater attention—especially the priceless idea of a “nice soft wife on a sofa”—I will close with a nod to recent articles on one of my preferred poets. I recently reread Auden’s “In Sickness and in Health,” many years older and a few experiences richer than when I first read it, when I understood very little. This poem, written for a couple Auden knew, also came to mind as I contemplated Darwin’s list. Many lines acerbically reference marriage as an institution (cf. “Nature by nature in unnature ends”). I especially like the penultimate stanza: “That this round O of faithfulness we swear / May never wither to an empty naught / Nor petrify into a square, / Mere habits of affection freeze our thought / In their inert society, lest we / Mock virtue with its pious parody / And take our love for granted, Love, permit / Temptations always to endanger it.”